Test Report: KVM_Linux_crio 18429

ce47e36c27c610c668eed9e63157fcf5091ee2ba:2024-03-18:33630

Tests failed (31/271)

Order  Failed test  Duration (s)
39 TestAddons/parallel/Ingress 158.79
53 TestAddons/StoppedEnableDisable 154.33
96 TestFunctional/parallel/DashboardCmd 4.89
142 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 10.8
172 TestMultiControlPlane/serial/StopSecondaryNode 142.19
174 TestMultiControlPlane/serial/RestartSecondaryNode 53.09
176 TestMultiControlPlane/serial/RestartClusterKeepsNodes 387.88
179 TestMultiControlPlane/serial/StopCluster 142.29
239 TestMultiNode/serial/RestartKeepsNodes 306.03
241 TestMultiNode/serial/StopMultiNode 141.74
248 TestPreload 252.74
256 TestKubernetesUpgrade 432.09
291 TestPause/serial/SecondStartNoReconfiguration 168.65
293 TestStartStop/group/old-k8s-version/serial/FirstStart 272.33
302 TestStartStop/group/embed-certs/serial/Stop 139.13
305 TestStartStop/group/no-preload/serial/Stop 138.95
308 TestStartStop/group/default-k8s-diff-port/serial/Stop 138.98
309 TestStartStop/group/old-k8s-version/serial/DeployApp 0.51
310 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 109.58
311 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
313 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
317 TestStartStop/group/old-k8s-version/serial/SecondStart 765.8
318 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.42
320 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.26
321 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.33
322 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.31
323 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.46
324 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 319.3
325 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 391.73
326 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 343.49
327 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 105.08
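
For reference, a minimal sketch of how one of these failures might be re-run in isolation, assuming the minikube repository layout (integration tests under test/integration), a pre-built minikube binary, and an arbitrarily chosen timeout; the exact invocation used by this CI job is not recorded in this report:

	# re-run a single failed test by its subtest path (name taken from the table above)
	go test ./test/integration -v -timeout 60m -run "TestAddons/parallel/Ingress"
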
TestAddons/parallel/Ingress (158.79s)
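
The failing step in the log below is the in-VM curl against the ingress endpoint, which gives up after roughly 2m8s with status 28 (consistent with curl's operation-timed-out exit code). A minimal sketch of how the same check might be reproduced by hand, assuming the addons-015389 profile is still running; the explicit 30-second curl timeout is an added assumption, not part of the test:

	# repeat the failing request with an explicit curl timeout
	out/minikube-linux-amd64 -p addons-015389 ssh "curl -s -m 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# if it still times out, inspect the ingress controller and the ingress object
	kubectl --context addons-015389 -n ingress-nginx get pods -o wide
	kubectl --context addons-015389 get ingress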

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-015389 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-015389 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-015389 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [0fbd4ea0-863b-41bc-b038-eb32bc6f8df0] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [0fbd4ea0-863b-41bc-b038-eb32bc6f8df0] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 17.003897892s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-015389 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-015389 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m8.647961867s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-015389 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-015389 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.94
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-015389 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-015389 addons disable ingress-dns --alsologtostderr -v=1: (1.74769538s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-015389 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-015389 addons disable ingress --alsologtostderr -v=1: (8.061462653s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-015389 -n addons-015389
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-015389 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-015389 logs -n 25: (1.427021663s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-508209                                                                     | download-only-508209 | jenkins | v1.32.0 | 18 Mar 24 12:16 UTC | 18 Mar 24 12:16 UTC |
	| delete  | -p download-only-059065                                                                     | download-only-059065 | jenkins | v1.32.0 | 18 Mar 24 12:16 UTC | 18 Mar 24 12:16 UTC |
	| delete  | -p download-only-222661                                                                     | download-only-222661 | jenkins | v1.32.0 | 18 Mar 24 12:16 UTC | 18 Mar 24 12:16 UTC |
	| delete  | -p download-only-508209                                                                     | download-only-508209 | jenkins | v1.32.0 | 18 Mar 24 12:16 UTC | 18 Mar 24 12:16 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-012709 | jenkins | v1.32.0 | 18 Mar 24 12:16 UTC |                     |
	|         | binary-mirror-012709                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:35873                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-012709                                                                     | binary-mirror-012709 | jenkins | v1.32.0 | 18 Mar 24 12:16 UTC | 18 Mar 24 12:16 UTC |
	| addons  | disable dashboard -p                                                                        | addons-015389        | jenkins | v1.32.0 | 18 Mar 24 12:16 UTC |                     |
	|         | addons-015389                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-015389        | jenkins | v1.32.0 | 18 Mar 24 12:16 UTC |                     |
	|         | addons-015389                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-015389 --wait=true                                                                | addons-015389        | jenkins | v1.32.0 | 18 Mar 24 12:16 UTC | 18 Mar 24 12:19 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-015389 addons                                                                        | addons-015389        | jenkins | v1.32.0 | 18 Mar 24 12:19 UTC | 18 Mar 24 12:19 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-015389        | jenkins | v1.32.0 | 18 Mar 24 12:19 UTC | 18 Mar 24 12:19 UTC |
	|         | -p addons-015389                                                                            |                      |         |         |                     |                     |
	| ip      | addons-015389 ip                                                                            | addons-015389        | jenkins | v1.32.0 | 18 Mar 24 12:19 UTC | 18 Mar 24 12:19 UTC |
	| addons  | addons-015389 addons disable                                                                | addons-015389        | jenkins | v1.32.0 | 18 Mar 24 12:19 UTC | 18 Mar 24 12:19 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-015389        | jenkins | v1.32.0 | 18 Mar 24 12:19 UTC | 18 Mar 24 12:19 UTC |
	|         | addons-015389                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-015389        | jenkins | v1.32.0 | 18 Mar 24 12:19 UTC | 18 Mar 24 12:19 UTC |
	|         | -p addons-015389                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-015389 ssh cat                                                                       | addons-015389        | jenkins | v1.32.0 | 18 Mar 24 12:19 UTC | 18 Mar 24 12:19 UTC |
	|         | /opt/local-path-provisioner/pvc-97b8889f-377d-4dcd-aaa5-16575540db1e_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-015389 addons disable                                                                | addons-015389        | jenkins | v1.32.0 | 18 Mar 24 12:19 UTC | 18 Mar 24 12:20 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-015389 addons disable                                                                | addons-015389        | jenkins | v1.32.0 | 18 Mar 24 12:20 UTC | 18 Mar 24 12:20 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-015389        | jenkins | v1.32.0 | 18 Mar 24 12:20 UTC | 18 Mar 24 12:20 UTC |
	|         | addons-015389                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-015389 ssh curl -s                                                                   | addons-015389        | jenkins | v1.32.0 | 18 Mar 24 12:20 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-015389 addons                                                                        | addons-015389        | jenkins | v1.32.0 | 18 Mar 24 12:20 UTC | 18 Mar 24 12:20 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-015389 addons                                                                        | addons-015389        | jenkins | v1.32.0 | 18 Mar 24 12:20 UTC | 18 Mar 24 12:20 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-015389 ip                                                                            | addons-015389        | jenkins | v1.32.0 | 18 Mar 24 12:22 UTC | 18 Mar 24 12:22 UTC |
	| addons  | addons-015389 addons disable                                                                | addons-015389        | jenkins | v1.32.0 | 18 Mar 24 12:22 UTC | 18 Mar 24 12:22 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-015389 addons disable                                                                | addons-015389        | jenkins | v1.32.0 | 18 Mar 24 12:22 UTC | 18 Mar 24 12:22 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 12:16:54
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 12:16:54.732043 1115022 out.go:291] Setting OutFile to fd 1 ...
	I0318 12:16:54.732161 1115022 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:16:54.732174 1115022 out.go:304] Setting ErrFile to fd 2...
	I0318 12:16:54.732181 1115022 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:16:54.732426 1115022 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
	I0318 12:16:54.733144 1115022 out.go:298] Setting JSON to false
	I0318 12:16:54.734224 1115022 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":14362,"bootTime":1710749853,"procs":300,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 12:16:54.734297 1115022 start.go:139] virtualization: kvm guest
	I0318 12:16:54.736409 1115022 out.go:177] * [addons-015389] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 12:16:54.738360 1115022 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 12:16:54.739648 1115022 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 12:16:54.738384 1115022 notify.go:220] Checking for updates...
	I0318 12:16:54.742286 1115022 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 12:16:54.743630 1115022 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18429-1106816/.minikube
	I0318 12:16:54.744895 1115022 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 12:16:54.746167 1115022 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 12:16:54.747510 1115022 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 12:16:54.778907 1115022 out.go:177] * Using the kvm2 driver based on user configuration
	I0318 12:16:54.780296 1115022 start.go:297] selected driver: kvm2
	I0318 12:16:54.780319 1115022 start.go:901] validating driver "kvm2" against <nil>
	I0318 12:16:54.780346 1115022 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 12:16:54.781025 1115022 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 12:16:54.781122 1115022 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18429-1106816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 12:16:54.795795 1115022 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 12:16:54.795850 1115022 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 12:16:54.796054 1115022 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 12:16:54.796122 1115022 cni.go:84] Creating CNI manager for ""
	I0318 12:16:54.796135 1115022 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 12:16:54.796143 1115022 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 12:16:54.796202 1115022 start.go:340] cluster config:
	{Name:addons-015389 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-015389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 12:16:54.796316 1115022 iso.go:125] acquiring lock: {Name:mke5f9989ad60de6f54f25c411af7da9f3932a4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 12:16:54.798029 1115022 out.go:177] * Starting "addons-015389" primary control-plane node in "addons-015389" cluster
	I0318 12:16:54.799068 1115022 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 12:16:54.799107 1115022 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0318 12:16:54.799122 1115022 cache.go:56] Caching tarball of preloaded images
	I0318 12:16:54.799192 1115022 preload.go:173] Found /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 12:16:54.799203 1115022 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0318 12:16:54.799495 1115022 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/config.json ...
	I0318 12:16:54.799515 1115022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/config.json: {Name:mk1bf82714506f9569168e7655e9066cf4e3d91d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:16:54.799649 1115022 start.go:360] acquireMachinesLock for addons-015389: {Name:mk0b1a2e71faf079d0c16c4e1393bdff17be3dfd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 12:16:54.799692 1115022 start.go:364] duration metric: took 29.573µs to acquireMachinesLock for "addons-015389"
	I0318 12:16:54.799712 1115022 start.go:93] Provisioning new machine with config: &{Name:addons-015389 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:addons-015389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 12:16:54.799766 1115022 start.go:125] createHost starting for "" (driver="kvm2")
	I0318 12:16:54.801367 1115022 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0318 12:16:54.801488 1115022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:16:54.801526 1115022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:16:54.815731 1115022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38281
	I0318 12:16:54.816283 1115022 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:16:54.816862 1115022 main.go:141] libmachine: Using API Version  1
	I0318 12:16:54.816886 1115022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:16:54.817298 1115022 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:16:54.817501 1115022 main.go:141] libmachine: (addons-015389) Calling .GetMachineName
	I0318 12:16:54.817641 1115022 main.go:141] libmachine: (addons-015389) Calling .DriverName
	I0318 12:16:54.817833 1115022 start.go:159] libmachine.API.Create for "addons-015389" (driver="kvm2")
	I0318 12:16:54.817870 1115022 client.go:168] LocalClient.Create starting
	I0318 12:16:54.817940 1115022 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem
	I0318 12:16:55.003400 1115022 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem
	I0318 12:16:55.231357 1115022 main.go:141] libmachine: Running pre-create checks...
	I0318 12:16:55.231384 1115022 main.go:141] libmachine: (addons-015389) Calling .PreCreateCheck
	I0318 12:16:55.231939 1115022 main.go:141] libmachine: (addons-015389) Calling .GetConfigRaw
	I0318 12:16:55.232418 1115022 main.go:141] libmachine: Creating machine...
	I0318 12:16:55.232435 1115022 main.go:141] libmachine: (addons-015389) Calling .Create
	I0318 12:16:55.232573 1115022 main.go:141] libmachine: (addons-015389) Creating KVM machine...
	I0318 12:16:55.233763 1115022 main.go:141] libmachine: (addons-015389) DBG | found existing default KVM network
	I0318 12:16:55.234488 1115022 main.go:141] libmachine: (addons-015389) DBG | I0318 12:16:55.234346 1115045 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0318 12:16:55.234525 1115022 main.go:141] libmachine: (addons-015389) DBG | created network xml: 
	I0318 12:16:55.234553 1115022 main.go:141] libmachine: (addons-015389) DBG | <network>
	I0318 12:16:55.234593 1115022 main.go:141] libmachine: (addons-015389) DBG |   <name>mk-addons-015389</name>
	I0318 12:16:55.234612 1115022 main.go:141] libmachine: (addons-015389) DBG |   <dns enable='no'/>
	I0318 12:16:55.234622 1115022 main.go:141] libmachine: (addons-015389) DBG |   
	I0318 12:16:55.234634 1115022 main.go:141] libmachine: (addons-015389) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0318 12:16:55.234645 1115022 main.go:141] libmachine: (addons-015389) DBG |     <dhcp>
	I0318 12:16:55.234656 1115022 main.go:141] libmachine: (addons-015389) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0318 12:16:55.234665 1115022 main.go:141] libmachine: (addons-015389) DBG |     </dhcp>
	I0318 12:16:55.234669 1115022 main.go:141] libmachine: (addons-015389) DBG |   </ip>
	I0318 12:16:55.234715 1115022 main.go:141] libmachine: (addons-015389) DBG |   
	I0318 12:16:55.234740 1115022 main.go:141] libmachine: (addons-015389) DBG | </network>
	I0318 12:16:55.234750 1115022 main.go:141] libmachine: (addons-015389) DBG | 
	I0318 12:16:55.239906 1115022 main.go:141] libmachine: (addons-015389) DBG | trying to create private KVM network mk-addons-015389 192.168.39.0/24...
	I0318 12:16:55.307186 1115022 main.go:141] libmachine: (addons-015389) DBG | private KVM network mk-addons-015389 192.168.39.0/24 created
	I0318 12:16:55.307232 1115022 main.go:141] libmachine: (addons-015389) DBG | I0318 12:16:55.307125 1115045 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18429-1106816/.minikube
	I0318 12:16:55.307249 1115022 main.go:141] libmachine: (addons-015389) Setting up store path in /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/addons-015389 ...
	I0318 12:16:55.307266 1115022 main.go:141] libmachine: (addons-015389) Building disk image from file:///home/jenkins/minikube-integration/18429-1106816/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso
	I0318 12:16:55.307339 1115022 main.go:141] libmachine: (addons-015389) Downloading /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18429-1106816/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0318 12:16:55.573737 1115022 main.go:141] libmachine: (addons-015389) DBG | I0318 12:16:55.573608 1115045 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/addons-015389/id_rsa...
	I0318 12:16:55.853337 1115022 main.go:141] libmachine: (addons-015389) DBG | I0318 12:16:55.853194 1115045 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/addons-015389/addons-015389.rawdisk...
	I0318 12:16:55.853373 1115022 main.go:141] libmachine: (addons-015389) DBG | Writing magic tar header
	I0318 12:16:55.853384 1115022 main.go:141] libmachine: (addons-015389) DBG | Writing SSH key tar header
	I0318 12:16:55.853395 1115022 main.go:141] libmachine: (addons-015389) DBG | I0318 12:16:55.853323 1115045 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/addons-015389 ...
	I0318 12:16:55.853484 1115022 main.go:141] libmachine: (addons-015389) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/addons-015389
	I0318 12:16:55.853505 1115022 main.go:141] libmachine: (addons-015389) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines
	I0318 12:16:55.853540 1115022 main.go:141] libmachine: (addons-015389) Setting executable bit set on /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/addons-015389 (perms=drwx------)
	I0318 12:16:55.853549 1115022 main.go:141] libmachine: (addons-015389) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18429-1106816/.minikube
	I0318 12:16:55.853559 1115022 main.go:141] libmachine: (addons-015389) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18429-1106816
	I0318 12:16:55.853568 1115022 main.go:141] libmachine: (addons-015389) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0318 12:16:55.853577 1115022 main.go:141] libmachine: (addons-015389) DBG | Checking permissions on dir: /home/jenkins
	I0318 12:16:55.853592 1115022 main.go:141] libmachine: (addons-015389) Setting executable bit set on /home/jenkins/minikube-integration/18429-1106816/.minikube/machines (perms=drwxr-xr-x)
	I0318 12:16:55.853603 1115022 main.go:141] libmachine: (addons-015389) DBG | Checking permissions on dir: /home
	I0318 12:16:55.853615 1115022 main.go:141] libmachine: (addons-015389) Setting executable bit set on /home/jenkins/minikube-integration/18429-1106816/.minikube (perms=drwxr-xr-x)
	I0318 12:16:55.853633 1115022 main.go:141] libmachine: (addons-015389) Setting executable bit set on /home/jenkins/minikube-integration/18429-1106816 (perms=drwxrwxr-x)
	I0318 12:16:55.853642 1115022 main.go:141] libmachine: (addons-015389) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0318 12:16:55.853649 1115022 main.go:141] libmachine: (addons-015389) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0318 12:16:55.853656 1115022 main.go:141] libmachine: (addons-015389) Creating domain...
	I0318 12:16:55.853668 1115022 main.go:141] libmachine: (addons-015389) DBG | Skipping /home - not owner
	I0318 12:16:55.854791 1115022 main.go:141] libmachine: (addons-015389) define libvirt domain using xml: 
	I0318 12:16:55.854825 1115022 main.go:141] libmachine: (addons-015389) <domain type='kvm'>
	I0318 12:16:55.854834 1115022 main.go:141] libmachine: (addons-015389)   <name>addons-015389</name>
	I0318 12:16:55.854839 1115022 main.go:141] libmachine: (addons-015389)   <memory unit='MiB'>4000</memory>
	I0318 12:16:55.854848 1115022 main.go:141] libmachine: (addons-015389)   <vcpu>2</vcpu>
	I0318 12:16:55.854863 1115022 main.go:141] libmachine: (addons-015389)   <features>
	I0318 12:16:55.854876 1115022 main.go:141] libmachine: (addons-015389)     <acpi/>
	I0318 12:16:55.854883 1115022 main.go:141] libmachine: (addons-015389)     <apic/>
	I0318 12:16:55.854891 1115022 main.go:141] libmachine: (addons-015389)     <pae/>
	I0318 12:16:55.854897 1115022 main.go:141] libmachine: (addons-015389)     
	I0318 12:16:55.854906 1115022 main.go:141] libmachine: (addons-015389)   </features>
	I0318 12:16:55.854911 1115022 main.go:141] libmachine: (addons-015389)   <cpu mode='host-passthrough'>
	I0318 12:16:55.854919 1115022 main.go:141] libmachine: (addons-015389)   
	I0318 12:16:55.854925 1115022 main.go:141] libmachine: (addons-015389)   </cpu>
	I0318 12:16:55.854934 1115022 main.go:141] libmachine: (addons-015389)   <os>
	I0318 12:16:55.854941 1115022 main.go:141] libmachine: (addons-015389)     <type>hvm</type>
	I0318 12:16:55.854953 1115022 main.go:141] libmachine: (addons-015389)     <boot dev='cdrom'/>
	I0318 12:16:55.854961 1115022 main.go:141] libmachine: (addons-015389)     <boot dev='hd'/>
	I0318 12:16:55.854974 1115022 main.go:141] libmachine: (addons-015389)     <bootmenu enable='no'/>
	I0318 12:16:55.854984 1115022 main.go:141] libmachine: (addons-015389)   </os>
	I0318 12:16:55.855002 1115022 main.go:141] libmachine: (addons-015389)   <devices>
	I0318 12:16:55.855016 1115022 main.go:141] libmachine: (addons-015389)     <disk type='file' device='cdrom'>
	I0318 12:16:55.855047 1115022 main.go:141] libmachine: (addons-015389)       <source file='/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/addons-015389/boot2docker.iso'/>
	I0318 12:16:55.855070 1115022 main.go:141] libmachine: (addons-015389)       <target dev='hdc' bus='scsi'/>
	I0318 12:16:55.855082 1115022 main.go:141] libmachine: (addons-015389)       <readonly/>
	I0318 12:16:55.855097 1115022 main.go:141] libmachine: (addons-015389)     </disk>
	I0318 12:16:55.855111 1115022 main.go:141] libmachine: (addons-015389)     <disk type='file' device='disk'>
	I0318 12:16:55.855122 1115022 main.go:141] libmachine: (addons-015389)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0318 12:16:55.855133 1115022 main.go:141] libmachine: (addons-015389)       <source file='/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/addons-015389/addons-015389.rawdisk'/>
	I0318 12:16:55.855138 1115022 main.go:141] libmachine: (addons-015389)       <target dev='hda' bus='virtio'/>
	I0318 12:16:55.855144 1115022 main.go:141] libmachine: (addons-015389)     </disk>
	I0318 12:16:55.855149 1115022 main.go:141] libmachine: (addons-015389)     <interface type='network'>
	I0318 12:16:55.855174 1115022 main.go:141] libmachine: (addons-015389)       <source network='mk-addons-015389'/>
	I0318 12:16:55.855190 1115022 main.go:141] libmachine: (addons-015389)       <model type='virtio'/>
	I0318 12:16:55.855202 1115022 main.go:141] libmachine: (addons-015389)     </interface>
	I0318 12:16:55.855213 1115022 main.go:141] libmachine: (addons-015389)     <interface type='network'>
	I0318 12:16:55.855222 1115022 main.go:141] libmachine: (addons-015389)       <source network='default'/>
	I0318 12:16:55.855229 1115022 main.go:141] libmachine: (addons-015389)       <model type='virtio'/>
	I0318 12:16:55.855234 1115022 main.go:141] libmachine: (addons-015389)     </interface>
	I0318 12:16:55.855241 1115022 main.go:141] libmachine: (addons-015389)     <serial type='pty'>
	I0318 12:16:55.855249 1115022 main.go:141] libmachine: (addons-015389)       <target port='0'/>
	I0318 12:16:55.855259 1115022 main.go:141] libmachine: (addons-015389)     </serial>
	I0318 12:16:55.855297 1115022 main.go:141] libmachine: (addons-015389)     <console type='pty'>
	I0318 12:16:55.855319 1115022 main.go:141] libmachine: (addons-015389)       <target type='serial' port='0'/>
	I0318 12:16:55.855326 1115022 main.go:141] libmachine: (addons-015389)     </console>
	I0318 12:16:55.855331 1115022 main.go:141] libmachine: (addons-015389)     <rng model='virtio'>
	I0318 12:16:55.855359 1115022 main.go:141] libmachine: (addons-015389)       <backend model='random'>/dev/random</backend>
	I0318 12:16:55.855383 1115022 main.go:141] libmachine: (addons-015389)     </rng>
	I0318 12:16:55.855393 1115022 main.go:141] libmachine: (addons-015389)     
	I0318 12:16:55.855399 1115022 main.go:141] libmachine: (addons-015389)     
	I0318 12:16:55.855411 1115022 main.go:141] libmachine: (addons-015389)   </devices>
	I0318 12:16:55.855419 1115022 main.go:141] libmachine: (addons-015389) </domain>
	I0318 12:16:55.855433 1115022 main.go:141] libmachine: (addons-015389) 
	I0318 12:16:55.860121 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined MAC address 52:54:00:ba:56:41 in network default
	I0318 12:16:55.860713 1115022 main.go:141] libmachine: (addons-015389) Ensuring networks are active...
	I0318 12:16:55.860736 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:16:55.861493 1115022 main.go:141] libmachine: (addons-015389) Ensuring network default is active
	I0318 12:16:55.861736 1115022 main.go:141] libmachine: (addons-015389) Ensuring network mk-addons-015389 is active
	I0318 12:16:55.862232 1115022 main.go:141] libmachine: (addons-015389) Getting domain xml...
	I0318 12:16:55.862906 1115022 main.go:141] libmachine: (addons-015389) Creating domain...
	I0318 12:16:57.044568 1115022 main.go:141] libmachine: (addons-015389) Waiting to get IP...
	I0318 12:16:57.045357 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:16:57.045736 1115022 main.go:141] libmachine: (addons-015389) DBG | unable to find current IP address of domain addons-015389 in network mk-addons-015389
	I0318 12:16:57.045782 1115022 main.go:141] libmachine: (addons-015389) DBG | I0318 12:16:57.045732 1115045 retry.go:31] will retry after 200.954418ms: waiting for machine to come up
	I0318 12:16:57.248067 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:16:57.248501 1115022 main.go:141] libmachine: (addons-015389) DBG | unable to find current IP address of domain addons-015389 in network mk-addons-015389
	I0318 12:16:57.248523 1115022 main.go:141] libmachine: (addons-015389) DBG | I0318 12:16:57.248478 1115045 retry.go:31] will retry after 359.974723ms: waiting for machine to come up
	I0318 12:16:57.610050 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:16:57.610625 1115022 main.go:141] libmachine: (addons-015389) DBG | unable to find current IP address of domain addons-015389 in network mk-addons-015389
	I0318 12:16:57.610663 1115022 main.go:141] libmachine: (addons-015389) DBG | I0318 12:16:57.610571 1115045 retry.go:31] will retry after 350.115125ms: waiting for machine to come up
	I0318 12:16:57.962078 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:16:57.962495 1115022 main.go:141] libmachine: (addons-015389) DBG | unable to find current IP address of domain addons-015389 in network mk-addons-015389
	I0318 12:16:57.962518 1115022 main.go:141] libmachine: (addons-015389) DBG | I0318 12:16:57.962474 1115045 retry.go:31] will retry after 371.237432ms: waiting for machine to come up
	I0318 12:16:58.334990 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:16:58.335507 1115022 main.go:141] libmachine: (addons-015389) DBG | unable to find current IP address of domain addons-015389 in network mk-addons-015389
	I0318 12:16:58.335543 1115022 main.go:141] libmachine: (addons-015389) DBG | I0318 12:16:58.335446 1115045 retry.go:31] will retry after 724.628299ms: waiting for machine to come up
	I0318 12:16:59.061320 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:16:59.061772 1115022 main.go:141] libmachine: (addons-015389) DBG | unable to find current IP address of domain addons-015389 in network mk-addons-015389
	I0318 12:16:59.061802 1115022 main.go:141] libmachine: (addons-015389) DBG | I0318 12:16:59.061722 1115045 retry.go:31] will retry after 709.45497ms: waiting for machine to come up
	I0318 12:16:59.772537 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:16:59.772992 1115022 main.go:141] libmachine: (addons-015389) DBG | unable to find current IP address of domain addons-015389 in network mk-addons-015389
	I0318 12:16:59.773020 1115022 main.go:141] libmachine: (addons-015389) DBG | I0318 12:16:59.772963 1115045 retry.go:31] will retry after 1.090124434s: waiting for machine to come up
	I0318 12:17:00.864601 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:00.865009 1115022 main.go:141] libmachine: (addons-015389) DBG | unable to find current IP address of domain addons-015389 in network mk-addons-015389
	I0318 12:17:00.865035 1115022 main.go:141] libmachine: (addons-015389) DBG | I0318 12:17:00.864960 1115045 retry.go:31] will retry after 1.472783543s: waiting for machine to come up
	I0318 12:17:02.339545 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:02.339983 1115022 main.go:141] libmachine: (addons-015389) DBG | unable to find current IP address of domain addons-015389 in network mk-addons-015389
	I0318 12:17:02.340015 1115022 main.go:141] libmachine: (addons-015389) DBG | I0318 12:17:02.339926 1115045 retry.go:31] will retry after 1.275664108s: waiting for machine to come up
	I0318 12:17:03.617455 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:03.617883 1115022 main.go:141] libmachine: (addons-015389) DBG | unable to find current IP address of domain addons-015389 in network mk-addons-015389
	I0318 12:17:03.617909 1115022 main.go:141] libmachine: (addons-015389) DBG | I0318 12:17:03.617833 1115045 retry.go:31] will retry after 1.506675804s: waiting for machine to come up
	I0318 12:17:05.126374 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:05.126751 1115022 main.go:141] libmachine: (addons-015389) DBG | unable to find current IP address of domain addons-015389 in network mk-addons-015389
	I0318 12:17:05.126779 1115022 main.go:141] libmachine: (addons-015389) DBG | I0318 12:17:05.126694 1115045 retry.go:31] will retry after 1.804010782s: waiting for machine to come up
	I0318 12:17:06.932985 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:06.933513 1115022 main.go:141] libmachine: (addons-015389) DBG | unable to find current IP address of domain addons-015389 in network mk-addons-015389
	I0318 12:17:06.933575 1115022 main.go:141] libmachine: (addons-015389) DBG | I0318 12:17:06.933493 1115045 retry.go:31] will retry after 3.459711001s: waiting for machine to come up
	I0318 12:17:10.394445 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:10.394884 1115022 main.go:141] libmachine: (addons-015389) DBG | unable to find current IP address of domain addons-015389 in network mk-addons-015389
	I0318 12:17:10.394910 1115022 main.go:141] libmachine: (addons-015389) DBG | I0318 12:17:10.394822 1115045 retry.go:31] will retry after 4.37852829s: waiting for machine to come up
	I0318 12:17:14.778401 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:14.778851 1115022 main.go:141] libmachine: (addons-015389) DBG | unable to find current IP address of domain addons-015389 in network mk-addons-015389
	I0318 12:17:14.778874 1115022 main.go:141] libmachine: (addons-015389) DBG | I0318 12:17:14.778795 1115045 retry.go:31] will retry after 5.174122585s: waiting for machine to come up
	I0318 12:17:19.954166 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:19.954669 1115022 main.go:141] libmachine: (addons-015389) Found IP for machine: 192.168.39.94
	I0318 12:17:19.954714 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has current primary IP address 192.168.39.94 and MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:19.954725 1115022 main.go:141] libmachine: (addons-015389) Reserving static IP address...
	I0318 12:17:19.955090 1115022 main.go:141] libmachine: (addons-015389) DBG | unable to find host DHCP lease matching {name: "addons-015389", mac: "52:54:00:d6:99:5d", ip: "192.168.39.94"} in network mk-addons-015389
	I0318 12:17:20.026320 1115022 main.go:141] libmachine: (addons-015389) DBG | Getting to WaitForSSH function...
	I0318 12:17:20.026373 1115022 main.go:141] libmachine: (addons-015389) Reserved static IP address: 192.168.39.94
	I0318 12:17:20.026388 1115022 main.go:141] libmachine: (addons-015389) Waiting for SSH to be available...
	I0318 12:17:20.028962 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:20.029319 1115022 main.go:141] libmachine: (addons-015389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:99:5d", ip: ""} in network mk-addons-015389: {Iface:virbr1 ExpiryTime:2024-03-18 13:17:11 +0000 UTC Type:0 Mac:52:54:00:d6:99:5d Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:minikube Clientid:01:52:54:00:d6:99:5d}
	I0318 12:17:20.029360 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined IP address 192.168.39.94 and MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:20.029477 1115022 main.go:141] libmachine: (addons-015389) DBG | Using SSH client type: external
	I0318 12:17:20.029523 1115022 main.go:141] libmachine: (addons-015389) DBG | Using SSH private key: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/addons-015389/id_rsa (-rw-------)
	I0318 12:17:20.029559 1115022 main.go:141] libmachine: (addons-015389) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.94 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/addons-015389/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 12:17:20.029576 1115022 main.go:141] libmachine: (addons-015389) DBG | About to run SSH command:
	I0318 12:17:20.029588 1115022 main.go:141] libmachine: (addons-015389) DBG | exit 0
	I0318 12:17:20.156291 1115022 main.go:141] libmachine: (addons-015389) DBG | SSH cmd err, output: <nil>: 
	I0318 12:17:20.156520 1115022 main.go:141] libmachine: (addons-015389) KVM machine creation complete!
	I0318 12:17:20.156941 1115022 main.go:141] libmachine: (addons-015389) Calling .GetConfigRaw
	I0318 12:17:20.157606 1115022 main.go:141] libmachine: (addons-015389) Calling .DriverName
	I0318 12:17:20.157840 1115022 main.go:141] libmachine: (addons-015389) Calling .DriverName
	I0318 12:17:20.157997 1115022 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0318 12:17:20.158014 1115022 main.go:141] libmachine: (addons-015389) Calling .GetState
	I0318 12:17:20.159220 1115022 main.go:141] libmachine: Detecting operating system of created instance...
	I0318 12:17:20.159233 1115022 main.go:141] libmachine: Waiting for SSH to be available...
	I0318 12:17:20.159239 1115022 main.go:141] libmachine: Getting to WaitForSSH function...
	I0318 12:17:20.159245 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHHostname
	I0318 12:17:20.161614 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:20.161964 1115022 main.go:141] libmachine: (addons-015389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:99:5d", ip: ""} in network mk-addons-015389: {Iface:virbr1 ExpiryTime:2024-03-18 13:17:11 +0000 UTC Type:0 Mac:52:54:00:d6:99:5d Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-015389 Clientid:01:52:54:00:d6:99:5d}
	I0318 12:17:20.161999 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined IP address 192.168.39.94 and MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:20.162159 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHPort
	I0318 12:17:20.162321 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHKeyPath
	I0318 12:17:20.162477 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHKeyPath
	I0318 12:17:20.162642 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHUsername
	I0318 12:17:20.162814 1115022 main.go:141] libmachine: Using SSH client type: native
	I0318 12:17:20.163058 1115022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0318 12:17:20.163075 1115022 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0318 12:17:20.267858 1115022 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 12:17:20.267885 1115022 main.go:141] libmachine: Detecting the provisioner...
	I0318 12:17:20.267896 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHHostname
	I0318 12:17:20.270777 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:20.271109 1115022 main.go:141] libmachine: (addons-015389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:99:5d", ip: ""} in network mk-addons-015389: {Iface:virbr1 ExpiryTime:2024-03-18 13:17:11 +0000 UTC Type:0 Mac:52:54:00:d6:99:5d Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-015389 Clientid:01:52:54:00:d6:99:5d}
	I0318 12:17:20.271151 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined IP address 192.168.39.94 and MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:20.271288 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHPort
	I0318 12:17:20.271505 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHKeyPath
	I0318 12:17:20.271680 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHKeyPath
	I0318 12:17:20.271851 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHUsername
	I0318 12:17:20.272054 1115022 main.go:141] libmachine: Using SSH client type: native
	I0318 12:17:20.272258 1115022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0318 12:17:20.272271 1115022 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0318 12:17:20.377239 1115022 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0318 12:17:20.377376 1115022 main.go:141] libmachine: found compatible host: buildroot
	I0318 12:17:20.377389 1115022 main.go:141] libmachine: Provisioning with buildroot...
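For reference, a minimal Go sketch (illustrative only, not minikube's actual detector) of picking a provisioner by parsing /etc/os-release, which is what the detection step above does when it reports ID=buildroot:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// readOSRelease parses KEY=value pairs from an os-release style file.
func readOSRelease(path string) (map[string]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	info := map[string]string{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || !strings.Contains(line, "=") {
			continue
		}
		kv := strings.SplitN(line, "=", 2)
		info[kv[0]] = strings.Trim(kv[1], `"`)
	}
	return info, sc.Err()
}

func main() {
	info, err := readOSRelease("/etc/os-release")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The run above reports ID=buildroot / VERSION_ID=2023.02.9 and then
	// selects the buildroot provisioner.
	fmt.Printf("detected %s %s\n", info["ID"], info["VERSION_ID"])
}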
	I0318 12:17:20.377397 1115022 main.go:141] libmachine: (addons-015389) Calling .GetMachineName
	I0318 12:17:20.377655 1115022 buildroot.go:166] provisioning hostname "addons-015389"
	I0318 12:17:20.377689 1115022 main.go:141] libmachine: (addons-015389) Calling .GetMachineName
	I0318 12:17:20.377920 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHHostname
	I0318 12:17:20.380617 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:20.380995 1115022 main.go:141] libmachine: (addons-015389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:99:5d", ip: ""} in network mk-addons-015389: {Iface:virbr1 ExpiryTime:2024-03-18 13:17:11 +0000 UTC Type:0 Mac:52:54:00:d6:99:5d Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-015389 Clientid:01:52:54:00:d6:99:5d}
	I0318 12:17:20.381037 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined IP address 192.168.39.94 and MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:20.381128 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHPort
	I0318 12:17:20.381314 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHKeyPath
	I0318 12:17:20.381471 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHKeyPath
	I0318 12:17:20.381585 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHUsername
	I0318 12:17:20.381735 1115022 main.go:141] libmachine: Using SSH client type: native
	I0318 12:17:20.381920 1115022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0318 12:17:20.381933 1115022 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-015389 && echo "addons-015389" | sudo tee /etc/hostname
	I0318 12:17:20.503993 1115022 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-015389
	
	I0318 12:17:20.504034 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHHostname
	I0318 12:17:20.506726 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:20.507059 1115022 main.go:141] libmachine: (addons-015389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:99:5d", ip: ""} in network mk-addons-015389: {Iface:virbr1 ExpiryTime:2024-03-18 13:17:11 +0000 UTC Type:0 Mac:52:54:00:d6:99:5d Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-015389 Clientid:01:52:54:00:d6:99:5d}
	I0318 12:17:20.507094 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined IP address 192.168.39.94 and MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:20.507283 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHPort
	I0318 12:17:20.507454 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHKeyPath
	I0318 12:17:20.507561 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHKeyPath
	I0318 12:17:20.507635 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHUsername
	I0318 12:17:20.507734 1115022 main.go:141] libmachine: Using SSH client type: native
	I0318 12:17:20.507907 1115022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0318 12:17:20.507924 1115022 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-015389' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-015389/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-015389' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 12:17:20.622989 1115022 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 12:17:20.623026 1115022 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18429-1106816/.minikube CaCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18429-1106816/.minikube}
	I0318 12:17:20.623064 1115022 buildroot.go:174] setting up certificates
	I0318 12:17:20.623077 1115022 provision.go:84] configureAuth start
	I0318 12:17:20.623090 1115022 main.go:141] libmachine: (addons-015389) Calling .GetMachineName
	I0318 12:17:20.623389 1115022 main.go:141] libmachine: (addons-015389) Calling .GetIP
	I0318 12:17:20.626035 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:20.626403 1115022 main.go:141] libmachine: (addons-015389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:99:5d", ip: ""} in network mk-addons-015389: {Iface:virbr1 ExpiryTime:2024-03-18 13:17:11 +0000 UTC Type:0 Mac:52:54:00:d6:99:5d Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-015389 Clientid:01:52:54:00:d6:99:5d}
	I0318 12:17:20.626456 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined IP address 192.168.39.94 and MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:20.626572 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHHostname
	I0318 12:17:20.628894 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:20.629220 1115022 main.go:141] libmachine: (addons-015389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:99:5d", ip: ""} in network mk-addons-015389: {Iface:virbr1 ExpiryTime:2024-03-18 13:17:11 +0000 UTC Type:0 Mac:52:54:00:d6:99:5d Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-015389 Clientid:01:52:54:00:d6:99:5d}
	I0318 12:17:20.629247 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined IP address 192.168.39.94 and MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:20.629342 1115022 provision.go:143] copyHostCerts
	I0318 12:17:20.629416 1115022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem (1078 bytes)
	I0318 12:17:20.629545 1115022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem (1123 bytes)
	I0318 12:17:20.629635 1115022 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem (1679 bytes)
	I0318 12:17:20.629697 1115022 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem org=jenkins.addons-015389 san=[127.0.0.1 192.168.39.94 addons-015389 localhost minikube]
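For reference, a minimal Go sketch (an assumption, not the provisioner's code) of building a server certificate template with the SANs listed above; it self-signs for brevity, whereas the run above signs with the minikube CA key:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-015389"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the log line above: hostnames plus loopback and VM IPs.
		DNSNames:    []string{"addons-015389", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.94")},
	}
	// Self-signed here for brevity; the real flow signs with the CA in certs/ca.pem.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("generated %d-byte DER server certificate\n", len(der))
}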
	I0318 12:17:20.732957 1115022 provision.go:177] copyRemoteCerts
	I0318 12:17:20.733030 1115022 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 12:17:20.733055 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHHostname
	I0318 12:17:20.735598 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:20.735921 1115022 main.go:141] libmachine: (addons-015389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:99:5d", ip: ""} in network mk-addons-015389: {Iface:virbr1 ExpiryTime:2024-03-18 13:17:11 +0000 UTC Type:0 Mac:52:54:00:d6:99:5d Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-015389 Clientid:01:52:54:00:d6:99:5d}
	I0318 12:17:20.735968 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined IP address 192.168.39.94 and MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:20.736101 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHPort
	I0318 12:17:20.736314 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHKeyPath
	I0318 12:17:20.736509 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHUsername
	I0318 12:17:20.736679 1115022 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/addons-015389/id_rsa Username:docker}
	I0318 12:17:20.819710 1115022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 12:17:20.846627 1115022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0318 12:17:20.872943 1115022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 12:17:20.898585 1115022 provision.go:87] duration metric: took 275.493533ms to configureAuth
	I0318 12:17:20.898616 1115022 buildroot.go:189] setting minikube options for container-runtime
	I0318 12:17:20.898857 1115022 config.go:182] Loaded profile config "addons-015389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:17:20.898995 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHHostname
	I0318 12:17:20.901562 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:20.901909 1115022 main.go:141] libmachine: (addons-015389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:99:5d", ip: ""} in network mk-addons-015389: {Iface:virbr1 ExpiryTime:2024-03-18 13:17:11 +0000 UTC Type:0 Mac:52:54:00:d6:99:5d Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-015389 Clientid:01:52:54:00:d6:99:5d}
	I0318 12:17:20.901932 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined IP address 192.168.39.94 and MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:20.902121 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHPort
	I0318 12:17:20.902315 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHKeyPath
	I0318 12:17:20.902481 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHKeyPath
	I0318 12:17:20.902591 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHUsername
	I0318 12:17:20.902740 1115022 main.go:141] libmachine: Using SSH client type: native
	I0318 12:17:20.902925 1115022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0318 12:17:20.902941 1115022 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 12:17:21.176537 1115022 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 12:17:21.176568 1115022 main.go:141] libmachine: Checking connection to Docker...
	I0318 12:17:21.176579 1115022 main.go:141] libmachine: (addons-015389) Calling .GetURL
	I0318 12:17:21.177831 1115022 main.go:141] libmachine: (addons-015389) DBG | Using libvirt version 6000000
	I0318 12:17:21.179920 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:21.180247 1115022 main.go:141] libmachine: (addons-015389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:99:5d", ip: ""} in network mk-addons-015389: {Iface:virbr1 ExpiryTime:2024-03-18 13:17:11 +0000 UTC Type:0 Mac:52:54:00:d6:99:5d Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-015389 Clientid:01:52:54:00:d6:99:5d}
	I0318 12:17:21.180275 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined IP address 192.168.39.94 and MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:21.180494 1115022 main.go:141] libmachine: Docker is up and running!
	I0318 12:17:21.180521 1115022 main.go:141] libmachine: Reticulating splines...
	I0318 12:17:21.180531 1115022 client.go:171] duration metric: took 26.362648892s to LocalClient.Create
	I0318 12:17:21.180567 1115022 start.go:167] duration metric: took 26.362735552s to libmachine.API.Create "addons-015389"
	I0318 12:17:21.180591 1115022 start.go:293] postStartSetup for "addons-015389" (driver="kvm2")
	I0318 12:17:21.180609 1115022 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 12:17:21.180631 1115022 main.go:141] libmachine: (addons-015389) Calling .DriverName
	I0318 12:17:21.180859 1115022 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 12:17:21.180889 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHHostname
	I0318 12:17:21.182716 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:21.183026 1115022 main.go:141] libmachine: (addons-015389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:99:5d", ip: ""} in network mk-addons-015389: {Iface:virbr1 ExpiryTime:2024-03-18 13:17:11 +0000 UTC Type:0 Mac:52:54:00:d6:99:5d Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-015389 Clientid:01:52:54:00:d6:99:5d}
	I0318 12:17:21.183052 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined IP address 192.168.39.94 and MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:21.183199 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHPort
	I0318 12:17:21.183400 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHKeyPath
	I0318 12:17:21.183543 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHUsername
	I0318 12:17:21.183681 1115022 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/addons-015389/id_rsa Username:docker}
	I0318 12:17:21.268121 1115022 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 12:17:21.273003 1115022 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 12:17:21.273034 1115022 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/addons for local assets ...
	I0318 12:17:21.273142 1115022 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/files for local assets ...
	I0318 12:17:21.273169 1115022 start.go:296] duration metric: took 92.568023ms for postStartSetup
	I0318 12:17:21.273205 1115022 main.go:141] libmachine: (addons-015389) Calling .GetConfigRaw
	I0318 12:17:21.273803 1115022 main.go:141] libmachine: (addons-015389) Calling .GetIP
	I0318 12:17:21.276351 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:21.276663 1115022 main.go:141] libmachine: (addons-015389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:99:5d", ip: ""} in network mk-addons-015389: {Iface:virbr1 ExpiryTime:2024-03-18 13:17:11 +0000 UTC Type:0 Mac:52:54:00:d6:99:5d Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-015389 Clientid:01:52:54:00:d6:99:5d}
	I0318 12:17:21.276692 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined IP address 192.168.39.94 and MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:21.276926 1115022 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/config.json ...
	I0318 12:17:21.277093 1115022 start.go:128] duration metric: took 26.477311798s to createHost
	I0318 12:17:21.277114 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHHostname
	I0318 12:17:21.279350 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:21.279653 1115022 main.go:141] libmachine: (addons-015389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:99:5d", ip: ""} in network mk-addons-015389: {Iface:virbr1 ExpiryTime:2024-03-18 13:17:11 +0000 UTC Type:0 Mac:52:54:00:d6:99:5d Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-015389 Clientid:01:52:54:00:d6:99:5d}
	I0318 12:17:21.279678 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined IP address 192.168.39.94 and MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:21.279809 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHPort
	I0318 12:17:21.279971 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHKeyPath
	I0318 12:17:21.280112 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHKeyPath
	I0318 12:17:21.280235 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHUsername
	I0318 12:17:21.280389 1115022 main.go:141] libmachine: Using SSH client type: native
	I0318 12:17:21.280565 1115022 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.94 22 <nil> <nil>}
	I0318 12:17:21.280577 1115022 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 12:17:21.389549 1115022 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710764241.373495203
	
	I0318 12:17:21.389578 1115022 fix.go:216] guest clock: 1710764241.373495203
	I0318 12:17:21.389586 1115022 fix.go:229] Guest: 2024-03-18 12:17:21.373495203 +0000 UTC Remote: 2024-03-18 12:17:21.277103936 +0000 UTC m=+26.593786664 (delta=96.391267ms)
	I0318 12:17:21.389613 1115022 fix.go:200] guest clock delta is within tolerance: 96.391267ms
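For reference, a minimal Go sketch of the guest/host clock comparison logged above; the timestamps are taken from the log, and the 2-second tolerance is an assumed value, not necessarily the one minikube uses:

package main

import (
	"fmt"
	"time"
)

// withinTolerance returns the absolute guest/host clock delta and whether it
// falls inside the allowed skew.
func withinTolerance(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tol
}

func main() {
	guest := time.Unix(1710764241, 373495203) // parsed from `date +%s.%N` on the guest
	host := time.Date(2024, 3, 18, 12, 17, 21, 277103936, time.UTC)
	delta, ok := withinTolerance(guest, host, 2*time.Second)
	fmt.Printf("delta=%v within tolerance: %v\n", delta, ok) // prints 96.391267ms, as in the log
}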
	I0318 12:17:21.389621 1115022 start.go:83] releasing machines lock for "addons-015389", held for 26.589918536s
	I0318 12:17:21.389650 1115022 main.go:141] libmachine: (addons-015389) Calling .DriverName
	I0318 12:17:21.389942 1115022 main.go:141] libmachine: (addons-015389) Calling .GetIP
	I0318 12:17:21.392641 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:21.392970 1115022 main.go:141] libmachine: (addons-015389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:99:5d", ip: ""} in network mk-addons-015389: {Iface:virbr1 ExpiryTime:2024-03-18 13:17:11 +0000 UTC Type:0 Mac:52:54:00:d6:99:5d Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-015389 Clientid:01:52:54:00:d6:99:5d}
	I0318 12:17:21.392992 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined IP address 192.168.39.94 and MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:21.393169 1115022 main.go:141] libmachine: (addons-015389) Calling .DriverName
	I0318 12:17:21.393677 1115022 main.go:141] libmachine: (addons-015389) Calling .DriverName
	I0318 12:17:21.393906 1115022 main.go:141] libmachine: (addons-015389) Calling .DriverName
	I0318 12:17:21.394087 1115022 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 12:17:21.394154 1115022 ssh_runner.go:195] Run: cat /version.json
	I0318 12:17:21.394168 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHHostname
	I0318 12:17:21.394182 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHHostname
	I0318 12:17:21.396762 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:21.397109 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:21.397154 1115022 main.go:141] libmachine: (addons-015389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:99:5d", ip: ""} in network mk-addons-015389: {Iface:virbr1 ExpiryTime:2024-03-18 13:17:11 +0000 UTC Type:0 Mac:52:54:00:d6:99:5d Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-015389 Clientid:01:52:54:00:d6:99:5d}
	I0318 12:17:21.397177 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined IP address 192.168.39.94 and MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:21.397364 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHPort
	I0318 12:17:21.397587 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHKeyPath
	I0318 12:17:21.397628 1115022 main.go:141] libmachine: (addons-015389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:99:5d", ip: ""} in network mk-addons-015389: {Iface:virbr1 ExpiryTime:2024-03-18 13:17:11 +0000 UTC Type:0 Mac:52:54:00:d6:99:5d Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-015389 Clientid:01:52:54:00:d6:99:5d}
	I0318 12:17:21.397657 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined IP address 192.168.39.94 and MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:21.397760 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHPort
	I0318 12:17:21.397791 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHUsername
	I0318 12:17:21.397925 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHKeyPath
	I0318 12:17:21.397987 1115022 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/addons-015389/id_rsa Username:docker}
	I0318 12:17:21.398112 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHUsername
	I0318 12:17:21.398286 1115022 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/addons-015389/id_rsa Username:docker}
	I0318 12:17:21.508890 1115022 ssh_runner.go:195] Run: systemctl --version
	I0318 12:17:21.514991 1115022 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 12:17:21.679265 1115022 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 12:17:21.685925 1115022 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 12:17:21.685991 1115022 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 12:17:21.703305 1115022 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 12:17:21.703330 1115022 start.go:494] detecting cgroup driver to use...
	I0318 12:17:21.703420 1115022 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 12:17:21.720874 1115022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 12:17:21.735737 1115022 docker.go:217] disabling cri-docker service (if available) ...
	I0318 12:17:21.735793 1115022 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 12:17:21.750810 1115022 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 12:17:21.765785 1115022 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 12:17:21.880644 1115022 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 12:17:22.019064 1115022 docker.go:233] disabling docker service ...
	I0318 12:17:22.019155 1115022 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 12:17:22.035529 1115022 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 12:17:22.049892 1115022 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 12:17:22.186277 1115022 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 12:17:22.300971 1115022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 12:17:22.317623 1115022 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 12:17:22.338074 1115022 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 12:17:22.338153 1115022 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 12:17:22.349408 1115022 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 12:17:22.349485 1115022 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 12:17:22.360606 1115022 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 12:17:22.371586 1115022 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 12:17:22.382656 1115022 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
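For reference, a minimal Go sketch (an assumption, not how minikube applies it) of the same pause_image and cgroup_manager edits that the sed commands above make to 02-crio.conf over SSH, applied here to an in-memory copy:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// A stand-in excerpt of /etc/crio/crio.conf.d/02-crio.conf before the edits.
	conf := []byte("pause_image = \"registry.k8s.io/pause:3.6\"\ncgroup_manager = \"systemd\"\n")

	// Point cri-o at the pause image the log configures.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(conf, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))

	// Switch to the cgroupfs driver and re-add conmon_cgroup = "pod" after it,
	// mirroring the delete-then-append sed steps above.
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(conf, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))

	fmt.Print(string(conf))
}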
	I0318 12:17:22.393950 1115022 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 12:17:22.404039 1115022 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 12:17:22.404088 1115022 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 12:17:22.418608 1115022 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 12:17:22.429212 1115022 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:17:22.554131 1115022 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 12:17:22.711846 1115022 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 12:17:22.711953 1115022 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 12:17:22.717697 1115022 start.go:562] Will wait 60s for crictl version
	I0318 12:17:22.717759 1115022 ssh_runner.go:195] Run: which crictl
	I0318 12:17:22.722264 1115022 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 12:17:22.758812 1115022 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 12:17:22.758914 1115022 ssh_runner.go:195] Run: crio --version
	I0318 12:17:22.790147 1115022 ssh_runner.go:195] Run: crio --version
	I0318 12:17:22.823278 1115022 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 12:17:22.824608 1115022 main.go:141] libmachine: (addons-015389) Calling .GetIP
	I0318 12:17:22.827228 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:22.827583 1115022 main.go:141] libmachine: (addons-015389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:99:5d", ip: ""} in network mk-addons-015389: {Iface:virbr1 ExpiryTime:2024-03-18 13:17:11 +0000 UTC Type:0 Mac:52:54:00:d6:99:5d Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-015389 Clientid:01:52:54:00:d6:99:5d}
	I0318 12:17:22.827613 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined IP address 192.168.39.94 and MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:22.827773 1115022 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0318 12:17:22.832438 1115022 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 12:17:22.846285 1115022 kubeadm.go:877] updating cluster {Name:addons-015389 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-015389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 12:17:22.846430 1115022 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 12:17:22.846500 1115022 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 12:17:22.879677 1115022 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0318 12:17:22.879761 1115022 ssh_runner.go:195] Run: which lz4
	I0318 12:17:22.884072 1115022 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0318 12:17:22.888659 1115022 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 12:17:22.888694 1115022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0318 12:17:24.694807 1115022 crio.go:444] duration metric: took 1.810779362s to copy over tarball
	I0318 12:17:24.694912 1115022 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 12:17:27.580551 1115022 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.885597093s)
	I0318 12:17:27.580595 1115022 crio.go:451] duration metric: took 2.885754221s to extract the tarball
	I0318 12:17:27.580606 1115022 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 12:17:27.624146 1115022 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 12:17:27.673757 1115022 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 12:17:27.673790 1115022 cache_images.go:84] Images are preloaded, skipping loading
	I0318 12:17:27.673800 1115022 kubeadm.go:928] updating node { 192.168.39.94 8443 v1.28.4 crio true true} ...
	I0318 12:17:27.673934 1115022 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-015389 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.94
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-015389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 12:17:27.674014 1115022 ssh_runner.go:195] Run: crio config
	I0318 12:17:27.723833 1115022 cni.go:84] Creating CNI manager for ""
	I0318 12:17:27.723861 1115022 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 12:17:27.723875 1115022 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 12:17:27.723899 1115022 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.94 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-015389 NodeName:addons-015389 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.94"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.94 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 12:17:27.724060 1115022 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.94
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-015389"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.94
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.94"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 12:17:27.724146 1115022 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 12:17:27.735204 1115022 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 12:17:27.735288 1115022 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 12:17:27.745386 1115022 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0318 12:17:27.764174 1115022 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 12:17:27.782571 1115022 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0318 12:17:27.801250 1115022 ssh_runner.go:195] Run: grep 192.168.39.94	control-plane.minikube.internal$ /etc/hosts
	I0318 12:17:27.805856 1115022 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.94	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 12:17:27.820400 1115022 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:17:27.942192 1115022 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 12:17:27.961979 1115022 certs.go:68] Setting up /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389 for IP: 192.168.39.94
	I0318 12:17:27.962016 1115022 certs.go:194] generating shared ca certs ...
	I0318 12:17:27.962042 1115022 certs.go:226] acquiring lock for ca certs: {Name:mka24dbce78b8549c3bcfe7842863cce2dcda207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:17:27.962254 1115022 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key
	I0318 12:17:28.070248 1115022 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt ...
	I0318 12:17:28.070282 1115022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt: {Name:mk86cc31fe20071d3682ba7ac8e36bf4e8f3fb68 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:17:28.070484 1115022 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key ...
	I0318 12:17:28.070499 1115022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key: {Name:mk7d16421ac41d4d1e6dd5fd3a553cbe4596164f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:17:28.070602 1115022 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key
	I0318 12:17:28.307768 1115022 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt ...
	I0318 12:17:28.307810 1115022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt: {Name:mk50c288e3a33a778f58db963d7eaec006f02487 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:17:28.308019 1115022 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key ...
	I0318 12:17:28.308038 1115022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key: {Name:mkd3b7d4740a6f8c7cc4d9fc8dfef982809e6d07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:17:28.308139 1115022 certs.go:256] generating profile certs ...
	I0318 12:17:28.308217 1115022 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/client.key
	I0318 12:17:28.308235 1115022 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/client.crt with IP's: []
	I0318 12:17:28.425252 1115022 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/client.crt ...
	I0318 12:17:28.425285 1115022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/client.crt: {Name:mk6807de58fdf87a55e674b7434eba1241bbe356 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:17:28.425480 1115022 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/client.key ...
	I0318 12:17:28.425496 1115022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/client.key: {Name:mk9aa05001afae039d29734baaffb451d9b9947c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:17:28.425589 1115022 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/apiserver.key.6b784837
	I0318 12:17:28.425615 1115022 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/apiserver.crt.6b784837 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.94]
	I0318 12:17:28.723899 1115022 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/apiserver.crt.6b784837 ...
	I0318 12:17:28.723939 1115022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/apiserver.crt.6b784837: {Name:mke98feea95688007235fe0b1e9ee91a8a7d4181 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:17:28.724113 1115022 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/apiserver.key.6b784837 ...
	I0318 12:17:28.724129 1115022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/apiserver.key.6b784837: {Name:mkdc407f87a58356f8ecd5eb95912585ef13d0f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:17:28.724202 1115022 certs.go:381] copying /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/apiserver.crt.6b784837 -> /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/apiserver.crt
	I0318 12:17:28.724275 1115022 certs.go:385] copying /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/apiserver.key.6b784837 -> /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/apiserver.key
	I0318 12:17:28.724321 1115022 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/proxy-client.key
	I0318 12:17:28.724359 1115022 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/proxy-client.crt with IP's: []
	I0318 12:17:28.998142 1115022 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/proxy-client.crt ...
	I0318 12:17:28.998184 1115022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/proxy-client.crt: {Name:mkbb6bb068e37960e7d11e8f6d4087c836b8d937 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:17:28.998358 1115022 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/proxy-client.key ...
	I0318 12:17:28.998371 1115022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/proxy-client.key: {Name:mk6e9175561bbebafc5670f154398e56be852586 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:17:28.998611 1115022 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 12:17:28.998652 1115022 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem (1078 bytes)
	I0318 12:17:28.998679 1115022 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem (1123 bytes)
	I0318 12:17:28.998704 1115022 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem (1679 bytes)
	I0318 12:17:28.999393 1115022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 12:17:29.028438 1115022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 12:17:29.055315 1115022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 12:17:29.082852 1115022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 12:17:29.110780 1115022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0318 12:17:29.138116 1115022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 12:17:29.165158 1115022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 12:17:29.193521 1115022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 12:17:29.219945 1115022 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 12:17:29.247261 1115022 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 12:17:29.266024 1115022 ssh_runner.go:195] Run: openssl version
	I0318 12:17:29.272412 1115022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 12:17:29.283918 1115022 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:17:29.289169 1115022 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:17:29.289216 1115022 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:17:29.295748 1115022 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 12:17:29.307697 1115022 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 12:17:29.312359 1115022 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 12:17:29.312414 1115022 kubeadm.go:391] StartCluster: {Name:addons-015389 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-015389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.94 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 12:17:29.312497 1115022 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 12:17:29.312567 1115022 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 12:17:29.356980 1115022 cri.go:89] found id: ""
	I0318 12:17:29.357052 1115022 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0318 12:17:29.367787 1115022 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 12:17:29.377935 1115022 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 12:17:29.387832 1115022 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 12:17:29.387849 1115022 kubeadm.go:156] found existing configuration files:
	
	I0318 12:17:29.387898 1115022 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 12:17:29.397224 1115022 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 12:17:29.397291 1115022 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 12:17:29.407525 1115022 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 12:17:29.417167 1115022 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 12:17:29.417235 1115022 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 12:17:29.427280 1115022 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 12:17:29.436813 1115022 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 12:17:29.436881 1115022 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 12:17:29.446715 1115022 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 12:17:29.456182 1115022 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 12:17:29.456232 1115022 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
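For context, the grep/rm sequence above is a simple stale-config check: each kubeconfig under /etc/kubernetes is tested for the expected control-plane endpoint and removed when the check fails. A minimal shell sketch of that pattern, using the same paths and endpoint shown in the log (this is an illustration, not minikube's actual Go implementation, which drives these commands through ssh_runner):

    # Sketch only: drop kubeconfigs that do not reference the expected endpoint.
    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f" 2>/dev/null; then
        sudo rm -f "/etc/kubernetes/$f"   # missing or stale, as in the log above
      fi
    done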
	I0318 12:17:29.466225 1115022 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 12:17:29.518295 1115022 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 12:17:29.518458 1115022 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 12:17:29.663359 1115022 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 12:17:29.663520 1115022 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 12:17:29.663647 1115022 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 12:17:29.911731 1115022 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 12:17:29.971659 1115022 out.go:204]   - Generating certificates and keys ...
	I0318 12:17:29.971794 1115022 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 12:17:29.971884 1115022 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 12:17:30.146010 1115022 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0318 12:17:30.377615 1115022 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0318 12:17:30.506922 1115022 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0318 12:17:30.589258 1115022 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0318 12:17:30.757481 1115022 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0318 12:17:30.757623 1115022 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-015389 localhost] and IPs [192.168.39.94 127.0.0.1 ::1]
	I0318 12:17:30.873214 1115022 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0318 12:17:30.873559 1115022 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-015389 localhost] and IPs [192.168.39.94 127.0.0.1 ::1]
	I0318 12:17:30.982456 1115022 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0318 12:17:31.133140 1115022 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0318 12:17:31.298281 1115022 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0318 12:17:31.298498 1115022 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 12:17:31.459647 1115022 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 12:17:31.715054 1115022 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 12:17:31.813495 1115022 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 12:17:32.102996 1115022 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 12:17:32.103605 1115022 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 12:17:32.105846 1115022 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 12:17:32.107852 1115022 out.go:204]   - Booting up control plane ...
	I0318 12:17:32.107952 1115022 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 12:17:32.108073 1115022 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 12:17:32.108160 1115022 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 12:17:32.123559 1115022 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 12:17:32.124385 1115022 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 12:17:32.124441 1115022 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 12:17:32.249266 1115022 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 12:17:37.751620 1115022 kubeadm.go:309] [apiclient] All control plane components are healthy after 5.502830 seconds
	I0318 12:17:37.751767 1115022 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 12:17:37.764222 1115022 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 12:17:38.293141 1115022 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 12:17:38.293412 1115022 kubeadm.go:309] [mark-control-plane] Marking the node addons-015389 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 12:17:38.808609 1115022 kubeadm.go:309] [bootstrap-token] Using token: 6ziw26.8lbskvfwkh2076ab
	I0318 12:17:38.810162 1115022 out.go:204]   - Configuring RBAC rules ...
	I0318 12:17:38.810262 1115022 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 12:17:38.815344 1115022 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 12:17:38.824368 1115022 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 12:17:38.827933 1115022 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 12:17:38.833621 1115022 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 12:17:38.837189 1115022 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 12:17:38.852444 1115022 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 12:17:39.116550 1115022 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 12:17:39.222982 1115022 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 12:17:39.223677 1115022 kubeadm.go:309] 
	I0318 12:17:39.223762 1115022 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 12:17:39.223802 1115022 kubeadm.go:309] 
	I0318 12:17:39.223923 1115022 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 12:17:39.223936 1115022 kubeadm.go:309] 
	I0318 12:17:39.223974 1115022 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 12:17:39.224056 1115022 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 12:17:39.224131 1115022 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 12:17:39.224148 1115022 kubeadm.go:309] 
	I0318 12:17:39.224218 1115022 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 12:17:39.224231 1115022 kubeadm.go:309] 
	I0318 12:17:39.224307 1115022 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 12:17:39.224315 1115022 kubeadm.go:309] 
	I0318 12:17:39.224395 1115022 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 12:17:39.224467 1115022 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 12:17:39.224525 1115022 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 12:17:39.224532 1115022 kubeadm.go:309] 
	I0318 12:17:39.224613 1115022 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 12:17:39.224680 1115022 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 12:17:39.224687 1115022 kubeadm.go:309] 
	I0318 12:17:39.224754 1115022 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 6ziw26.8lbskvfwkh2076ab \
	I0318 12:17:39.224849 1115022 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a1344f0f3711f58fc1cd7b9626e9cee7b8515c38b8838e5f1a0d7979c7202ddf \
	I0318 12:17:39.224870 1115022 kubeadm.go:309] 	--control-plane 
	I0318 12:17:39.224874 1115022 kubeadm.go:309] 
	I0318 12:17:39.224987 1115022 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 12:17:39.225017 1115022 kubeadm.go:309] 
	I0318 12:17:39.225170 1115022 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 6ziw26.8lbskvfwkh2076ab \
	I0318 12:17:39.225335 1115022 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a1344f0f3711f58fc1cd7b9626e9cee7b8515c38b8838e5f1a0d7979c7202ddf 
	I0318 12:17:39.226035 1115022 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
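The only warning kubeadm reports here concerns the kubelet unit not being enabled. If one wanted to address it manually, the fix is the command the warning itself names, run inside the guest VM (shown here only as a hedged aside; the test run does not do this):

    # As the kubeadm warning above suggests (run inside the guest VM):
    sudo systemctl enable kubelet.service
    systemctl is-enabled kubelet   # optional: confirm the unit is now enabled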
	I0318 12:17:39.226059 1115022 cni.go:84] Creating CNI manager for ""
	I0318 12:17:39.226067 1115022 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 12:17:39.227420 1115022 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 12:17:39.228898 1115022 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 12:17:39.259191 1115022 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
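The 457-byte /etc/cni/net.d/1-k8s.conflist written above is not echoed in the log; if its contents are needed when triaging a networking failure, they can be read back from the node afterwards. A minimal sketch, assuming the same minikube binary path and profile name used elsewhere in this report:

    # Inspect the generated bridge CNI config from the host:
    out/minikube-linux-amd64 -p addons-015389 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist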
	I0318 12:17:39.318698 1115022 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 12:17:39.318806 1115022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-015389 minikube.k8s.io/updated_at=2024_03_18T12_17_39_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a minikube.k8s.io/name=addons-015389 minikube.k8s.io/primary=true
	I0318 12:17:39.318817 1115022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:17:39.545140 1115022 ops.go:34] apiserver oom_adj: -16
	I0318 12:17:39.545283 1115022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:17:40.046205 1115022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:17:40.545600 1115022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:17:41.046181 1115022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:17:41.545787 1115022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:17:42.045874 1115022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:17:42.545411 1115022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:17:43.045613 1115022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:17:43.546060 1115022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:17:44.045935 1115022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:17:44.545506 1115022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:17:45.045957 1115022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:17:45.545918 1115022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:17:46.045341 1115022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:17:46.545458 1115022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:17:47.046259 1115022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:17:47.546152 1115022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:17:48.046055 1115022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:17:48.545706 1115022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:17:49.045345 1115022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:17:49.545905 1115022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:17:50.046008 1115022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:17:50.546259 1115022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:17:51.046213 1115022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:17:51.546355 1115022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:17:52.046375 1115022 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:17:52.153720 1115022 kubeadm.go:1107] duration metric: took 12.834998243s to wait for elevateKubeSystemPrivileges
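The repeated "kubectl get sa default" calls above are a poll loop: the command is retried roughly every half second until the default ServiceAccount exists, which took about 12.8s in this run. A rough shell equivalent of that wait, assuming the binary path and kubeconfig shown in the log (a sketch of the pattern, not minikube's Go implementation):

    # Poll until the default ServiceAccount is created.
    KUBECTL=/var/lib/minikube/binaries/v1.28.4/kubectl
    until sudo "$KUBECTL" get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done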
	W0318 12:17:52.153763 1115022 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 12:17:52.153772 1115022 kubeadm.go:393] duration metric: took 22.841363603s to StartCluster
	I0318 12:17:52.153797 1115022 settings.go:142] acquiring lock: {Name:mk2d6b94ee5fa5f1dbbb15ba1d5560c3c0f78110 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:17:52.153954 1115022 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 12:17:52.154365 1115022 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/kubeconfig: {Name:mk9c139f2702214315ee08dd7c5d02f739047458 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:17:52.154597 1115022 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0318 12:17:52.154617 1115022 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.94 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 12:17:52.156438 1115022 out.go:177] * Verifying Kubernetes components...
	I0318 12:17:52.154689 1115022 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
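The toEnable map above lists which addons this test run will turn on. For reference, the same addons can also be toggled on an existing profile with the standard addons subcommand; a hedged example using this run's profile and binary path:

    # List and toggle addons on the addons-015389 profile from the host.
    out/minikube-linux-amd64 -p addons-015389 addons list
    out/minikube-linux-amd64 -p addons-015389 addons enable ingress
    out/minikube-linux-amd64 -p addons-015389 addons disable helm-tiller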
	I0318 12:17:52.154868 1115022 config.go:182] Loaded profile config "addons-015389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:17:52.157632 1115022 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:17:52.157683 1115022 addons.go:69] Setting yakd=true in profile "addons-015389"
	I0318 12:17:52.157728 1115022 addons.go:234] Setting addon yakd=true in "addons-015389"
	I0318 12:17:52.157743 1115022 addons.go:69] Setting inspektor-gadget=true in profile "addons-015389"
	I0318 12:17:52.157748 1115022 addons.go:69] Setting default-storageclass=true in profile "addons-015389"
	I0318 12:17:52.157756 1115022 addons.go:69] Setting helm-tiller=true in profile "addons-015389"
	I0318 12:17:52.157769 1115022 addons.go:234] Setting addon inspektor-gadget=true in "addons-015389"
	I0318 12:17:52.157774 1115022 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-015389"
	I0318 12:17:52.157779 1115022 host.go:66] Checking if "addons-015389" exists ...
	I0318 12:17:52.157782 1115022 addons.go:69] Setting ingress=true in profile "addons-015389"
	I0318 12:17:52.157802 1115022 addons.go:234] Setting addon ingress=true in "addons-015389"
	I0318 12:17:52.157812 1115022 host.go:66] Checking if "addons-015389" exists ...
	I0318 12:17:52.157837 1115022 host.go:66] Checking if "addons-015389" exists ...
	I0318 12:17:52.157846 1115022 addons.go:69] Setting cloud-spanner=true in profile "addons-015389"
	I0318 12:17:52.157775 1115022 addons.go:234] Setting addon helm-tiller=true in "addons-015389"
	I0318 12:17:52.157919 1115022 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-015389"
	I0318 12:17:52.157932 1115022 host.go:66] Checking if "addons-015389" exists ...
	I0318 12:17:52.157953 1115022 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-015389"
	I0318 12:17:52.158241 1115022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:17:52.158256 1115022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:17:52.158281 1115022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:17:52.158295 1115022 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-015389"
	I0318 12:17:52.158349 1115022 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-015389"
	I0318 12:17:52.158354 1115022 addons.go:69] Setting storage-provisioner=true in profile "addons-015389"
	I0318 12:17:52.158379 1115022 host.go:66] Checking if "addons-015389" exists ...
	I0318 12:17:52.158395 1115022 addons.go:234] Setting addon storage-provisioner=true in "addons-015389"
	I0318 12:17:52.158395 1115022 addons.go:69] Setting metrics-server=true in profile "addons-015389"
	I0318 12:17:52.158420 1115022 host.go:66] Checking if "addons-015389" exists ...
	I0318 12:17:52.158425 1115022 addons.go:234] Setting addon metrics-server=true in "addons-015389"
	I0318 12:17:52.158449 1115022 host.go:66] Checking if "addons-015389" exists ...
	I0318 12:17:52.158478 1115022 addons.go:69] Setting volumesnapshots=true in profile "addons-015389"
	I0318 12:17:52.157733 1115022 addons.go:69] Setting registry=true in profile "addons-015389"
	I0318 12:17:52.158501 1115022 addons.go:234] Setting addon volumesnapshots=true in "addons-015389"
	I0318 12:17:52.158524 1115022 host.go:66] Checking if "addons-015389" exists ...
	I0318 12:17:52.158529 1115022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:17:52.158530 1115022 addons.go:234] Setting addon registry=true in "addons-015389"
	I0318 12:17:52.158565 1115022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:17:52.158574 1115022 host.go:66] Checking if "addons-015389" exists ...
	I0318 12:17:52.158623 1115022 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-015389"
	I0318 12:17:52.158644 1115022 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-015389"
	I0318 12:17:52.158697 1115022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:17:52.158723 1115022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:17:52.157744 1115022 addons.go:69] Setting gcp-auth=true in profile "addons-015389"
	I0318 12:17:52.158775 1115022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:17:52.158789 1115022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:17:52.158242 1115022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:17:52.158796 1115022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:17:52.158819 1115022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:17:52.158823 1115022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:17:52.158282 1115022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:17:52.158856 1115022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:17:52.158789 1115022 mustload.go:65] Loading cluster: addons-015389
	I0318 12:17:52.158282 1115022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:17:52.157906 1115022 addons.go:234] Setting addon cloud-spanner=true in "addons-015389"
	I0318 12:17:52.158887 1115022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:17:52.158903 1115022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:17:52.157733 1115022 addons.go:69] Setting ingress-dns=true in profile "addons-015389"
	I0318 12:17:52.158934 1115022 addons.go:234] Setting addon ingress-dns=true in "addons-015389"
	I0318 12:17:52.159019 1115022 host.go:66] Checking if "addons-015389" exists ...
	I0318 12:17:52.159047 1115022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:17:52.159048 1115022 host.go:66] Checking if "addons-015389" exists ...
	I0318 12:17:52.159076 1115022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:17:52.159074 1115022 config.go:182] Loaded profile config "addons-015389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:17:52.159100 1115022 host.go:66] Checking if "addons-015389" exists ...
	I0318 12:17:52.159139 1115022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:17:52.159154 1115022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:17:52.179601 1115022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43005
	I0318 12:17:52.179694 1115022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38803
	I0318 12:17:52.179747 1115022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38441
	I0318 12:17:52.180353 1115022 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:17:52.180480 1115022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39545
	I0318 12:17:52.180574 1115022 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:17:52.180980 1115022 main.go:141] libmachine: Using API Version  1
	I0318 12:17:52.181001 1115022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:17:52.181114 1115022 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:17:52.181171 1115022 main.go:141] libmachine: Using API Version  1
	I0318 12:17:52.181188 1115022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:17:52.181514 1115022 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:17:52.181651 1115022 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:17:52.181654 1115022 main.go:141] libmachine: Using API Version  1
	I0318 12:17:52.181709 1115022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:17:52.182102 1115022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:17:52.182179 1115022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:17:52.182235 1115022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38869
	I0318 12:17:52.182253 1115022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:17:52.182291 1115022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:17:52.182131 1115022 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:17:52.182456 1115022 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:17:52.182605 1115022 main.go:141] libmachine: (addons-015389) Calling .GetState
	I0318 12:17:52.182817 1115022 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:17:52.183003 1115022 main.go:141] libmachine: Using API Version  1
	I0318 12:17:52.183020 1115022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:17:52.183419 1115022 main.go:141] libmachine: Using API Version  1
	I0318 12:17:52.183436 1115022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:17:52.183506 1115022 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:17:52.183765 1115022 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:17:52.184193 1115022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:17:52.184238 1115022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:17:52.184606 1115022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:17:52.184652 1115022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:17:52.184658 1115022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:17:52.184691 1115022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:17:52.185399 1115022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:17:52.185435 1115022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:17:52.195722 1115022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37091
	I0318 12:17:52.195798 1115022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:17:52.195835 1115022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:17:52.196412 1115022 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:17:52.196512 1115022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:17:52.196554 1115022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:17:52.197049 1115022 main.go:141] libmachine: Using API Version  1
	I0318 12:17:52.197070 1115022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:17:52.197460 1115022 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:17:52.197822 1115022 addons.go:234] Setting addon default-storageclass=true in "addons-015389"
	I0318 12:17:52.197878 1115022 host.go:66] Checking if "addons-015389" exists ...
	I0318 12:17:52.198070 1115022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:17:52.198100 1115022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:17:52.198235 1115022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:17:52.198266 1115022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:17:52.204644 1115022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39569
	I0318 12:17:52.205281 1115022 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:17:52.205912 1115022 main.go:141] libmachine: Using API Version  1
	I0318 12:17:52.205932 1115022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:17:52.206506 1115022 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:17:52.206900 1115022 main.go:141] libmachine: (addons-015389) Calling .GetState
	I0318 12:17:52.209915 1115022 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-015389"
	I0318 12:17:52.209967 1115022 host.go:66] Checking if "addons-015389" exists ...
	I0318 12:17:52.210339 1115022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:17:52.210382 1115022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:17:52.212634 1115022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42401
	I0318 12:17:52.213162 1115022 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:17:52.213748 1115022 main.go:141] libmachine: Using API Version  1
	I0318 12:17:52.213765 1115022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:17:52.214149 1115022 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:17:52.214693 1115022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:17:52.214734 1115022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:17:52.214945 1115022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45581
	I0318 12:17:52.214973 1115022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33893
	I0318 12:17:52.215304 1115022 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:17:52.215779 1115022 main.go:141] libmachine: Using API Version  1
	I0318 12:17:52.215800 1115022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:17:52.216169 1115022 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:17:52.216796 1115022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:17:52.216841 1115022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:17:52.217188 1115022 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:17:52.217751 1115022 main.go:141] libmachine: Using API Version  1
	I0318 12:17:52.217803 1115022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:17:52.220413 1115022 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:17:52.220645 1115022 main.go:141] libmachine: (addons-015389) Calling .GetState
	I0318 12:17:52.222408 1115022 main.go:141] libmachine: (addons-015389) Calling .DriverName
	I0318 12:17:52.224752 1115022 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0318 12:17:52.226138 1115022 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0318 12:17:52.226161 1115022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0318 12:17:52.226185 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHHostname
	I0318 12:17:52.224934 1115022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45799
	I0318 12:17:52.227406 1115022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44893
	I0318 12:17:52.227887 1115022 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:17:52.228453 1115022 main.go:141] libmachine: Using API Version  1
	I0318 12:17:52.228477 1115022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:17:52.229176 1115022 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:17:52.229546 1115022 main.go:141] libmachine: (addons-015389) Calling .GetState
	I0318 12:17:52.229625 1115022 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:17:52.230268 1115022 main.go:141] libmachine: Using API Version  1
	I0318 12:17:52.230288 1115022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:17:52.230831 1115022 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:17:52.231398 1115022 main.go:141] libmachine: (addons-015389) Calling .DriverName
	I0318 12:17:52.231455 1115022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:17:52.231492 1115022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:17:52.231720 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:52.233574 1115022 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0318 12:17:52.231897 1115022 main.go:141] libmachine: (addons-015389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:99:5d", ip: ""} in network mk-addons-015389: {Iface:virbr1 ExpiryTime:2024-03-18 13:17:11 +0000 UTC Type:0 Mac:52:54:00:d6:99:5d Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-015389 Clientid:01:52:54:00:d6:99:5d}
	I0318 12:17:52.232156 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHPort
	I0318 12:17:52.236402 1115022 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0318 12:17:52.235013 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined IP address 192.168.39.94 and MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:52.235163 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHKeyPath
	I0318 12:17:52.237449 1115022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43517
	I0318 12:17:52.239150 1115022 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0318 12:17:52.238225 1115022 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:17:52.238299 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHUsername
	I0318 12:17:52.239793 1115022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44843
	I0318 12:17:52.240995 1115022 main.go:141] libmachine: Using API Version  1
	I0318 12:17:52.241515 1115022 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0318 12:17:52.241532 1115022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:17:52.241827 1115022 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/addons-015389/id_rsa Username:docker}
	I0318 12:17:52.242743 1115022 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0318 12:17:52.241983 1115022 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:17:52.243465 1115022 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:17:52.246137 1115022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46665
	I0318 12:17:52.247516 1115022 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0318 12:17:52.246496 1115022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39865
	I0318 12:17:52.246531 1115022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36923
	I0318 12:17:52.246694 1115022 main.go:141] libmachine: (addons-015389) Calling .GetState
	I0318 12:17:52.247011 1115022 main.go:141] libmachine: Using API Version  1
	I0318 12:17:52.247125 1115022 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:17:52.247946 1115022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40773
	I0318 12:17:52.249949 1115022 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0318 12:17:52.248781 1115022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:17:52.249230 1115022 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:17:52.249242 1115022 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:17:52.249605 1115022 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:17:52.250072 1115022 main.go:141] libmachine: Using API Version  1
	I0318 12:17:52.250671 1115022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46617
	I0318 12:17:52.253959 1115022 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0318 12:17:52.251411 1115022 host.go:66] Checking if "addons-015389" exists ...
	I0318 12:17:52.251497 1115022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:17:52.251980 1115022 main.go:141] libmachine: Using API Version  1
	I0318 12:17:52.252023 1115022 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:17:52.252064 1115022 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:17:52.252110 1115022 main.go:141] libmachine: Using API Version  1
	I0318 12:17:52.252403 1115022 main.go:141] libmachine: Using API Version  1
	I0318 12:17:52.253973 1115022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45211
	I0318 12:17:52.255211 1115022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:17:52.255293 1115022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:17:52.255306 1115022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:17:52.255317 1115022 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0318 12:17:52.255330 1115022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0318 12:17:52.255356 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHHostname
	I0318 12:17:52.255722 1115022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:17:52.255769 1115022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:17:52.255794 1115022 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:17:52.255850 1115022 main.go:141] libmachine: Using API Version  1
	I0318 12:17:52.255862 1115022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:17:52.255956 1115022 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:17:52.256183 1115022 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:17:52.256235 1115022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:17:52.256238 1115022 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:17:52.256270 1115022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:17:52.256292 1115022 main.go:141] libmachine: (addons-015389) Calling .GetState
	I0318 12:17:52.256358 1115022 main.go:141] libmachine: (addons-015389) Calling .GetState
	I0318 12:17:52.256425 1115022 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:17:52.258891 1115022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:17:52.256927 1115022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:17:52.258937 1115022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:17:52.258956 1115022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:17:52.257220 1115022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34731
	I0318 12:17:52.257261 1115022 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:17:52.258199 1115022 main.go:141] libmachine: (addons-015389) Calling .DriverName
	I0318 12:17:52.259462 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:52.260979 1115022 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0318 12:17:52.259615 1115022 main.go:141] libmachine: (addons-015389) Calling .GetState
	I0318 12:17:52.259665 1115022 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:17:52.259801 1115022 main.go:141] libmachine: Using API Version  1
	I0318 12:17:52.260496 1115022 main.go:141] libmachine: (addons-015389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:99:5d", ip: ""} in network mk-addons-015389: {Iface:virbr1 ExpiryTime:2024-03-18 13:17:11 +0000 UTC Type:0 Mac:52:54:00:d6:99:5d Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-015389 Clientid:01:52:54:00:d6:99:5d}
	I0318 12:17:52.260709 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHPort
	I0318 12:17:52.261615 1115022 main.go:141] libmachine: (addons-015389) Calling .DriverName
	I0318 12:17:52.262270 1115022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:17:52.262324 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined IP address 192.168.39.94 and MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:52.262407 1115022 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0318 12:17:52.262421 1115022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0318 12:17:52.262437 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHHostname
	I0318 12:17:52.262750 1115022 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:17:52.264298 1115022 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0318 12:17:52.262854 1115022 main.go:141] libmachine: Using API Version  1
	I0318 12:17:52.263162 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHKeyPath
	I0318 12:17:52.263485 1115022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:17:52.265127 1115022 main.go:141] libmachine: (addons-015389) Calling .DriverName
	I0318 12:17:52.265762 1115022 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 12:17:52.265782 1115022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 12:17:52.265803 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHHostname
	I0318 12:17:52.265848 1115022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:17:52.265922 1115022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:17:52.266744 1115022 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:17:52.266760 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHUsername
	I0318 12:17:52.266800 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:52.268435 1115022 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0318 12:17:52.266965 1115022 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/addons-015389/id_rsa Username:docker}
	I0318 12:17:52.267511 1115022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:17:52.267594 1115022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36779
	I0318 12:17:52.268028 1115022 main.go:141] libmachine: (addons-015389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:99:5d", ip: ""} in network mk-addons-015389: {Iface:virbr1 ExpiryTime:2024-03-18 13:17:11 +0000 UTC Type:0 Mac:52:54:00:d6:99:5d Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-015389 Clientid:01:52:54:00:d6:99:5d}
	I0318 12:17:52.268235 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHPort
	I0318 12:17:52.269987 1115022 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0318 12:17:52.269999 1115022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0318 12:17:52.270017 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHHostname
	I0318 12:17:52.270045 1115022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33229
	I0318 12:17:52.270268 1115022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:17:52.270292 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined IP address 192.168.39.94 and MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:52.272242 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:52.272318 1115022 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:17:52.272354 1115022 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:17:52.272444 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHKeyPath
	I0318 12:17:52.273206 1115022 main.go:141] libmachine: (addons-015389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:99:5d", ip: ""} in network mk-addons-015389: {Iface:virbr1 ExpiryTime:2024-03-18 13:17:11 +0000 UTC Type:0 Mac:52:54:00:d6:99:5d Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-015389 Clientid:01:52:54:00:d6:99:5d}
	I0318 12:17:52.273228 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined IP address 192.168.39.94 and MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:52.273256 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHUsername
	I0318 12:17:52.273294 1115022 main.go:141] libmachine: Using API Version  1
	I0318 12:17:52.273303 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHPort
	I0318 12:17:52.273310 1115022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:17:52.273610 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHKeyPath
	I0318 12:17:52.273663 1115022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34345
	I0318 12:17:52.273821 1115022 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:17:52.274152 1115022 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/addons-015389/id_rsa Username:docker}
	I0318 12:17:52.274449 1115022 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:17:52.274544 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHUsername
	I0318 12:17:52.275034 1115022 main.go:141] libmachine: (addons-015389) Calling .GetState
	I0318 12:17:52.275179 1115022 main.go:141] libmachine: Using API Version  1
	I0318 12:17:52.275192 1115022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:17:52.275244 1115022 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/addons-015389/id_rsa Username:docker}
	I0318 12:17:52.276807 1115022 main.go:141] libmachine: Using API Version  1
	I0318 12:17:52.276825 1115022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:17:52.277197 1115022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39287
	I0318 12:17:52.277626 1115022 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:17:52.277631 1115022 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:17:52.277699 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:52.277712 1115022 main.go:141] libmachine: (addons-015389) Calling .DriverName
	I0318 12:17:52.277854 1115022 main.go:141] libmachine: (addons-015389) Calling .GetState
	I0318 12:17:52.279593 1115022 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.26.0
	I0318 12:17:52.278180 1115022 main.go:141] libmachine: Using API Version  1
	I0318 12:17:52.278304 1115022 main.go:141] libmachine: (addons-015389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:99:5d", ip: ""} in network mk-addons-015389: {Iface:virbr1 ExpiryTime:2024-03-18 13:17:11 +0000 UTC Type:0 Mac:52:54:00:d6:99:5d Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-015389 Clientid:01:52:54:00:d6:99:5d}
	I0318 12:17:52.278318 1115022 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:17:52.278502 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHPort
	I0318 12:17:52.279849 1115022 main.go:141] libmachine: (addons-015389) Calling .DriverName
	I0318 12:17:52.280861 1115022 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0318 12:17:52.280876 1115022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:17:52.280879 1115022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0318 12:17:52.280898 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHHostname
	I0318 12:17:52.280922 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined IP address 192.168.39.94 and MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:52.281690 1115022 main.go:141] libmachine: (addons-015389) Calling .GetState
	I0318 12:17:52.281702 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHKeyPath
	I0318 12:17:52.281748 1115022 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:17:52.283471 1115022 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0318 12:17:52.282012 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHUsername
	I0318 12:17:52.282534 1115022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:17:52.283850 1115022 main.go:141] libmachine: (addons-015389) Calling .DriverName
	I0318 12:17:52.284802 1115022 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0318 12:17:52.284815 1115022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0318 12:17:52.284833 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHHostname
	I0318 12:17:52.284918 1115022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:17:52.286446 1115022 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 12:17:52.285259 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:52.285764 1115022 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/addons-015389/id_rsa Username:docker}
	I0318 12:17:52.285980 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHPort
	I0318 12:17:52.287639 1115022 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 12:17:52.287652 1115022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 12:17:52.287669 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHHostname
	I0318 12:17:52.287724 1115022 main.go:141] libmachine: (addons-015389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:99:5d", ip: ""} in network mk-addons-015389: {Iface:virbr1 ExpiryTime:2024-03-18 13:17:11 +0000 UTC Type:0 Mac:52:54:00:d6:99:5d Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-015389 Clientid:01:52:54:00:d6:99:5d}
	I0318 12:17:52.287747 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined IP address 192.168.39.94 and MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:52.288709 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:52.288744 1115022 main.go:141] libmachine: (addons-015389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:99:5d", ip: ""} in network mk-addons-015389: {Iface:virbr1 ExpiryTime:2024-03-18 13:17:11 +0000 UTC Type:0 Mac:52:54:00:d6:99:5d Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-015389 Clientid:01:52:54:00:d6:99:5d}
	I0318 12:17:52.288762 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined IP address 192.168.39.94 and MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:52.288788 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHKeyPath
	I0318 12:17:52.289041 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHUsername
	I0318 12:17:52.289105 1115022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40967
	I0318 12:17:52.289244 1115022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43483
	I0318 12:17:52.289263 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHPort
	I0318 12:17:52.289698 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHKeyPath
	I0318 12:17:52.289684 1115022 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/addons-015389/id_rsa Username:docker}
	I0318 12:17:52.289705 1115022 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:17:52.289899 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHUsername
	I0318 12:17:52.290058 1115022 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/addons-015389/id_rsa Username:docker}
	I0318 12:17:52.290334 1115022 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:17:52.290408 1115022 main.go:141] libmachine: Using API Version  1
	I0318 12:17:52.290430 1115022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:17:52.290861 1115022 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:17:52.290949 1115022 main.go:141] libmachine: Using API Version  1
	I0318 12:17:52.290969 1115022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:17:52.291128 1115022 main.go:141] libmachine: (addons-015389) Calling .GetState
	I0318 12:17:52.291639 1115022 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:17:52.291825 1115022 main.go:141] libmachine: (addons-015389) Calling .GetState
	I0318 12:17:52.292909 1115022 main.go:141] libmachine: (addons-015389) Calling .DriverName
	I0318 12:17:52.294607 1115022 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.14
	I0318 12:17:52.293229 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:52.293667 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHPort
	I0318 12:17:52.294099 1115022 main.go:141] libmachine: (addons-015389) Calling .DriverName
	I0318 12:17:52.295794 1115022 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0318 12:17:52.295802 1115022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0318 12:17:52.295814 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHHostname
	I0318 12:17:52.295846 1115022 main.go:141] libmachine: (addons-015389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:99:5d", ip: ""} in network mk-addons-015389: {Iface:virbr1 ExpiryTime:2024-03-18 13:17:11 +0000 UTC Type:0 Mac:52:54:00:d6:99:5d Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-015389 Clientid:01:52:54:00:d6:99:5d}
	I0318 12:17:52.295860 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined IP address 192.168.39.94 and MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:52.297188 1115022 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0318 12:17:52.296131 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHKeyPath
	I0318 12:17:52.299526 1115022 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0318 12:17:52.298631 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHUsername
	I0318 12:17:52.299174 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:52.299849 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHPort
	I0318 12:17:52.300062 1115022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38153
	I0318 12:17:52.300986 1115022 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0318 12:17:52.302506 1115022 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0318 12:17:52.302520 1115022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0318 12:17:52.302533 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHHostname
	I0318 12:17:52.301013 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHKeyPath
	I0318 12:17:52.301126 1115022 main.go:141] libmachine: (addons-015389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:99:5d", ip: ""} in network mk-addons-015389: {Iface:virbr1 ExpiryTime:2024-03-18 13:17:11 +0000 UTC Type:0 Mac:52:54:00:d6:99:5d Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-015389 Clientid:01:52:54:00:d6:99:5d}
	I0318 12:17:52.302604 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined IP address 192.168.39.94 and MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:52.301404 1115022 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:17:52.301542 1115022 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/addons-015389/id_rsa Username:docker}
	I0318 12:17:52.302798 1115022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39113
	I0318 12:17:52.303156 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHUsername
	I0318 12:17:52.303196 1115022 main.go:141] libmachine: Using API Version  1
	I0318 12:17:52.303215 1115022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:17:52.303411 1115022 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/addons-015389/id_rsa Username:docker}
	I0318 12:17:52.303454 1115022 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:17:52.303616 1115022 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:17:52.303811 1115022 main.go:141] libmachine: (addons-015389) Calling .GetState
	I0318 12:17:52.305063 1115022 main.go:141] libmachine: Using API Version  1
	I0318 12:17:52.305095 1115022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:17:52.305710 1115022 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:17:52.305940 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:52.306057 1115022 main.go:141] libmachine: (addons-015389) Calling .GetState
	I0318 12:17:52.306327 1115022 main.go:141] libmachine: (addons-015389) Calling .DriverName
	I0318 12:17:52.306348 1115022 main.go:141] libmachine: (addons-015389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:99:5d", ip: ""} in network mk-addons-015389: {Iface:virbr1 ExpiryTime:2024-03-18 13:17:11 +0000 UTC Type:0 Mac:52:54:00:d6:99:5d Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-015389 Clientid:01:52:54:00:d6:99:5d}
	I0318 12:17:52.306368 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined IP address 192.168.39.94 and MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:52.306471 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHPort
	I0318 12:17:52.308046 1115022 out.go:177]   - Using image docker.io/registry:2.8.3
	I0318 12:17:52.306904 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHKeyPath
	I0318 12:17:52.307141 1115022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33659
	I0318 12:17:52.308024 1115022 main.go:141] libmachine: (addons-015389) Calling .DriverName
	I0318 12:17:52.309445 1115022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44139
	I0318 12:17:52.310772 1115022 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0318 12:17:52.309729 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHUsername
	I0318 12:17:52.309759 1115022 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 12:17:52.309841 1115022 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:17:52.309847 1115022 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:17:52.311976 1115022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 12:17:52.312008 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHHostname
	I0318 12:17:52.312039 1115022 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0318 12:17:52.312053 1115022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0318 12:17:52.312068 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHHostname
	I0318 12:17:52.312219 1115022 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/addons-015389/id_rsa Username:docker}
	I0318 12:17:52.312621 1115022 main.go:141] libmachine: Using API Version  1
	I0318 12:17:52.312641 1115022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:17:52.312622 1115022 main.go:141] libmachine: Using API Version  1
	I0318 12:17:52.312689 1115022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:17:52.312981 1115022 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:17:52.313035 1115022 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:17:52.313134 1115022 main.go:141] libmachine: (addons-015389) Calling .GetState
	I0318 12:17:52.313191 1115022 main.go:141] libmachine: (addons-015389) Calling .DriverName
	I0318 12:17:52.314797 1115022 main.go:141] libmachine: (addons-015389) Calling .DriverName
	I0318 12:17:52.316715 1115022 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0318 12:17:52.316170 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:52.316757 1115022 main.go:141] libmachine: (addons-015389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:99:5d", ip: ""} in network mk-addons-015389: {Iface:virbr1 ExpiryTime:2024-03-18 13:17:11 +0000 UTC Type:0 Mac:52:54:00:d6:99:5d Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-015389 Clientid:01:52:54:00:d6:99:5d}
	I0318 12:17:52.316773 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined IP address 192.168.39.94 and MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:52.316187 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:52.316788 1115022 main.go:141] libmachine: (addons-015389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:99:5d", ip: ""} in network mk-addons-015389: {Iface:virbr1 ExpiryTime:2024-03-18 13:17:11 +0000 UTC Type:0 Mac:52:54:00:d6:99:5d Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-015389 Clientid:01:52:54:00:d6:99:5d}
	I0318 12:17:52.316799 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined IP address 192.168.39.94 and MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:52.316939 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHPort
	I0318 12:17:52.317012 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHPort
	I0318 12:17:52.318198 1115022 out.go:177]   - Using image docker.io/busybox:stable
	I0318 12:17:52.319724 1115022 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0318 12:17:52.319743 1115022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0318 12:17:52.319758 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHHostname
	I0318 12:17:52.318345 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHKeyPath
	I0318 12:17:52.318362 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHKeyPath
	I0318 12:17:52.319958 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHUsername
	I0318 12:17:52.319968 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHUsername
	I0318 12:17:52.320122 1115022 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/addons-015389/id_rsa Username:docker}
	I0318 12:17:52.320137 1115022 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/addons-015389/id_rsa Username:docker}
	W0318 12:17:52.321603 1115022 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:53688->192.168.39.94:22: read: connection reset by peer
	I0318 12:17:52.321633 1115022 retry.go:31] will retry after 245.409048ms: ssh: handshake failed: read tcp 192.168.39.1:53688->192.168.39.94:22: read: connection reset by peer
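The two retry.go lines above show minikube absorbing a transient SSH handshake failure by sleeping a short, randomized delay and dialing again. A minimal Go sketch of that retry-with-backoff pattern, where the helper name, attempt count, and delays are assumptions rather than minikube's actual implementation:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retry is an illustrative sketch, not minikube's code: it runs fn up to
// attempts times, sleeping a jittered, growing delay between failures, and
// returns the last error if every attempt fails.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		d := base << uint(i)                            // exponential backoff
		d += time.Duration(rand.Int63n(int64(d)/2 + 1)) // plus up to 50% jitter
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	dials := 0
	_ = retry(5, 200*time.Millisecond, func() error {
		dials++
		if dials < 3 {
			return fmt.Errorf("ssh: handshake failed (attempt %d)", dials)
		}
		return nil
	})
}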
	I0318 12:17:52.322986 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:52.323423 1115022 main.go:141] libmachine: (addons-015389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:99:5d", ip: ""} in network mk-addons-015389: {Iface:virbr1 ExpiryTime:2024-03-18 13:17:11 +0000 UTC Type:0 Mac:52:54:00:d6:99:5d Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-015389 Clientid:01:52:54:00:d6:99:5d}
	I0318 12:17:52.323451 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined IP address 192.168.39.94 and MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:52.323585 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHPort
	I0318 12:17:52.323757 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHKeyPath
	I0318 12:17:52.323921 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHUsername
	I0318 12:17:52.324077 1115022 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/addons-015389/id_rsa Username:docker}
	I0318 12:17:52.324864 1115022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38611
	I0318 12:17:52.325235 1115022 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:17:52.325748 1115022 main.go:141] libmachine: Using API Version  1
	I0318 12:17:52.325764 1115022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:17:52.326831 1115022 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:17:52.327078 1115022 main.go:141] libmachine: (addons-015389) Calling .GetState
	I0318 12:17:52.328604 1115022 main.go:141] libmachine: (addons-015389) Calling .DriverName
	I0318 12:17:52.330501 1115022 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0318 12:17:52.331880 1115022 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0318 12:17:52.331895 1115022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0318 12:17:52.331909 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHHostname
	I0318 12:17:52.334620 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:52.334937 1115022 main.go:141] libmachine: (addons-015389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:99:5d", ip: ""} in network mk-addons-015389: {Iface:virbr1 ExpiryTime:2024-03-18 13:17:11 +0000 UTC Type:0 Mac:52:54:00:d6:99:5d Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-015389 Clientid:01:52:54:00:d6:99:5d}
	I0318 12:17:52.334975 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined IP address 192.168.39.94 and MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:52.335132 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHPort
	I0318 12:17:52.335340 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHKeyPath
	I0318 12:17:52.335480 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHUsername
	I0318 12:17:52.335626 1115022 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/addons-015389/id_rsa Username:docker}
	I0318 12:17:52.604753 1115022 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 12:17:52.605158 1115022 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0318 12:17:52.700971 1115022 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0318 12:17:52.731288 1115022 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0318 12:17:52.731310 1115022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0318 12:17:52.744030 1115022 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 12:17:52.751842 1115022 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 12:17:52.751860 1115022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0318 12:17:52.765502 1115022 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0318 12:17:52.783860 1115022 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 12:17:52.790953 1115022 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0318 12:17:52.792426 1115022 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0318 12:17:52.792455 1115022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0318 12:17:52.806117 1115022 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0318 12:17:52.806142 1115022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0318 12:17:52.872207 1115022 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0318 12:17:52.872242 1115022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0318 12:17:52.877903 1115022 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0318 12:17:52.877922 1115022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0318 12:17:52.879718 1115022 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0318 12:17:52.879746 1115022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0318 12:17:52.892354 1115022 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0318 12:17:52.901473 1115022 node_ready.go:35] waiting up to 6m0s for node "addons-015389" to be "Ready" ...
	I0318 12:17:52.905377 1115022 node_ready.go:49] node "addons-015389" has status "Ready":"True"
	I0318 12:17:52.905399 1115022 node_ready.go:38] duration metric: took 3.89093ms for node "addons-015389" to be "Ready" ...
	I0318 12:17:52.905408 1115022 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 12:17:52.914385 1115022 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0318 12:17:52.918627 1115022 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-57qjd" in "kube-system" namespace to be "Ready" ...
	I0318 12:17:52.963558 1115022 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0318 12:17:52.963583 1115022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0318 12:17:52.980875 1115022 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 12:17:52.980911 1115022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 12:17:53.035004 1115022 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0318 12:17:53.035032 1115022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0318 12:17:53.040283 1115022 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0318 12:17:53.040312 1115022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0318 12:17:53.135863 1115022 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0318 12:17:53.135895 1115022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0318 12:17:53.160134 1115022 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0318 12:17:53.162344 1115022 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0318 12:17:53.162380 1115022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0318 12:17:53.245161 1115022 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0318 12:17:53.245193 1115022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0318 12:17:53.265587 1115022 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 12:17:53.265611 1115022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 12:17:53.361301 1115022 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0318 12:17:53.361330 1115022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0318 12:17:53.382682 1115022 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0318 12:17:53.382711 1115022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0318 12:17:53.455949 1115022 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0318 12:17:53.455975 1115022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0318 12:17:53.505885 1115022 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0318 12:17:53.505916 1115022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0318 12:17:53.555469 1115022 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 12:17:53.590798 1115022 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0318 12:17:53.590830 1115022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0318 12:17:53.686685 1115022 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0318 12:17:53.686714 1115022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0318 12:17:53.689104 1115022 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0318 12:17:53.689126 1115022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0318 12:17:53.715831 1115022 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0318 12:17:53.715858 1115022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0318 12:17:53.823142 1115022 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0318 12:17:54.066505 1115022 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0318 12:17:54.066533 1115022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0318 12:17:54.086825 1115022 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0318 12:17:54.086861 1115022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0318 12:17:54.152605 1115022 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0318 12:17:54.152644 1115022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0318 12:17:54.228530 1115022 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0318 12:17:54.428896 1115022 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0318 12:17:54.453878 1115022 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0318 12:17:54.453904 1115022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0318 12:17:54.482984 1115022 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0318 12:17:54.483017 1115022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0318 12:17:54.555793 1115022 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0318 12:17:54.555826 1115022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0318 12:17:54.646385 1115022 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0318 12:17:54.651279 1115022 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0318 12:17:54.651308 1115022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0318 12:17:54.739366 1115022 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0318 12:17:54.739392 1115022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0318 12:17:54.816783 1115022 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0318 12:17:54.816812 1115022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0318 12:17:54.921373 1115022 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0318 12:17:54.921408 1115022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0318 12:17:55.225164 1115022 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0318 12:17:55.638920 1115022 pod_ready.go:102] pod "coredns-5dd5756b68-57qjd" in "kube-system" namespace has status "Ready":"False"
	I0318 12:17:56.927743 1115022 pod_ready.go:92] pod "coredns-5dd5756b68-57qjd" in "kube-system" namespace has status "Ready":"True"
	I0318 12:17:56.927772 1115022 pod_ready.go:81] duration metric: took 4.009113993s for pod "coredns-5dd5756b68-57qjd" in "kube-system" namespace to be "Ready" ...
	I0318 12:17:56.927786 1115022 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-nzmx6" in "kube-system" namespace to be "Ready" ...
	I0318 12:17:57.918990 1115022 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (5.313769191s)
	I0318 12:17:57.919026 1115022 start.go:948] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0318 12:17:57.943145 1115022 pod_ready.go:92] pod "coredns-5dd5756b68-nzmx6" in "kube-system" namespace has status "Ready":"True"
	I0318 12:17:57.943175 1115022 pod_ready.go:81] duration metric: took 1.015378408s for pod "coredns-5dd5756b68-nzmx6" in "kube-system" namespace to be "Ready" ...
	I0318 12:17:57.943188 1115022 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-015389" in "kube-system" namespace to be "Ready" ...
	I0318 12:17:57.975153 1115022 pod_ready.go:92] pod "etcd-addons-015389" in "kube-system" namespace has status "Ready":"True"
	I0318 12:17:57.975180 1115022 pod_ready.go:81] duration metric: took 31.984487ms for pod "etcd-addons-015389" in "kube-system" namespace to be "Ready" ...
	I0318 12:17:57.975195 1115022 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-015389" in "kube-system" namespace to be "Ready" ...
	I0318 12:17:58.015312 1115022 pod_ready.go:92] pod "kube-apiserver-addons-015389" in "kube-system" namespace has status "Ready":"True"
	I0318 12:17:58.015343 1115022 pod_ready.go:81] duration metric: took 40.139609ms for pod "kube-apiserver-addons-015389" in "kube-system" namespace to be "Ready" ...
	I0318 12:17:58.015359 1115022 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-015389" in "kube-system" namespace to be "Ready" ...
	I0318 12:17:58.033194 1115022 pod_ready.go:92] pod "kube-controller-manager-addons-015389" in "kube-system" namespace has status "Ready":"True"
	I0318 12:17:58.033221 1115022 pod_ready.go:81] duration metric: took 17.853262ms for pod "kube-controller-manager-addons-015389" in "kube-system" namespace to be "Ready" ...
	I0318 12:17:58.033235 1115022 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bqn6c" in "kube-system" namespace to be "Ready" ...
	I0318 12:17:58.123824 1115022 pod_ready.go:92] pod "kube-proxy-bqn6c" in "kube-system" namespace has status "Ready":"True"
	I0318 12:17:58.123860 1115022 pod_ready.go:81] duration metric: took 90.616174ms for pod "kube-proxy-bqn6c" in "kube-system" namespace to be "Ready" ...
	I0318 12:17:58.123875 1115022 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-015389" in "kube-system" namespace to be "Ready" ...
	I0318 12:17:58.663025 1115022 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-015389" context rescaled to 1 replicas
	I0318 12:17:58.677234 1115022 pod_ready.go:92] pod "kube-scheduler-addons-015389" in "kube-system" namespace has status "Ready":"True"
	I0318 12:17:58.677260 1115022 pod_ready.go:81] duration metric: took 553.37465ms for pod "kube-scheduler-addons-015389" in "kube-system" namespace to be "Ready" ...
	I0318 12:17:58.677268 1115022 pod_ready.go:38] duration metric: took 5.771849059s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
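The pod_ready.go lines above poll each system-critical pod until its Ready condition reports True. A minimal client-go sketch of the same kind of check; the kubeconfig path and pod name are copied from the log for illustration, the two-second poll interval is an assumption, and none of it is minikube's own code:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls until the named pod reports condition Ready=True.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat lookup errors as "not ready yet" and keep polling
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitPodReady(cs, "kube-system", "coredns-5dd5756b68-57qjd", 6*time.Minute); err != nil {
		fmt.Println("pod never became Ready:", err)
	}
}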
	I0318 12:17:58.677283 1115022 api_server.go:52] waiting for apiserver process to appear ...
	I0318 12:17:58.677333 1115022 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 12:17:58.942706 1115022 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0318 12:17:58.942757 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHHostname
	I0318 12:17:58.946398 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:58.946994 1115022 main.go:141] libmachine: (addons-015389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:99:5d", ip: ""} in network mk-addons-015389: {Iface:virbr1 ExpiryTime:2024-03-18 13:17:11 +0000 UTC Type:0 Mac:52:54:00:d6:99:5d Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-015389 Clientid:01:52:54:00:d6:99:5d}
	I0318 12:17:58.947029 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined IP address 192.168.39.94 and MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:58.947253 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHPort
	I0318 12:17:58.947501 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHKeyPath
	I0318 12:17:58.947664 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHUsername
	I0318 12:17:58.947811 1115022 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/addons-015389/id_rsa Username:docker}
	I0318 12:17:59.712921 1115022 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0318 12:17:59.885649 1115022 addons.go:234] Setting addon gcp-auth=true in "addons-015389"
	I0318 12:17:59.885719 1115022 host.go:66] Checking if "addons-015389" exists ...
	I0318 12:17:59.886153 1115022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:17:59.886195 1115022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:17:59.902278 1115022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36887
	I0318 12:17:59.902785 1115022 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:17:59.903371 1115022 main.go:141] libmachine: Using API Version  1
	I0318 12:17:59.903405 1115022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:17:59.903769 1115022 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:17:59.904399 1115022 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:17:59.904437 1115022 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:17:59.919945 1115022 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35193
	I0318 12:17:59.920427 1115022 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:17:59.920975 1115022 main.go:141] libmachine: Using API Version  1
	I0318 12:17:59.921008 1115022 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:17:59.921343 1115022 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:17:59.921554 1115022 main.go:141] libmachine: (addons-015389) Calling .GetState
	I0318 12:17:59.923208 1115022 main.go:141] libmachine: (addons-015389) Calling .DriverName
	I0318 12:17:59.923452 1115022 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0318 12:17:59.923479 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHHostname
	I0318 12:17:59.926396 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:59.926830 1115022 main.go:141] libmachine: (addons-015389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d6:99:5d", ip: ""} in network mk-addons-015389: {Iface:virbr1 ExpiryTime:2024-03-18 13:17:11 +0000 UTC Type:0 Mac:52:54:00:d6:99:5d Iaid: IPaddr:192.168.39.94 Prefix:24 Hostname:addons-015389 Clientid:01:52:54:00:d6:99:5d}
	I0318 12:17:59.926862 1115022 main.go:141] libmachine: (addons-015389) DBG | domain addons-015389 has defined IP address 192.168.39.94 and MAC address 52:54:00:d6:99:5d in network mk-addons-015389
	I0318 12:17:59.927033 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHPort
	I0318 12:17:59.927288 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHKeyPath
	I0318 12:17:59.927461 1115022 main.go:141] libmachine: (addons-015389) Calling .GetSSHUsername
	I0318 12:17:59.927625 1115022 sshutil.go:53] new ssh client: &{IP:192.168.39.94 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/addons-015389/id_rsa Username:docker}
	I0318 12:18:02.675492 1115022 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.974457768s)
	I0318 12:18:02.675543 1115022 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.931482522s)
	I0318 12:18:02.675568 1115022 main.go:141] libmachine: Making call to close driver server
	I0318 12:18:02.675585 1115022 main.go:141] libmachine: Making call to close driver server
	I0318 12:18:02.675602 1115022 main.go:141] libmachine: (addons-015389) Calling .Close
	I0318 12:18:02.675641 1115022 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.910095288s)
	I0318 12:18:02.675586 1115022 main.go:141] libmachine: (addons-015389) Calling .Close
	I0318 12:18:02.675678 1115022 main.go:141] libmachine: Making call to close driver server
	I0318 12:18:02.675689 1115022 main.go:141] libmachine: (addons-015389) Calling .Close
	I0318 12:18:02.675719 1115022 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.891828843s)
	I0318 12:18:02.675740 1115022 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.884764751s)
	I0318 12:18:02.675752 1115022 main.go:141] libmachine: Making call to close driver server
	I0318 12:18:02.675766 1115022 main.go:141] libmachine: Making call to close driver server
	I0318 12:18:02.675776 1115022 main.go:141] libmachine: (addons-015389) Calling .Close
	I0318 12:18:02.675783 1115022 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (9.783402576s)
	I0318 12:18:02.675800 1115022 main.go:141] libmachine: Making call to close driver server
	I0318 12:18:02.675810 1115022 main.go:141] libmachine: (addons-015389) Calling .Close
	I0318 12:18:02.675823 1115022 main.go:141] libmachine: (addons-015389) Calling .Close
	I0318 12:18:02.675893 1115022 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (9.761481941s)
	I0318 12:18:02.675915 1115022 main.go:141] libmachine: Making call to close driver server
	I0318 12:18:02.675922 1115022 main.go:141] libmachine: (addons-015389) Calling .Close
	I0318 12:18:02.676017 1115022 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (9.515849586s)
	I0318 12:18:02.676033 1115022 main.go:141] libmachine: Making call to close driver server
	I0318 12:18:02.676041 1115022 main.go:141] libmachine: (addons-015389) Calling .Close
	I0318 12:18:02.676139 1115022 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.120637928s)
	I0318 12:18:02.676157 1115022 main.go:141] libmachine: Making call to close driver server
	I0318 12:18:02.676165 1115022 main.go:141] libmachine: (addons-015389) Calling .Close
	I0318 12:18:02.676195 1115022 main.go:141] libmachine: (addons-015389) DBG | Closing plugin on server side
	I0318 12:18:02.676225 1115022 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.853044544s)
	I0318 12:18:02.676241 1115022 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:18:02.676253 1115022 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:18:02.676265 1115022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:18:02.676271 1115022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:18:02.676275 1115022 main.go:141] libmachine: Making call to close driver server
	I0318 12:18:02.676280 1115022 main.go:141] libmachine: Making call to close driver server
	I0318 12:18:02.676283 1115022 main.go:141] libmachine: (addons-015389) Calling .Close
	I0318 12:18:02.676285 1115022 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:18:02.676293 1115022 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.447726806s)
	I0318 12:18:02.676298 1115022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:18:02.676297 1115022 main.go:141] libmachine: (addons-015389) DBG | Closing plugin on server side
	I0318 12:18:02.676249 1115022 main.go:141] libmachine: Making call to close driver server
	I0318 12:18:02.676307 1115022 main.go:141] libmachine: Making call to close driver server
	I0318 12:18:02.676309 1115022 main.go:141] libmachine: Making call to close driver server
	I0318 12:18:02.676314 1115022 main.go:141] libmachine: (addons-015389) Calling .Close
	I0318 12:18:02.676319 1115022 main.go:141] libmachine: (addons-015389) Calling .Close
	I0318 12:18:02.676321 1115022 main.go:141] libmachine: (addons-015389) DBG | Closing plugin on server side
	I0318 12:18:02.676232 1115022 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:18:02.676314 1115022 main.go:141] libmachine: (addons-015389) Calling .Close
	I0318 12:18:02.676384 1115022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:18:02.676393 1115022 main.go:141] libmachine: Making call to close driver server
	I0318 12:18:02.676400 1115022 main.go:141] libmachine: (addons-015389) Calling .Close
	I0318 12:18:02.676425 1115022 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:18:02.676248 1115022 main.go:141] libmachine: (addons-015389) DBG | Closing plugin on server side
	I0318 12:18:02.676434 1115022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:18:02.676442 1115022 main.go:141] libmachine: Making call to close driver server
	I0318 12:18:02.676448 1115022 main.go:141] libmachine: (addons-015389) Calling .Close
	I0318 12:18:02.676483 1115022 main.go:141] libmachine: (addons-015389) DBG | Closing plugin on server side
	I0318 12:18:02.676529 1115022 main.go:141] libmachine: (addons-015389) DBG | Closing plugin on server side
	I0318 12:18:02.676549 1115022 main.go:141] libmachine: (addons-015389) DBG | Closing plugin on server side
	I0318 12:18:02.676573 1115022 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:18:02.676579 1115022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:18:02.676586 1115022 main.go:141] libmachine: Making call to close driver server
	I0318 12:18:02.676594 1115022 main.go:141] libmachine: (addons-015389) Calling .Close
	I0318 12:18:02.676913 1115022 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.247953362s)
	W0318 12:18:02.676954 1115022 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0318 12:18:02.677000 1115022 retry.go:31] will retry after 294.22188ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
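The failure above is an ordering race: the VolumeSnapshotClass custom resource is applied in the same kubectl batch as the CRD that defines it, and the resource is mapped before the API server has established the new kind, hence "ensure CRDs are installed first". The log resolves it by retrying (and re-applying with --force at 12:18:02.971764). A hedged Go sketch of the ordering alternative, shelling out to kubectl the way ssh_runner does; the file paths are the ones from the log, and the approach is illustrative, not how minikube actually recovers:

package main

import (
	"fmt"
	"os/exec"
)

// run is a small helper for this sketch: execute a command and echo its output.
func run(args ...string) error {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	// 1. Apply the CRD on its own first.
	_ = run("kubectl", "apply",
		"-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml")
	// 2. Block until the API server reports the new kind as established.
	_ = run("kubectl", "wait", "--for=condition=established", "--timeout=60s",
		"crd/volumesnapshotclasses.snapshot.storage.k8s.io")
	// 3. Only then apply custom resources of that kind.
	_ = run("kubectl", "apply",
		"-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml")
}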
	I0318 12:18:02.677123 1115022 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.030661385s)
	I0318 12:18:02.677156 1115022 main.go:141] libmachine: Making call to close driver server
	I0318 12:18:02.677165 1115022 main.go:141] libmachine: (addons-015389) Calling .Close
	I0318 12:18:02.677522 1115022 main.go:141] libmachine: (addons-015389) DBG | Closing plugin on server side
	I0318 12:18:02.677552 1115022 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:18:02.677559 1115022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:18:02.677571 1115022 addons.go:470] Verifying addon ingress=true in "addons-015389"
	I0318 12:18:02.681377 1115022 out.go:177] * Verifying ingress addon...
	I0318 12:18:02.676289 1115022 main.go:141] libmachine: (addons-015389) Calling .Close
	I0318 12:18:02.681517 1115022 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:18:02.682763 1115022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:18:02.679564 1115022 main.go:141] libmachine: (addons-015389) DBG | Closing plugin on server side
	I0318 12:18:02.679581 1115022 main.go:141] libmachine: (addons-015389) DBG | Closing plugin on server side
	I0318 12:18:02.679606 1115022 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:18:02.682848 1115022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:18:02.682865 1115022 main.go:141] libmachine: Making call to close driver server
	I0318 12:18:02.682874 1115022 main.go:141] libmachine: (addons-015389) Calling .Close
	I0318 12:18:02.679621 1115022 main.go:141] libmachine: (addons-015389) DBG | Closing plugin on server side
	I0318 12:18:02.679639 1115022 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:18:02.682947 1115022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:18:02.682970 1115022 main.go:141] libmachine: Making call to close driver server
	I0318 12:18:02.682981 1115022 main.go:141] libmachine: (addons-015389) DBG | Closing plugin on server side
	I0318 12:18:02.682989 1115022 main.go:141] libmachine: (addons-015389) Calling .Close
	I0318 12:18:02.679657 1115022 main.go:141] libmachine: (addons-015389) DBG | Closing plugin on server side
	I0318 12:18:02.679689 1115022 main.go:141] libmachine: (addons-015389) DBG | Closing plugin on server side
	I0318 12:18:02.679706 1115022 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:18:02.683051 1115022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:18:02.679674 1115022 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:18:02.679734 1115022 main.go:141] libmachine: (addons-015389) DBG | Closing plugin on server side
	I0318 12:18:02.679752 1115022 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:18:02.679767 1115022 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:18:02.679780 1115022 main.go:141] libmachine: (addons-015389) DBG | Closing plugin on server side
	I0318 12:18:02.679796 1115022 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:18:02.679815 1115022 main.go:141] libmachine: (addons-015389) DBG | Closing plugin on server side
	I0318 12:18:02.679831 1115022 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:18:02.679858 1115022 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:18:02.682991 1115022 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:18:02.683105 1115022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:18:02.683109 1115022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:18:02.683114 1115022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:18:02.683124 1115022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:18:02.683130 1115022 main.go:141] libmachine: Making call to close driver server
	I0318 12:18:02.683138 1115022 main.go:141] libmachine: (addons-015389) Calling .Close
	I0318 12:18:02.683148 1115022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:18:02.683161 1115022 main.go:141] libmachine: Making call to close driver server
	I0318 12:18:02.683169 1115022 main.go:141] libmachine: (addons-015389) Calling .Close
	I0318 12:18:02.679719 1115022 main.go:141] libmachine: (addons-015389) DBG | Closing plugin on server side
	I0318 12:18:02.683089 1115022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:18:02.683115 1115022 main.go:141] libmachine: Making call to close driver server
	I0318 12:18:02.683098 1115022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:18:02.683198 1115022 main.go:141] libmachine: (addons-015389) Calling .Close
	I0318 12:18:02.683229 1115022 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:18:02.683184 1115022 main.go:141] libmachine: Making call to close driver server
	I0318 12:18:02.683237 1115022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:18:02.683245 1115022 main.go:141] libmachine: (addons-015389) Calling .Close
	I0318 12:18:02.685212 1115022 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-015389 service yakd-dashboard -n yakd-dashboard
	
	I0318 12:18:02.684096 1115022 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0318 12:18:02.686066 1115022 main.go:141] libmachine: (addons-015389) DBG | Closing plugin on server side
	I0318 12:18:02.686070 1115022 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:18:02.686757 1115022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:18:02.686773 1115022 addons.go:470] Verifying addon registry=true in "addons-015389"
	I0318 12:18:02.686076 1115022 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:18:02.686080 1115022 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:18:02.686081 1115022 main.go:141] libmachine: (addons-015389) DBG | Closing plugin on server side
	I0318 12:18:02.686088 1115022 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:18:02.686091 1115022 main.go:141] libmachine: (addons-015389) DBG | Closing plugin on server side
	I0318 12:18:02.686099 1115022 main.go:141] libmachine: (addons-015389) DBG | Closing plugin on server side
	I0318 12:18:02.686100 1115022 main.go:141] libmachine: (addons-015389) DBG | Closing plugin on server side
	I0318 12:18:02.686110 1115022 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:18:02.688466 1115022 out.go:177] * Verifying registry addon...
	I0318 12:18:02.688491 1115022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:18:02.690421 1115022 addons.go:470] Verifying addon metrics-server=true in "addons-015389"
	I0318 12:18:02.688484 1115022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:18:02.688501 1115022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:18:02.688510 1115022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:18:02.691283 1115022 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0318 12:18:02.731491 1115022 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0318 12:18:02.731512 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:02.731623 1115022 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0318 12:18:02.731647 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:02.747217 1115022 main.go:141] libmachine: Making call to close driver server
	I0318 12:18:02.747249 1115022 main.go:141] libmachine: (addons-015389) Calling .Close
	I0318 12:18:02.747562 1115022 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:18:02.747578 1115022 main.go:141] libmachine: Making call to close connection to plugin binary
	W0318 12:18:02.747714 1115022 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0318 12:18:02.757090 1115022 main.go:141] libmachine: Making call to close driver server
	I0318 12:18:02.757113 1115022 main.go:141] libmachine: (addons-015389) Calling .Close
	I0318 12:18:02.757439 1115022 main.go:141] libmachine: (addons-015389) DBG | Closing plugin on server side
	I0318 12:18:02.757460 1115022 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:18:02.757472 1115022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:18:02.971764 1115022 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0318 12:18:03.216180 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:03.217889 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:03.702840 1115022 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.025479886s)
	I0318 12:18:03.702887 1115022 api_server.go:72] duration metric: took 11.548231471s to wait for apiserver process to appear ...
	I0318 12:18:03.702896 1115022 api_server.go:88] waiting for apiserver healthz status ...
	I0318 12:18:03.702919 1115022 api_server.go:253] Checking apiserver healthz at https://192.168.39.94:8443/healthz ...
	I0318 12:18:03.702922 1115022 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.779441862s)
	I0318 12:18:03.702942 1115022 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.47772649s)
	I0318 12:18:03.702996 1115022 main.go:141] libmachine: Making call to close driver server
	I0318 12:18:03.703010 1115022 main.go:141] libmachine: (addons-015389) Calling .Close
	I0318 12:18:03.704459 1115022 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0318 12:18:03.703300 1115022 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:18:03.703325 1115022 main.go:141] libmachine: (addons-015389) DBG | Closing plugin on server side
	I0318 12:18:03.705693 1115022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:18:03.705710 1115022 main.go:141] libmachine: Making call to close driver server
	I0318 12:18:03.705719 1115022 main.go:141] libmachine: (addons-015389) Calling .Close
	I0318 12:18:03.707013 1115022 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0318 12:18:03.708415 1115022 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0318 12:18:03.706124 1115022 main.go:141] libmachine: (addons-015389) DBG | Closing plugin on server side
	I0318 12:18:03.706140 1115022 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:18:03.708477 1115022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:18:03.708496 1115022 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-015389"
	I0318 12:18:03.708439 1115022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0318 12:18:03.709800 1115022 out.go:177] * Verifying csi-hostpath-driver addon...
	I0318 12:18:03.711999 1115022 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0318 12:18:03.727061 1115022 api_server.go:279] https://192.168.39.94:8443/healthz returned 200:
	ok
	I0318 12:18:03.748067 1115022 api_server.go:141] control plane version: v1.28.4
	I0318 12:18:03.748104 1115022 api_server.go:131] duration metric: took 45.199664ms to wait for apiserver health ...
	I0318 12:18:03.748116 1115022 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 12:18:03.801103 1115022 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0318 12:18:03.801135 1115022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0318 12:18:03.817693 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:03.818011 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:03.818227 1115022 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0318 12:18:03.818246 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:03.856452 1115022 system_pods.go:59] 19 kube-system pods found
	I0318 12:18:03.856488 1115022 system_pods.go:61] "coredns-5dd5756b68-57qjd" [8186c1c0-2699-4797-9153-c88651831be4] Running
	I0318 12:18:03.856493 1115022 system_pods.go:61] "coredns-5dd5756b68-nzmx6" [f5acb88a-85cc-454a-a08a-d91c5804bc13] Running
	I0318 12:18:03.856500 1115022 system_pods.go:61] "csi-hostpath-attacher-0" [763569e3-300f-4787-8125-b402081ba9a5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0318 12:18:03.856507 1115022 system_pods.go:61] "csi-hostpath-resizer-0" [595a24f3-631f-4a28-96bc-00625e1bb23d] Pending
	I0318 12:18:03.856517 1115022 system_pods.go:61] "csi-hostpathplugin-vlzz2" [5484ed11-1652-41fc-97c8-d220711fbade] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0318 12:18:03.856524 1115022 system_pods.go:61] "etcd-addons-015389" [66740d94-9759-4f22-8dd2-1661ee837bb1] Running
	I0318 12:18:03.856530 1115022 system_pods.go:61] "kube-apiserver-addons-015389" [63cb0c67-f49e-4c54-a8b4-f792a1ddf846] Running
	I0318 12:18:03.856535 1115022 system_pods.go:61] "kube-controller-manager-addons-015389" [74e676eb-e05c-499d-b9a3-d2a5ea123ae4] Running
	I0318 12:18:03.856543 1115022 system_pods.go:61] "kube-ingress-dns-minikube" [7aadaefe-59f6-43bd-9893-7ddefbdf53dc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0318 12:18:03.856549 1115022 system_pods.go:61] "kube-proxy-bqn6c" [2ee1682f-85b7-46f1-9cc4-840c7af8fbc4] Running
	I0318 12:18:03.856555 1115022 system_pods.go:61] "kube-scheduler-addons-015389" [66eddbd4-b79f-4b99-b37a-1d8b5d3427e5] Running
	I0318 12:18:03.856562 1115022 system_pods.go:61] "metrics-server-69cf46c98-c6t4v" [14a9220e-3a89-4062-b0df-279973f4adc4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 12:18:03.856576 1115022 system_pods.go:61] "nvidia-device-plugin-daemonset-rpkgk" [a76e38d5-f838-4358-b060-2fa48dc532cf] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0318 12:18:03.856583 1115022 system_pods.go:61] "registry-84z6v" [f64607a4-6b93-41a9-847c-302045deae1e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0318 12:18:03.856593 1115022 system_pods.go:61] "registry-proxy-6vjv7" [7560be1a-4c63-4d89-9c1d-4654710cb74a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0318 12:18:03.856599 1115022 system_pods.go:61] "snapshot-controller-58dbcc7b99-695bj" [0b14ba9e-7a94-4045-bbcb-ac2278c08963] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0318 12:18:03.856608 1115022 system_pods.go:61] "snapshot-controller-58dbcc7b99-shr2p" [a7787f9e-e2fa-4edb-b36c-9153b70842bc] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0318 12:18:03.856612 1115022 system_pods.go:61] "storage-provisioner" [634e77d2-a06f-4449-809e-42ef7bf1fe64] Running
	I0318 12:18:03.856618 1115022 system_pods.go:61] "tiller-deploy-7b677967b9-b7fbk" [d35fb47f-9f63-4399-a25d-e920631e2d0a] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0318 12:18:03.856626 1115022 system_pods.go:74] duration metric: took 108.503336ms to wait for pod list to return data ...
	I0318 12:18:03.856636 1115022 default_sa.go:34] waiting for default service account to be created ...
	I0318 12:18:03.912267 1115022 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0318 12:18:03.912289 1115022 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0318 12:18:03.915205 1115022 default_sa.go:45] found service account: "default"
	I0318 12:18:03.915227 1115022 default_sa.go:55] duration metric: took 58.584442ms for default service account to be created ...
	I0318 12:18:03.915237 1115022 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 12:18:03.966396 1115022 system_pods.go:86] 19 kube-system pods found
	I0318 12:18:03.966441 1115022 system_pods.go:89] "coredns-5dd5756b68-57qjd" [8186c1c0-2699-4797-9153-c88651831be4] Running
	I0318 12:18:03.966446 1115022 system_pods.go:89] "coredns-5dd5756b68-nzmx6" [f5acb88a-85cc-454a-a08a-d91c5804bc13] Running
	I0318 12:18:03.966455 1115022 system_pods.go:89] "csi-hostpath-attacher-0" [763569e3-300f-4787-8125-b402081ba9a5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0318 12:18:03.966462 1115022 system_pods.go:89] "csi-hostpath-resizer-0" [595a24f3-631f-4a28-96bc-00625e1bb23d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0318 12:18:03.966470 1115022 system_pods.go:89] "csi-hostpathplugin-vlzz2" [5484ed11-1652-41fc-97c8-d220711fbade] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0318 12:18:03.966477 1115022 system_pods.go:89] "etcd-addons-015389" [66740d94-9759-4f22-8dd2-1661ee837bb1] Running
	I0318 12:18:03.966482 1115022 system_pods.go:89] "kube-apiserver-addons-015389" [63cb0c67-f49e-4c54-a8b4-f792a1ddf846] Running
	I0318 12:18:03.966487 1115022 system_pods.go:89] "kube-controller-manager-addons-015389" [74e676eb-e05c-499d-b9a3-d2a5ea123ae4] Running
	I0318 12:18:03.966493 1115022 system_pods.go:89] "kube-ingress-dns-minikube" [7aadaefe-59f6-43bd-9893-7ddefbdf53dc] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0318 12:18:03.966500 1115022 system_pods.go:89] "kube-proxy-bqn6c" [2ee1682f-85b7-46f1-9cc4-840c7af8fbc4] Running
	I0318 12:18:03.966505 1115022 system_pods.go:89] "kube-scheduler-addons-015389" [66eddbd4-b79f-4b99-b37a-1d8b5d3427e5] Running
	I0318 12:18:03.966514 1115022 system_pods.go:89] "metrics-server-69cf46c98-c6t4v" [14a9220e-3a89-4062-b0df-279973f4adc4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 12:18:03.966522 1115022 system_pods.go:89] "nvidia-device-plugin-daemonset-rpkgk" [a76e38d5-f838-4358-b060-2fa48dc532cf] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0318 12:18:03.966533 1115022 system_pods.go:89] "registry-84z6v" [f64607a4-6b93-41a9-847c-302045deae1e] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0318 12:18:03.966539 1115022 system_pods.go:89] "registry-proxy-6vjv7" [7560be1a-4c63-4d89-9c1d-4654710cb74a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0318 12:18:03.966546 1115022 system_pods.go:89] "snapshot-controller-58dbcc7b99-695bj" [0b14ba9e-7a94-4045-bbcb-ac2278c08963] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0318 12:18:03.966555 1115022 system_pods.go:89] "snapshot-controller-58dbcc7b99-shr2p" [a7787f9e-e2fa-4edb-b36c-9153b70842bc] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0318 12:18:03.966560 1115022 system_pods.go:89] "storage-provisioner" [634e77d2-a06f-4449-809e-42ef7bf1fe64] Running
	I0318 12:18:03.966566 1115022 system_pods.go:89] "tiller-deploy-7b677967b9-b7fbk" [d35fb47f-9f63-4399-a25d-e920631e2d0a] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0318 12:18:03.966573 1115022 system_pods.go:126] duration metric: took 51.331272ms to wait for k8s-apps to be running ...
	I0318 12:18:03.966584 1115022 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 12:18:03.966633 1115022 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 12:18:04.069894 1115022 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0318 12:18:04.192880 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:04.210856 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:04.231770 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:04.693951 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:04.698022 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:04.717776 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:05.205107 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:05.205207 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:05.221028 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:05.697084 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:05.699864 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:05.718169 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:05.898044 1115022 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.931383166s)
	I0318 12:18:05.898087 1115022 system_svc.go:56] duration metric: took 1.931497061s WaitForService to wait for kubelet
	I0318 12:18:05.898098 1115022 kubeadm.go:576] duration metric: took 13.74344214s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 12:18:05.898126 1115022 node_conditions.go:102] verifying NodePressure condition ...
	I0318 12:18:05.898296 1115022 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.926474334s)
	I0318 12:18:05.898364 1115022 main.go:141] libmachine: Making call to close driver server
	I0318 12:18:05.898388 1115022 main.go:141] libmachine: (addons-015389) Calling .Close
	I0318 12:18:05.898706 1115022 main.go:141] libmachine: (addons-015389) DBG | Closing plugin on server side
	I0318 12:18:05.898806 1115022 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:18:05.898822 1115022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:18:05.898848 1115022 main.go:141] libmachine: Making call to close driver server
	I0318 12:18:05.898862 1115022 main.go:141] libmachine: (addons-015389) Calling .Close
	I0318 12:18:05.899162 1115022 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:18:05.899181 1115022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:18:05.901543 1115022 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 12:18:05.901568 1115022 node_conditions.go:123] node cpu capacity is 2
	I0318 12:18:05.901582 1115022 node_conditions.go:105] duration metric: took 3.450478ms to run NodePressure ...
	I0318 12:18:05.901596 1115022 start.go:240] waiting for startup goroutines ...
	I0318 12:18:06.216089 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:06.258421 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:06.276151 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:06.401651 1115022 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.33170197s)
	I0318 12:18:06.401728 1115022 main.go:141] libmachine: Making call to close driver server
	I0318 12:18:06.401751 1115022 main.go:141] libmachine: (addons-015389) Calling .Close
	I0318 12:18:06.402266 1115022 main.go:141] libmachine: (addons-015389) DBG | Closing plugin on server side
	I0318 12:18:06.402302 1115022 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:18:06.402319 1115022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:18:06.402333 1115022 main.go:141] libmachine: Making call to close driver server
	I0318 12:18:06.402346 1115022 main.go:141] libmachine: (addons-015389) Calling .Close
	I0318 12:18:06.402577 1115022 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:18:06.402594 1115022 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:18:06.404450 1115022 addons.go:470] Verifying addon gcp-auth=true in "addons-015389"
	I0318 12:18:06.406046 1115022 out.go:177] * Verifying gcp-auth addon...
	I0318 12:18:06.408127 1115022 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0318 12:18:06.426032 1115022 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0318 12:18:06.426053 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:06.692615 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:06.701900 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:06.718738 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:06.912886 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:07.196082 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:07.197766 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:07.225443 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:07.412117 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:07.697548 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:07.699638 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:07.717797 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:07.913289 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:08.192377 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:08.195062 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:08.217172 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:08.411782 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:08.693064 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:08.696045 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:08.717591 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:08.913694 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:09.193597 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:09.197676 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:09.217426 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:09.413084 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:10.023575 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:10.023654 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:10.024431 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:10.027306 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:10.194557 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:10.196178 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:10.217535 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:10.412906 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:10.694483 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:10.696393 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:10.720267 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:10.912557 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:11.194546 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:11.196790 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:11.221241 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:11.413058 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:11.692781 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:11.696285 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:11.718546 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:11.912996 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:12.194196 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:12.196645 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:12.217359 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:12.412507 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:12.694113 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:12.696834 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:12.717740 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:12.912751 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:13.193518 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:13.197054 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:13.217253 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:13.413497 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:13.693918 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:13.697414 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:13.718668 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:13.912921 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:14.192898 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:14.196067 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:14.220079 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:14.413967 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:14.692543 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:14.695898 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:14.717595 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:14.912431 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:15.193446 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:15.196947 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:15.217906 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:15.412072 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:15.692350 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:15.695250 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:15.717888 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:15.929112 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:16.193576 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:16.196926 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:16.217395 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:16.412543 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:16.692942 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:16.696596 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:16.718316 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:16.913005 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:17.193760 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:17.199155 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:17.218248 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:17.412293 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:17.693185 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:17.697862 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:17.717392 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:17.912846 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:18.192537 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:18.197708 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:18.222197 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:18.411920 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:18.692651 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:18.695542 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:18.717212 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:18.912246 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:19.192949 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:19.196341 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:19.223453 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:19.412989 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:19.692818 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:19.695860 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:19.717886 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:19.914088 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:20.194276 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:20.197519 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:20.219845 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:20.413224 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:20.692552 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:20.695679 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:20.717344 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:20.913139 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:21.192805 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:21.196594 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:21.218584 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:21.412456 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:21.693293 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:21.696907 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:21.717766 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:21.912922 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:22.201043 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:22.201065 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:22.217161 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:22.412343 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:22.692994 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:22.697169 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:22.718592 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:22.914344 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:23.192860 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:23.195559 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:23.217370 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:23.413149 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:23.693844 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:23.698729 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:23.720947 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:23.912496 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:24.194037 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:24.200890 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:24.222716 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:24.412302 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:24.693219 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:24.696577 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:24.717448 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:24.912274 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:25.193088 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:25.196314 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:25.218132 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:25.417794 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:25.693436 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:25.696849 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:25.717439 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:25.913238 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:26.194485 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:26.198852 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:26.218297 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:26.412067 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:26.696731 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:26.702018 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:26.723000 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:26.912384 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:27.193297 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:27.196721 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:27.217697 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:27.413409 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:27.693360 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:27.701200 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:27.717889 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:27.912839 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:28.194176 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:28.198017 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:28.219660 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:28.413006 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:28.692540 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:28.702876 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:28.719119 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:29.181510 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:29.196189 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:29.198750 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:29.218408 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:29.416185 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:29.695201 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:29.697612 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:29.716655 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:29.912840 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:30.192541 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:30.195921 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:30.217589 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:30.412552 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:30.692889 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:30.696926 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:30.718149 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:30.912877 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:31.194148 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:31.204977 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:31.235393 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:31.412307 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:31.693415 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:31.696994 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:31.720391 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:31.911822 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:32.193427 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:32.196195 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:32.218358 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:32.412466 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:32.692972 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:32.696823 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:32.718426 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:32.913955 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:33.192770 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:33.196625 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:33.225218 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:33.412053 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:33.696322 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:33.704440 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:33.718137 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:33.912925 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:34.194448 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:34.197342 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:34.217939 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:34.411973 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:34.696065 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:34.702075 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:34.719751 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:34.915266 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:35.195899 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:35.196539 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:35.217311 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:35.417146 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:35.692869 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:35.696531 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:35.717179 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:35.912202 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:36.196319 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:36.198792 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:36.218379 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:36.421774 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:36.698307 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:36.700982 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:36.720456 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:36.913257 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:37.194730 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:37.200218 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:37.219220 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:37.411798 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:37.693363 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:37.700021 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:37.717536 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:37.913038 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:38.325145 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:38.326748 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:38.326815 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:38.412777 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:38.693812 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:38.700577 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:38.718203 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:38.912072 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:39.192935 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:39.196313 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:39.218837 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:39.412620 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:39.695565 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:39.697217 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:39.717904 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:39.913007 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:40.195466 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:40.208100 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:40.217625 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:40.413002 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:40.699621 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:40.699807 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:40.728686 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:40.912582 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:41.196125 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:41.200486 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:41.217985 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:41.413163 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:41.693836 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:41.695816 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:41.717913 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:41.911892 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:42.193922 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:42.197814 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0318 12:18:42.219539 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:42.412550 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:42.693586 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:42.696305 1115022 kapi.go:107] duration metric: took 40.005021073s to wait for kubernetes.io/minikube-addons=registry ...
	I0318 12:18:42.717552 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:42.912828 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:43.192108 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:43.226249 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:43.412875 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:43.696446 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:43.717977 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:43.912815 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:44.192370 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:44.218029 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:44.412551 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:44.695355 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:44.719420 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:44.913696 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:45.193490 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:45.219898 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:45.411890 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:45.693007 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:45.718372 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:45.915390 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:46.193299 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:46.218093 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:46.413000 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:46.693371 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:46.720598 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:46.912869 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:47.399453 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:47.401952 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:47.411698 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:47.697438 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:47.719964 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:47.911894 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:48.193125 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:48.219642 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:48.412280 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:48.693026 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:48.718785 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:48.912095 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:49.198456 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:49.221322 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:49.414397 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:49.694523 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:49.722060 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:49.912545 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:50.193135 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:50.225980 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:50.413070 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:50.695994 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:50.719500 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:50.912809 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:51.195285 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:51.218947 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:51.412043 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:51.692348 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:51.718645 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:51.912995 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:52.194725 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:52.223664 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:52.413135 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:52.692947 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:52.718245 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:52.913117 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:53.193314 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:53.218294 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:53.412746 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:53.693199 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:53.717681 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:53.912342 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:54.192977 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:54.218019 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:54.413043 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:54.696178 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:54.718240 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:54.912346 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:55.193267 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:55.219727 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:55.413099 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:55.692925 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:55.718092 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:55.911923 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:56.196404 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:56.218454 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:56.413485 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:56.693229 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:56.718649 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:56.914171 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:57.194450 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:57.217939 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:57.422054 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:57.694118 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:57.721032 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:57.912584 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:58.195009 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:58.217473 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:58.413105 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:58.694226 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:58.718475 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:58.912471 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:59.194422 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:59.218925 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:59.412521 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:18:59.694885 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:18:59.717241 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:18:59.912708 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:19:00.193293 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:19:00.221619 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:00.412808 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:19:00.693976 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:19:00.718828 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:00.912368 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:19:01.193125 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:19:01.217185 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:01.412278 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:19:01.693065 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:19:01.718541 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:02.012038 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:19:02.199001 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:19:02.220049 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:02.412375 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:19:02.693240 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:19:02.718081 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:02.911972 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:19:03.192748 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:19:03.221339 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:03.416124 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:19:03.693570 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:19:03.719856 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:03.912374 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:19:04.193362 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:19:04.218473 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:04.412405 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:19:04.693140 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:19:04.718195 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:04.912201 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:19:05.197358 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:19:05.218348 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:05.412264 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:19:05.705012 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:19:05.719374 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:05.912027 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:19:06.193142 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:19:06.218520 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:06.412701 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:19:06.702419 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:19:06.723785 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:06.912702 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:19:07.193917 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:19:07.218151 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:07.415447 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:19:07.693897 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:19:07.717348 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:07.912682 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:19:08.195933 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:19:08.217359 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:08.412858 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:19:08.693474 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:19:08.719604 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:08.913056 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:19:09.192939 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:19:09.224006 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:09.412902 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:19:09.696271 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:19:09.718362 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:09.911705 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:19:10.197435 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:19:10.238287 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:10.412861 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:19:10.692808 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:19:10.718130 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:10.913352 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:19:11.195145 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:19:11.218520 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:11.417770 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:19:11.693426 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:19:11.717780 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:11.915981 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:19:12.195033 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:19:12.221244 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:12.413595 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:19:12.705422 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:19:12.719764 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:12.912627 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:19:13.193059 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:19:13.224675 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:13.412820 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:19:13.693090 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:19:13.719135 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:13.912360 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:19:14.193239 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:19:14.218071 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:14.412777 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:19:15.048083 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:19:15.048971 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:19:15.049485 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:15.193184 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:19:15.218732 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:15.413118 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:19:15.692710 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:19:15.718811 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:15.913189 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:19:16.193019 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:19:16.217806 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:16.413474 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:19:16.693055 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:19:16.718805 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:16.914069 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:19:17.194469 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:19:17.229124 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:17.412996 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:19:17.946447 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:19:17.946768 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:19:17.947136 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:18.193069 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:19:18.220037 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:18.417308 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:19:18.692713 1115022 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0318 12:19:18.717900 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:18.912508 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:19:19.193574 1115022 kapi.go:107] duration metric: took 1m16.509476206s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0318 12:19:19.222430 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:19.415569 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:19:19.720619 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:19.913064 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:19:20.219287 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:20.412014 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:19:20.719661 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:20.913486 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:19:21.219466 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:21.412349 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:19:21.719102 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:21.912890 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:19:22.217185 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:22.412056 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:19:22.717836 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:22.912273 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:19:23.226470 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:23.413441 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:19:23.718513 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:23.912657 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0318 12:19:24.218682 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:24.412738 1115022 kapi.go:107] duration metric: took 1m18.00460566s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0318 12:19:24.414356 1115022 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-015389 cluster.
	I0318 12:19:24.416080 1115022 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0318 12:19:24.417352 1115022 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0318 12:19:24.722545 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:25.219277 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:25.720842 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:26.218453 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:26.718613 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:27.224717 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:27.737290 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:28.218363 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:28.718994 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:29.217598 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:29.718136 1115022 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0318 12:19:30.221768 1115022 kapi.go:107] duration metric: took 1m26.509764524s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0318 12:19:30.223465 1115022 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, storage-provisioner, yakd, metrics-server, nvidia-device-plugin, inspektor-gadget, helm-tiller, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0318 12:19:30.224831 1115022 addons.go:505] duration metric: took 1m38.070148329s for enable addons: enabled=[cloud-spanner ingress-dns storage-provisioner yakd metrics-server nvidia-device-plugin inspektor-gadget helm-tiller storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0318 12:19:30.224872 1115022 start.go:245] waiting for cluster config update ...
	I0318 12:19:30.224892 1115022 start.go:254] writing updated cluster config ...
	I0318 12:19:30.225180 1115022 ssh_runner.go:195] Run: rm -f paused
	I0318 12:19:30.282816 1115022 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 12:19:30.284570 1115022 out.go:177] * Done! kubectl is now configured to use "addons-015389" cluster and "default" namespace by default
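
The gcp-auth messages in the log above describe two ways to control credential mounting. As an illustrative sketch only (not part of the captured test output), a pod that should not receive the mounted GCP credentials carries the gcp-auth-skip-secret label named in the log; the pod name and image below are placeholders:

    apiVersion: v1
    kind: Pod
    metadata:
      name: example-no-gcp-creds        # hypothetical name, for illustration only
      labels:
        gcp-auth-skip-secret: "true"    # label key and value as seen on sandbox labels later in this log
    spec:
      containers:
      - name: app
        image: nginx                     # placeholder image

For pods that already existed when the addon was enabled, the same log output notes that credentials are only picked up after the pods are recreated or after rerunning addons enable with --refresh (e.g. minikube addons enable gcp-auth --refresh).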
	
	
	==> CRI-O <==
	Mar 18 12:22:39 addons-015389 crio[681]: time="2024-03-18 12:22:39.400242026Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a997b183-de84-4f96-b66d-002329f521ee name=/runtime.v1.RuntimeService/Version
	Mar 18 12:22:39 addons-015389 crio[681]: time="2024-03-18 12:22:39.407067866Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3b109e7d-125e-4bd8-979e-21481ba779ec name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 12:22:39 addons-015389 crio[681]: time="2024-03-18 12:22:39.407768023Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=a44702a8-be90-4eff-a297-91a63ce878d9 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 18 12:22:39 addons-015389 crio[681]: time="2024-03-18 12:22:39.408131877Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:4229080d7df335d9ec9a36a76d94f57708abb369501170795d40e1a3434047b8,Metadata:&PodSandboxMetadata{Name:hello-world-app-5d77478584-2r7sr,Uid:4e384ba7-781b-4ee6-bbdd-de3cffa849ed,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710764548607781526,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5d77478584-2r7sr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4e384ba7-781b-4ee6-bbdd-de3cffa849ed,pod-template-hash: 5d77478584,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-18T12:22:28.297637243Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4344b5e5eb17500912cb8bb8669a52b13fc98b07ecde9b81dddc49695f8d210b,Metadata:&PodSandboxMetadata{Name:nginx,Uid:0fbd4ea0-863b-41bc-b038-eb32bc6f8df0,Namespace:default,Attempt:0,}
,State:SANDBOX_READY,CreatedAt:1710764402733205122,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0fbd4ea0-863b-41bc-b038-eb32bc6f8df0,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-18T12:20:02.375959535Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6d7ce0361ad64ce828fbd21d90198843c3ccd506e97a1d6f80f138d2607052d9,Metadata:&PodSandboxMetadata{Name:headlamp-5485c556b-dldvc,Uid:1f47d369-0ecf-4187-90d7-d83d291ad4c3,Namespace:headlamp,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710764389413506902,Labels:map[string]string{app.kubernetes.io/instance: headlamp,app.kubernetes.io/name: headlamp,io.kubernetes.container.name: POD,io.kubernetes.pod.name: headlamp-5485c556b-dldvc,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 1f47d369-0ecf-4187-90d7-d83d291ad4c3,pod-template-hash: 5485c556b,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-
18T12:19:49.099253755Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:db4ca2f319a4a5e325ea5623aabb981f64948a41142ef867f0921b06aee57baf,Metadata:&PodSandboxMetadata{Name:gcp-auth-7d69788767-kghrb,Uid:55d99b90-d227-4668-ab36-92bbf71ea9f0,Namespace:gcp-auth,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710764350580625062,Labels:map[string]string{app: gcp-auth,io.kubernetes.container.name: POD,io.kubernetes.pod.name: gcp-auth-7d69788767-kghrb,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 55d99b90-d227-4668-ab36-92bbf71ea9f0,kubernetes.io/minikube-addons: gcp-auth,pod-template-hash: 7d69788767,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-18T12:18:06.215302654Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:81db120d5ccb5488b60770eea3250ec934c067c47e5f8ff631dcb2fcc46122ce,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-76dc478dd8-77kql,Uid:cdf11b41-bccc-436b-8992-e36f81c00621,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTRE
ADY,CreatedAt:1710764347334241162,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-76dc478dd8-77kql,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: cdf11b41-bccc-436b-8992-e36f81c00621,pod-template-hash: 76dc478dd8,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-18T12:18:02.521051868Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3a79aeaa103b355045c11cdf6db58b65cf9cc86856cf16ae7072696bc2aad64a,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-patch-6rzxd,Uid:51f4af1b-be13-4f8f-bb8f-f2737d85d602,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1710764282943278375,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kube
rnetes.io/controller-uid: 5d9b3bf7-0f24-4b26-a143-4d72112c125e,batch.kubernetes.io/job-name: ingress-nginx-admission-patch,controller-uid: 5d9b3bf7-0f24-4b26-a143-4d72112c125e,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-patch-6rzxd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 51f4af1b-be13-4f8f-bb8f-f2737d85d602,job-name: ingress-nginx-admission-patch,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-18T12:18:02.629038545Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:88bb1d8bff1609a1abbabc2275742c891cf268223a929506de58d69762821ad1,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-create-x25c7,Uid:6f404b63-071f-452e-91b6-0a4007f2b89a,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1710764282918934724,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-uid:
1675c787-e9d0-4ca1-8bc3-f5936cf7807d,batch.kubernetes.io/job-name: ingress-nginx-admission-create,controller-uid: 1675c787-e9d0-4ca1-8bc3-f5936cf7807d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-create-x25c7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6f404b63-071f-452e-91b6-0a4007f2b89a,job-name: ingress-nginx-admission-create,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-18T12:18:02.610995882Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:16867d16e29fd84718b82c2bde5eeb73945ebe819c0d0b6561a378ab82ddd0b9,Metadata:&PodSandboxMetadata{Name:yakd-dashboard-9947fc6bf-c75dl,Uid:5d0ea5b5-2aa5-4352-8953-b4ecce9ad581,Namespace:yakd-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710764281304994300,Labels:map[string]string{app.kubernetes.io/instance: yakd-dashboard,app.kubernetes.io/name: yakd-dashboard,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-
c75dl,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 5d0ea5b5-2aa5-4352-8953-b4ecce9ad581,pod-template-hash: 9947fc6bf,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-18T12:18:00.945112933Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8d4844765f4a8e86cb83919cd92bdc71f9a3fcd5b30e36555e60bb20848251cb,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:634e77d2-a06f-4449-809e-42ef7bf1fe64,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710764280784752650,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 634e77d2-a06f-4449-809e-42ef7bf1fe64,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mo
de\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-03-18T12:18:00.462875808Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9676b7b5d95523d62b9933e61f35555eac0f5887fa65ce6c0154a293f0add2f8,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:7aadaefe-59f6-43bd-9893-7ddefbdf53dc,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1710764279532428674,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubern
etes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7aadaefe-59f6-43bd-9893-7ddefbdf53dc,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\":[{\"containerPort\":53,\"protocol\":\"UDP\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\"}}\n,kubernetes.io/config.seen: 2024-03
-18T12:17:58.882173615Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ae039dc14e9f49ea7be7f6c00dffa0fd2ee86e56925e9fe61085c3a3eb5f3f8a,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-57qjd,Uid:8186c1c0-2699-4797-9153-c88651831be4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710764273193381593,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-57qjd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8186c1c0-2699-4797-9153-c88651831be4,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-18T12:17:51.978426493Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:879ebcd0220cad7636760adeb246fb712fb109a2c3c35b9cb96b409205c85058,Metadata:&PodSandboxMetadata{Name:kube-proxy-bqn6c,Uid:2ee1682f-85b7-46f1-9cc4-840c7af8fbc4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710764272419063798,Labels:map[string]string{co
ntroller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-bqn6c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee1682f-85b7-46f1-9cc4-840c7af8fbc4,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-18T12:17:51.488160011Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8a488bd236842e4d3bce85db9c4292d20f3d9cfd23b140e43c624dc5fd2fe0c1,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-015389,Uid:683d15c08eb6b312d4a70f3ee8c4f2b2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710764253022640835,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-015389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 683d15c08eb6b312d4a70f3ee8c4f2b2,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 683d15c08eb6b312d4
a70f3ee8c4f2b2,kubernetes.io/config.seen: 2024-03-18T12:17:32.535826743Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:35698080dd2b195bcb829a23eb3a5f68005db57869d47dd2000e755e15812da2,Metadata:&PodSandboxMetadata{Name:etcd-addons-015389,Uid:b22e282256b2bd7dc9c9090ca3f563b8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710764253010479584,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-015389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b22e282256b2bd7dc9c9090ca3f563b8,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.94:2379,kubernetes.io/config.hash: b22e282256b2bd7dc9c9090ca3f563b8,kubernetes.io/config.seen: 2024-03-18T12:17:32.535821131Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:78ec2e7118a48b2b963ed01cd80257dd11d2ed7d19e04ae5b3ec3d33c21b9156,Metadata:&PodSandboxMetadata{Name:kube-schedule
r-addons-015389,Uid:9f3de7ab5e16f155250eed40ede4a975,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710764253006240861,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-015389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f3de7ab5e16f155250eed40ede4a975,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 9f3de7ab5e16f155250eed40ede4a975,kubernetes.io/config.seen: 2024-03-18T12:17:32.535827628Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bf98862b4c37993cb091fa8121e3773e42037c6cf44b94032747817d241e5a7e,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-015389,Uid:55af85acd15b46ba94e9b11d26cdde2d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710764253004967628,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-015389,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 55af85acd15b46ba94e9b11d26cdde2d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.94:8443,kubernetes.io/config.hash: 55af85acd15b46ba94e9b11d26cdde2d,kubernetes.io/config.seen: 2024-03-18T12:17:32.535825545Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=a44702a8-be90-4eff-a297-91a63ce878d9 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 18 12:22:39 addons-015389 crio[681]: time="2024-03-18 12:22:39.408434487Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710764559408408102,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:564136,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3b109e7d-125e-4bd8-979e-21481ba779ec name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 12:22:39 addons-015389 crio[681]: time="2024-03-18 12:22:39.409628067Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=da9cd1fc-2c23-40f7-9404-517556e6bcc4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:22:39 addons-015389 crio[681]: time="2024-03-18 12:22:39.409686587Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=da9cd1fc-2c23-40f7-9404-517556e6bcc4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:22:39 addons-015389 crio[681]: time="2024-03-18 12:22:39.411481084Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3c5608d878ac672802b358be490c3ae5cf3fdf187e6bd74a14e50da86bc97b1e,PodSandboxId:4229080d7df335d9ec9a36a76d94f57708abb369501170795d40e1a3434047b8,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1710764551841539161,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-2r7sr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4e384ba7-781b-4ee6-bbdd-de3cffa849ed,},Annotations:map[string]string{io.kubernetes.container.hash: 534188a2,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5820bd81ad28c91b5ba7f434605745180cffa17f458bf2c5c388a0bbe96e87f7,PodSandboxId:4344b5e5eb17500912cb8bb8669a52b13fc98b07ecde9b81dddc49695f8d210b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:02d8d94023878cedf3e3acc55372932a9ba1478b6e2f3357786d916c2af743ba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608,State:CONTAINER_RUNNING,CreatedAt:1710764412544319609,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0fbd4ea0-863b-41bc-b038-eb32bc6f8df0,},Annotations:map[string]string{io.kubern
etes.container.hash: 8099d75f,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b371753ecb3aa76edbf882342ff7d8555d465eb85b43f95007a42d689087b837,PodSandboxId:6d7ce0361ad64ce828fbd21d90198843c3ccd506e97a1d6f80f138d2607052d9,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:19628eec9aaecf7944a049b6ab67f45e818365b9ec68cb7808ce1f6feb52d750,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dfaa4a7414123ef23c2a89f87227d62b5ee118efc46f47647b2c9f77508e67b4,State:CONTAINER_RUNNING,CreatedAt:1710764396230451932,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5485c556b-dldvc,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 1f47d369-0ecf-4187-90d7-d83d291ad4c3,},Annotations:map[string]string{io.kubernetes.container.hash: b59bb08b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc05a6a42b5d10911a5db61b410f4b6e67e1e1dc7ddd9f013948da4d890265d1,PodSandboxId:db4ca2f319a4a5e325ea5623aabb981f64948a41142ef867f0921b06aee57baf,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1710764363173395656,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-7d69788767-kghrb,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 55d99b90-d227-4668-ab36-92bbf71ea9f0,},Annotations:map[string]string{io.kubernetes.container.hash: 78044ea2,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abfdffb41e64f6d5181487a690b5dea6d56d92674e7ef602814eacfde8e5b7c5,PodSandboxId:3a79aeaa103b355045c11cdf6db58b65cf9cc86856cf16ae7072696bc2aad64a,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAI
NER_EXITED,CreatedAt:1710764345072682794,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-6rzxd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 51f4af1b-be13-4f8f-bb8f-f2737d85d602,},Annotations:map[string]string{io.kubernetes.container.hash: 9781b340,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c9657ed8de70a2c8d74e6d6628be223ce0e73a8d183222590347e784b19bf4,PodSandboxId:88bb1d8bff1609a1abbabc2275742c891cf268223a929506de58d69762821ad1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4
a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1710764343269624405,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-x25c7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6f404b63-071f-452e-91b6-0a4007f2b89a,},Annotations:map[string]string{io.kubernetes.container.hash: 24cb09b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67001736f9e27e452097a26ee8cf92c113205021973d10be01de47caf77dba46,PodSandboxId:16867d16e29fd84718b82c2bde5eeb73945ebe819c0d0b6561a378ab82ddd0b9,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb
18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1710764335337159056,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-c75dl,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 5d0ea5b5-2aa5-4352-8953-b4ecce9ad581,},Annotations:map[string]string{io.kubernetes.container.hash: a3df242b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5a0cb2efada5b1b355bd754c5df236c1171f3301509c50102fe5d9f6a25b64b,PodSandboxId:8d4844765f4a8e86cb83919cd92bdc71f9a3fcd5b30e36555e60bb20848251cb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710764281669380107,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 634e77d2-a06f-4449-809e-42ef7bf1fe64,},Annotations:map[string]string{io.kubernetes.container.hash: dff1e82a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff7c094c527a00a92f194d91973efc4d02dd3535d3c8f1f304177af41bec5018,PodSandboxId:ae039dc14e9f49ea7be7f6c00dffa0fd2ee86e56925e9fe61085c3a3eb5f3f8a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710764274891295802,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-57qjd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8186c1c0-2699-4797-9153-c88651831be4,},Annotations:map[string]string{io.kubernetes.container.hash: 6083b549,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df388536a08a5c93bac16df4307c299808fda8df0f568fae9ae4b3955d5d172d,PodSandboxId:879ebcd0220cad7636760adeb246fb712fb109a2c3c35b9cb96b409205c8505
8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710764273004200447,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bqn6c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee1682f-85b7-46f1-9cc4-840c7af8fbc4,},Annotations:map[string]string{io.kubernetes.container.hash: 76e8c22e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1509538801fab51d6cdcfbdf80503764cae7ecef0c2169b333b24ede9602cb87,PodSandboxId:bf98862b4c37993cb091fa8121e3773e42037c6cf44b94032747817d241e5a7e,Metadata:&ContainerMetadata{Nam
e:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710764253279232410,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-015389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55af85acd15b46ba94e9b11d26cdde2d,},Annotations:map[string]string{io.kubernetes.container.hash: 5548344a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d878af41b5d389e50498cf9a83fc55a97bd93ec897fc750d04641110033ee059,PodSandboxId:35698080dd2b195bcb829a23eb3a5f68005db57869d47dd2000e755e15812da2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,
},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710764253273940568,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-015389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b22e282256b2bd7dc9c9090ca3f563b8,},Annotations:map[string]string{io.kubernetes.container.hash: aa5dabf2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3fafc688c73ab5f47a9f148a8e09338b4f1a3c2ac025c76adca9e408a620ea6,PodSandboxId:78ec2e7118a48b2b963ed01cd80257dd11d2ed7d19e04ae5b3ec3d33c21b9156,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc
065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710764253277492798,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-015389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f3de7ab5e16f155250eed40ede4a975,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2bf1c53c81fb29a2c2c548a15b825e5ab862b18e1e4ed0d5b0c462ea986a021,PodSandboxId:8a488bd236842e4d3bce85db9c4292d20f3d9cfd23b140e43c624dc5fd2fe0c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d2
5e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710764253203448802,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-015389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 683d15c08eb6b312d4a70f3ee8c4f2b2,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=da9cd1fc-2c23-40f7-9404-517556e6bcc4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:22:39 addons-015389 crio[681]: time="2024-03-18 12:22:39.419521423Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d348a6de-2a5a-4c6f-80ea-57a7c68a4071 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:22:39 addons-015389 crio[681]: time="2024-03-18 12:22:39.420238961Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d348a6de-2a5a-4c6f-80ea-57a7c68a4071 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:22:39 addons-015389 crio[681]: time="2024-03-18 12:22:39.420509153Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3c5608d878ac672802b358be490c3ae5cf3fdf187e6bd74a14e50da86bc97b1e,PodSandboxId:4229080d7df335d9ec9a36a76d94f57708abb369501170795d40e1a3434047b8,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1710764551841539161,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-2r7sr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4e384ba7-781b-4ee6-bbdd-de3cffa849ed,},Annotations:map[string]string{io.kubernetes.container.hash: 534188a2,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5820bd81ad28c91b5ba7f434605745180cffa17f458bf2c5c388a0bbe96e87f7,PodSandboxId:4344b5e5eb17500912cb8bb8669a52b13fc98b07ecde9b81dddc49695f8d210b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:02d8d94023878cedf3e3acc55372932a9ba1478b6e2f3357786d916c2af743ba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608,State:CONTAINER_RUNNING,CreatedAt:1710764412544319609,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0fbd4ea0-863b-41bc-b038-eb32bc6f8df0,},Annotations:map[string]string{io.kubern
etes.container.hash: 8099d75f,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b371753ecb3aa76edbf882342ff7d8555d465eb85b43f95007a42d689087b837,PodSandboxId:6d7ce0361ad64ce828fbd21d90198843c3ccd506e97a1d6f80f138d2607052d9,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:19628eec9aaecf7944a049b6ab67f45e818365b9ec68cb7808ce1f6feb52d750,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dfaa4a7414123ef23c2a89f87227d62b5ee118efc46f47647b2c9f77508e67b4,State:CONTAINER_RUNNING,CreatedAt:1710764396230451932,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5485c556b-dldvc,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 1f47d369-0ecf-4187-90d7-d83d291ad4c3,},Annotations:map[string]string{io.kubernetes.container.hash: b59bb08b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc05a6a42b5d10911a5db61b410f4b6e67e1e1dc7ddd9f013948da4d890265d1,PodSandboxId:db4ca2f319a4a5e325ea5623aabb981f64948a41142ef867f0921b06aee57baf,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1710764363173395656,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-7d69788767-kghrb,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 55d99b90-d227-4668-ab36-92bbf71ea9f0,},Annotations:map[string]string{io.kubernetes.container.hash: 78044ea2,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abfdffb41e64f6d5181487a690b5dea6d56d92674e7ef602814eacfde8e5b7c5,PodSandboxId:3a79aeaa103b355045c11cdf6db58b65cf9cc86856cf16ae7072696bc2aad64a,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAI
NER_EXITED,CreatedAt:1710764345072682794,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-6rzxd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 51f4af1b-be13-4f8f-bb8f-f2737d85d602,},Annotations:map[string]string{io.kubernetes.container.hash: 9781b340,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c9657ed8de70a2c8d74e6d6628be223ce0e73a8d183222590347e784b19bf4,PodSandboxId:88bb1d8bff1609a1abbabc2275742c891cf268223a929506de58d69762821ad1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4
a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1710764343269624405,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-x25c7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6f404b63-071f-452e-91b6-0a4007f2b89a,},Annotations:map[string]string{io.kubernetes.container.hash: 24cb09b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67001736f9e27e452097a26ee8cf92c113205021973d10be01de47caf77dba46,PodSandboxId:16867d16e29fd84718b82c2bde5eeb73945ebe819c0d0b6561a378ab82ddd0b9,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb
18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1710764335337159056,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-c75dl,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 5d0ea5b5-2aa5-4352-8953-b4ecce9ad581,},Annotations:map[string]string{io.kubernetes.container.hash: a3df242b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5a0cb2efada5b1b355bd754c5df236c1171f3301509c50102fe5d9f6a25b64b,PodSandboxId:8d4844765f4a8e86cb83919cd92bdc71f9a3fcd5b30e36555e60bb20848251cb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710764281669380107,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 634e77d2-a06f-4449-809e-42ef7bf1fe64,},Annotations:map[string]string{io.kubernetes.container.hash: dff1e82a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff7c094c527a00a92f194d91973efc4d02dd3535d3c8f1f304177af41bec5018,PodSandboxId:ae039dc14e9f49ea7be7f6c00dffa0fd2ee86e56925e9fe61085c3a3eb5f3f8a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710764274891295802,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-57qjd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8186c1c0-2699-4797-9153-c88651831be4,},Annotations:map[string]string{io.kubernetes.container.hash: 6083b549,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df388536a08a5c93bac16df4307c299808fda8df0f568fae9ae4b3955d5d172d,PodSandboxId:879ebcd0220cad7636760adeb246fb712fb109a2c3c35b9cb96b409205c8505
8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710764273004200447,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bqn6c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee1682f-85b7-46f1-9cc4-840c7af8fbc4,},Annotations:map[string]string{io.kubernetes.container.hash: 76e8c22e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1509538801fab51d6cdcfbdf80503764cae7ecef0c2169b333b24ede9602cb87,PodSandboxId:bf98862b4c37993cb091fa8121e3773e42037c6cf44b94032747817d241e5a7e,Metadata:&ContainerMetadata{Nam
e:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710764253279232410,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-015389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55af85acd15b46ba94e9b11d26cdde2d,},Annotations:map[string]string{io.kubernetes.container.hash: 5548344a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d878af41b5d389e50498cf9a83fc55a97bd93ec897fc750d04641110033ee059,PodSandboxId:35698080dd2b195bcb829a23eb3a5f68005db57869d47dd2000e755e15812da2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,
},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710764253273940568,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-015389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b22e282256b2bd7dc9c9090ca3f563b8,},Annotations:map[string]string{io.kubernetes.container.hash: aa5dabf2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3fafc688c73ab5f47a9f148a8e09338b4f1a3c2ac025c76adca9e408a620ea6,PodSandboxId:78ec2e7118a48b2b963ed01cd80257dd11d2ed7d19e04ae5b3ec3d33c21b9156,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc
065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710764253277492798,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-015389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f3de7ab5e16f155250eed40ede4a975,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2bf1c53c81fb29a2c2c548a15b825e5ab862b18e1e4ed0d5b0c462ea986a021,PodSandboxId:8a488bd236842e4d3bce85db9c4292d20f3d9cfd23b140e43c624dc5fd2fe0c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d2
5e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710764253203448802,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-015389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 683d15c08eb6b312d4a70f3ee8c4f2b2,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d348a6de-2a5a-4c6f-80ea-57a7c68a4071 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:22:39 addons-015389 crio[681]: time="2024-03-18 12:22:39.464641312Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fc6e22d2-f3ba-4f3c-8806-3503cfaa7f07 name=/runtime.v1.RuntimeService/Version
	Mar 18 12:22:39 addons-015389 crio[681]: time="2024-03-18 12:22:39.464718527Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fc6e22d2-f3ba-4f3c-8806-3503cfaa7f07 name=/runtime.v1.RuntimeService/Version
	Mar 18 12:22:39 addons-015389 crio[681]: time="2024-03-18 12:22:39.465970013Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3800322f-555a-41dd-b13f-9c2eff986596 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 12:22:39 addons-015389 crio[681]: time="2024-03-18 12:22:39.467239343Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710764559467214642,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:564136,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3800322f-555a-41dd-b13f-9c2eff986596 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 12:22:39 addons-015389 crio[681]: time="2024-03-18 12:22:39.467800849Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6c6875f7-ad40-4606-a3a4-48851bea0daa name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:22:39 addons-015389 crio[681]: time="2024-03-18 12:22:39.467860825Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6c6875f7-ad40-4606-a3a4-48851bea0daa name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:22:39 addons-015389 crio[681]: time="2024-03-18 12:22:39.468380931Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3c5608d878ac672802b358be490c3ae5cf3fdf187e6bd74a14e50da86bc97b1e,PodSandboxId:4229080d7df335d9ec9a36a76d94f57708abb369501170795d40e1a3434047b8,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1710764551841539161,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-2r7sr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4e384ba7-781b-4ee6-bbdd-de3cffa849ed,},Annotations:map[string]string{io.kubernetes.container.hash: 534188a2,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5820bd81ad28c91b5ba7f434605745180cffa17f458bf2c5c388a0bbe96e87f7,PodSandboxId:4344b5e5eb17500912cb8bb8669a52b13fc98b07ecde9b81dddc49695f8d210b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:02d8d94023878cedf3e3acc55372932a9ba1478b6e2f3357786d916c2af743ba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608,State:CONTAINER_RUNNING,CreatedAt:1710764412544319609,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0fbd4ea0-863b-41bc-b038-eb32bc6f8df0,},Annotations:map[string]string{io.kubern
etes.container.hash: 8099d75f,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b371753ecb3aa76edbf882342ff7d8555d465eb85b43f95007a42d689087b837,PodSandboxId:6d7ce0361ad64ce828fbd21d90198843c3ccd506e97a1d6f80f138d2607052d9,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:19628eec9aaecf7944a049b6ab67f45e818365b9ec68cb7808ce1f6feb52d750,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dfaa4a7414123ef23c2a89f87227d62b5ee118efc46f47647b2c9f77508e67b4,State:CONTAINER_RUNNING,CreatedAt:1710764396230451932,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5485c556b-dldvc,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 1f47d369-0ecf-4187-90d7-d83d291ad4c3,},Annotations:map[string]string{io.kubernetes.container.hash: b59bb08b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc05a6a42b5d10911a5db61b410f4b6e67e1e1dc7ddd9f013948da4d890265d1,PodSandboxId:db4ca2f319a4a5e325ea5623aabb981f64948a41142ef867f0921b06aee57baf,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1710764363173395656,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-7d69788767-kghrb,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 55d99b90-d227-4668-ab36-92bbf71ea9f0,},Annotations:map[string]string{io.kubernetes.container.hash: 78044ea2,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abfdffb41e64f6d5181487a690b5dea6d56d92674e7ef602814eacfde8e5b7c5,PodSandboxId:3a79aeaa103b355045c11cdf6db58b65cf9cc86856cf16ae7072696bc2aad64a,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAI
NER_EXITED,CreatedAt:1710764345072682794,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-6rzxd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 51f4af1b-be13-4f8f-bb8f-f2737d85d602,},Annotations:map[string]string{io.kubernetes.container.hash: 9781b340,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c9657ed8de70a2c8d74e6d6628be223ce0e73a8d183222590347e784b19bf4,PodSandboxId:88bb1d8bff1609a1abbabc2275742c891cf268223a929506de58d69762821ad1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4
a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1710764343269624405,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-x25c7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6f404b63-071f-452e-91b6-0a4007f2b89a,},Annotations:map[string]string{io.kubernetes.container.hash: 24cb09b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67001736f9e27e452097a26ee8cf92c113205021973d10be01de47caf77dba46,PodSandboxId:16867d16e29fd84718b82c2bde5eeb73945ebe819c0d0b6561a378ab82ddd0b9,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb
18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1710764335337159056,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-c75dl,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 5d0ea5b5-2aa5-4352-8953-b4ecce9ad581,},Annotations:map[string]string{io.kubernetes.container.hash: a3df242b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5a0cb2efada5b1b355bd754c5df236c1171f3301509c50102fe5d9f6a25b64b,PodSandboxId:8d4844765f4a8e86cb83919cd92bdc71f9a3fcd5b30e36555e60bb20848251cb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710764281669380107,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 634e77d2-a06f-4449-809e-42ef7bf1fe64,},Annotations:map[string]string{io.kubernetes.container.hash: dff1e82a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff7c094c527a00a92f194d91973efc4d02dd3535d3c8f1f304177af41bec5018,PodSandboxId:ae039dc14e9f49ea7be7f6c00dffa0fd2ee86e56925e9fe61085c3a3eb5f3f8a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710764274891295802,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-57qjd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8186c1c0-2699-4797-9153-c88651831be4,},Annotations:map[string]string{io.kubernetes.container.hash: 6083b549,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df388536a08a5c93bac16df4307c299808fda8df0f568fae9ae4b3955d5d172d,PodSandboxId:879ebcd0220cad7636760adeb246fb712fb109a2c3c35b9cb96b409205c8505
8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710764273004200447,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bqn6c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee1682f-85b7-46f1-9cc4-840c7af8fbc4,},Annotations:map[string]string{io.kubernetes.container.hash: 76e8c22e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1509538801fab51d6cdcfbdf80503764cae7ecef0c2169b333b24ede9602cb87,PodSandboxId:bf98862b4c37993cb091fa8121e3773e42037c6cf44b94032747817d241e5a7e,Metadata:&ContainerMetadata{Nam
e:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710764253279232410,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-015389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55af85acd15b46ba94e9b11d26cdde2d,},Annotations:map[string]string{io.kubernetes.container.hash: 5548344a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d878af41b5d389e50498cf9a83fc55a97bd93ec897fc750d04641110033ee059,PodSandboxId:35698080dd2b195bcb829a23eb3a5f68005db57869d47dd2000e755e15812da2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,
},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710764253273940568,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-015389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b22e282256b2bd7dc9c9090ca3f563b8,},Annotations:map[string]string{io.kubernetes.container.hash: aa5dabf2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3fafc688c73ab5f47a9f148a8e09338b4f1a3c2ac025c76adca9e408a620ea6,PodSandboxId:78ec2e7118a48b2b963ed01cd80257dd11d2ed7d19e04ae5b3ec3d33c21b9156,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc
065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710764253277492798,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-015389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f3de7ab5e16f155250eed40ede4a975,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2bf1c53c81fb29a2c2c548a15b825e5ab862b18e1e4ed0d5b0c462ea986a021,PodSandboxId:8a488bd236842e4d3bce85db9c4292d20f3d9cfd23b140e43c624dc5fd2fe0c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d2
5e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710764253203448802,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-015389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 683d15c08eb6b312d4a70f3ee8c4f2b2,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6c6875f7-ad40-4606-a3a4-48851bea0daa name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:22:39 addons-015389 crio[681]: time="2024-03-18 12:22:39.511200122Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8da3a3c5-56c6-48ea-8f3c-5661142a7d43 name=/runtime.v1.RuntimeService/Version
	Mar 18 12:22:39 addons-015389 crio[681]: time="2024-03-18 12:22:39.511297020Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8da3a3c5-56c6-48ea-8f3c-5661142a7d43 name=/runtime.v1.RuntimeService/Version
	Mar 18 12:22:39 addons-015389 crio[681]: time="2024-03-18 12:22:39.512672981Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1a094f5b-af18-4feb-ae46-e3ad9a60b6c7 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 12:22:39 addons-015389 crio[681]: time="2024-03-18 12:22:39.513883488Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710764559513859114,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:564136,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1a094f5b-af18-4feb-ae46-e3ad9a60b6c7 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 12:22:39 addons-015389 crio[681]: time="2024-03-18 12:22:39.514645504Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=207a1df1-9476-4734-89dd-261511da4c72 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:22:39 addons-015389 crio[681]: time="2024-03-18 12:22:39.514700861Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=207a1df1-9476-4734-89dd-261511da4c72 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:22:39 addons-015389 crio[681]: time="2024-03-18 12:22:39.515104804Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:3c5608d878ac672802b358be490c3ae5cf3fdf187e6bd74a14e50da86bc97b1e,PodSandboxId:4229080d7df335d9ec9a36a76d94f57708abb369501170795d40e1a3434047b8,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1710764551841539161,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-2r7sr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4e384ba7-781b-4ee6-bbdd-de3cffa849ed,},Annotations:map[string]string{io.kubernetes.container.hash: 534188a2,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5820bd81ad28c91b5ba7f434605745180cffa17f458bf2c5c388a0bbe96e87f7,PodSandboxId:4344b5e5eb17500912cb8bb8669a52b13fc98b07ecde9b81dddc49695f8d210b,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:02d8d94023878cedf3e3acc55372932a9ba1478b6e2f3357786d916c2af743ba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e289a478ace02cd72f0a71a5b2ec0594495e1fae85faa10aae3b0da530812608,State:CONTAINER_RUNNING,CreatedAt:1710764412544319609,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0fbd4ea0-863b-41bc-b038-eb32bc6f8df0,},Annotations:map[string]string{io.kubern
etes.container.hash: 8099d75f,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b371753ecb3aa76edbf882342ff7d8555d465eb85b43f95007a42d689087b837,PodSandboxId:6d7ce0361ad64ce828fbd21d90198843c3ccd506e97a1d6f80f138d2607052d9,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:19628eec9aaecf7944a049b6ab67f45e818365b9ec68cb7808ce1f6feb52d750,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dfaa4a7414123ef23c2a89f87227d62b5ee118efc46f47647b2c9f77508e67b4,State:CONTAINER_RUNNING,CreatedAt:1710764396230451932,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5485c556b-dldvc,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 1f47d369-0ecf-4187-90d7-d83d291ad4c3,},Annotations:map[string]string{io.kubernetes.container.hash: b59bb08b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc05a6a42b5d10911a5db61b410f4b6e67e1e1dc7ddd9f013948da4d890265d1,PodSandboxId:db4ca2f319a4a5e325ea5623aabb981f64948a41142ef867f0921b06aee57baf,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1710764363173395656,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-7d69788767-kghrb,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 55d99b90-d227-4668-ab36-92bbf71ea9f0,},Annotations:map[string]string{io.kubernetes.container.hash: 78044ea2,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:abfdffb41e64f6d5181487a690b5dea6d56d92674e7ef602814eacfde8e5b7c5,PodSandboxId:3a79aeaa103b355045c11cdf6db58b65cf9cc86856cf16ae7072696bc2aad64a,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAI
NER_EXITED,CreatedAt:1710764345072682794,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-6rzxd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 51f4af1b-be13-4f8f-bb8f-f2737d85d602,},Annotations:map[string]string{io.kubernetes.container.hash: 9781b340,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:48c9657ed8de70a2c8d74e6d6628be223ce0e73a8d183222590347e784b19bf4,PodSandboxId:88bb1d8bff1609a1abbabc2275742c891cf268223a929506de58d69762821ad1,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4
a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1710764343269624405,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-x25c7,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 6f404b63-071f-452e-91b6-0a4007f2b89a,},Annotations:map[string]string{io.kubernetes.container.hash: 24cb09b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67001736f9e27e452097a26ee8cf92c113205021973d10be01de47caf77dba46,PodSandboxId:16867d16e29fd84718b82c2bde5eeb73945ebe819c0d0b6561a378ab82ddd0b9,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb
18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1710764335337159056,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-c75dl,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 5d0ea5b5-2aa5-4352-8953-b4ecce9ad581,},Annotations:map[string]string{io.kubernetes.container.hash: a3df242b,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a5a0cb2efada5b1b355bd754c5df236c1171f3301509c50102fe5d9f6a25b64b,PodSandboxId:8d4844765f4a8e86cb83919cd92bdc71f9a3fcd5b30e36555e60bb20848251cb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]strin
g{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710764281669380107,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 634e77d2-a06f-4449-809e-42ef7bf1fe64,},Annotations:map[string]string{io.kubernetes.container.hash: dff1e82a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff7c094c527a00a92f194d91973efc4d02dd3535d3c8f1f304177af41bec5018,PodSandboxId:ae039dc14e9f49ea7be7f6c00dffa0fd2ee86e56925e9fe61085c3a3eb5f3f8a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710764274891295802,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-57qjd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8186c1c0-2699-4797-9153-c88651831be4,},Annotations:map[string]string{io.kubernetes.container.hash: 6083b549,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:df388536a08a5c93bac16df4307c299808fda8df0f568fae9ae4b3955d5d172d,PodSandboxId:879ebcd0220cad7636760adeb246fb712fb109a2c3c35b9cb96b409205c8505
8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710764273004200447,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bqn6c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ee1682f-85b7-46f1-9cc4-840c7af8fbc4,},Annotations:map[string]string{io.kubernetes.container.hash: 76e8c22e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1509538801fab51d6cdcfbdf80503764cae7ecef0c2169b333b24ede9602cb87,PodSandboxId:bf98862b4c37993cb091fa8121e3773e42037c6cf44b94032747817d241e5a7e,Metadata:&ContainerMetadata{Nam
e:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710764253279232410,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-015389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 55af85acd15b46ba94e9b11d26cdde2d,},Annotations:map[string]string{io.kubernetes.container.hash: 5548344a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d878af41b5d389e50498cf9a83fc55a97bd93ec897fc750d04641110033ee059,PodSandboxId:35698080dd2b195bcb829a23eb3a5f68005db57869d47dd2000e755e15812da2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,
},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710764253273940568,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-015389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b22e282256b2bd7dc9c9090ca3f563b8,},Annotations:map[string]string{io.kubernetes.container.hash: aa5dabf2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b3fafc688c73ab5f47a9f148a8e09338b4f1a3c2ac025c76adca9e408a620ea6,PodSandboxId:78ec2e7118a48b2b963ed01cd80257dd11d2ed7d19e04ae5b3ec3d33c21b9156,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc
065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710764253277492798,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-015389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f3de7ab5e16f155250eed40ede4a975,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2bf1c53c81fb29a2c2c548a15b825e5ab862b18e1e4ed0d5b0c462ea986a021,PodSandboxId:8a488bd236842e4d3bce85db9c4292d20f3d9cfd23b140e43c624dc5fd2fe0c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d2
5e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710764253203448802,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-015389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 683d15c08eb6b312d4a70f3ee8c4f2b2,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=207a1df1-9476-4734-89dd-261511da4c72 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3c5608d878ac6       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      7 seconds ago       Running             hello-world-app           0                   4229080d7df33       hello-world-app-5d77478584-2r7sr
	5820bd81ad28c       docker.io/library/nginx@sha256:02d8d94023878cedf3e3acc55372932a9ba1478b6e2f3357786d916c2af743ba                              2 minutes ago       Running             nginx                     0                   4344b5e5eb175       nginx
	b371753ecb3aa       ghcr.io/headlamp-k8s/headlamp@sha256:19628eec9aaecf7944a049b6ab67f45e818365b9ec68cb7808ce1f6feb52d750                        2 minutes ago       Running             headlamp                  0                   6d7ce0361ad64       headlamp-5485c556b-dldvc
	fc05a6a42b5d1       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 3 minutes ago       Running             gcp-auth                  0                   db4ca2f319a4a       gcp-auth-7d69788767-kghrb
	abfdffb41e64f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023   3 minutes ago       Exited              patch                     0                   3a79aeaa103b3       ingress-nginx-admission-patch-6rzxd
	48c9657ed8de7       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023   3 minutes ago       Exited              create                    0                   88bb1d8bff160       ingress-nginx-admission-create-x25c7
	67001736f9e27       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              3 minutes ago       Running             yakd                      0                   16867d16e29fd       yakd-dashboard-9947fc6bf-c75dl
	a5a0cb2efada5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   8d4844765f4a8       storage-provisioner
	ff7c094c527a0       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             4 minutes ago       Running             coredns                   0                   ae039dc14e9f4       coredns-5dd5756b68-57qjd
	df388536a08a5       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                             4 minutes ago       Running             kube-proxy                0                   879ebcd0220ca       kube-proxy-bqn6c
	1509538801fab       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                             5 minutes ago       Running             kube-apiserver            0                   bf98862b4c379       kube-apiserver-addons-015389
	b3fafc688c73a       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                             5 minutes ago       Running             kube-scheduler            0                   78ec2e7118a48       kube-scheduler-addons-015389
	d878af41b5d38       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             5 minutes ago       Running             etcd                      0                   35698080dd2b1       etcd-addons-015389
	e2bf1c53c81fb       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                             5 minutes ago       Running             kube-controller-manager   0                   8a488bd236842       kube-controller-manager-addons-015389
	
	
	==> coredns [ff7c094c527a00a92f194d91973efc4d02dd3535d3c8f1f304177af41bec5018] <==
	[INFO] 10.244.0.7:55433 - 17186 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000242603s
	[INFO] 10.244.0.7:39297 - 13546 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000121863s
	[INFO] 10.244.0.7:39297 - 20969 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000108098s
	[INFO] 10.244.0.7:47450 - 10352 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000115025s
	[INFO] 10.244.0.7:47450 - 47982 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000198494s
	[INFO] 10.244.0.7:48046 - 6751 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00018923s
	[INFO] 10.244.0.7:48046 - 10333 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00006473s
	[INFO] 10.244.0.7:45901 - 48596 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000051937s
	[INFO] 10.244.0.7:45901 - 4050 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000040675s
	[INFO] 10.244.0.7:39834 - 29630 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000096714s
	[INFO] 10.244.0.7:39834 - 35744 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000429414s
	[INFO] 10.244.0.7:46046 - 29596 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.0004077s
	[INFO] 10.244.0.7:46046 - 9114 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00031777s
	[INFO] 10.244.0.7:39078 - 8602 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00012324s
	[INFO] 10.244.0.7:39078 - 35480 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000071429s
	[INFO] 10.244.0.22:57170 - 59916 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000334901s
	[INFO] 10.244.0.22:55302 - 21270 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000069284s
	[INFO] 10.244.0.22:52020 - 39026 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00018366s
	[INFO] 10.244.0.22:36785 - 27562 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000060666s
	[INFO] 10.244.0.22:50965 - 9677 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000071392s
	[INFO] 10.244.0.22:47410 - 62323 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000171255s
	[INFO] 10.244.0.22:41377 - 58871 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001133017s
	[INFO] 10.244.0.22:55200 - 33101 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 382 0.001414099s
	[INFO] 10.244.0.24:41855 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000451277s
	[INFO] 10.244.0.24:60610 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000164808s
	
	
	==> describe nodes <==
	Name:               addons-015389
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-015389
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	                    minikube.k8s.io/name=addons-015389
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T12_17_39_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-015389
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 12:17:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-015389
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 12:22:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 12:20:13 +0000   Mon, 18 Mar 2024 12:17:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 12:20:13 +0000   Mon, 18 Mar 2024 12:17:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 12:20:13 +0000   Mon, 18 Mar 2024 12:17:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 12:20:13 +0000   Mon, 18 Mar 2024 12:17:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.94
	  Hostname:    addons-015389
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912792Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912792Ki
	  pods:               110
	System Info:
	  Machine ID:                 4e5369e5c7814e4391d47ecd81ee295e
	  System UUID:                4e5369e5-c781-4e43-91d4-7ecd81ee295e
	  Boot ID:                    7d56e3d5-1a28-4e4a-8005-07d199721b22
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-2r7sr         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  gcp-auth                    gcp-auth-7d69788767-kghrb                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  headlamp                    headlamp-5485c556b-dldvc                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m50s
	  kube-system                 coredns-5dd5756b68-57qjd                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m48s
	  kube-system                 etcd-addons-015389                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m
	  kube-system                 kube-apiserver-addons-015389             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 kube-controller-manager-addons-015389    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 kube-proxy-bqn6c                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 kube-scheduler-addons-015389             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-c75dl           0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m44s  kube-proxy       
	  Normal  Starting                 5m     kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m     kubelet          Node addons-015389 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m     kubelet          Node addons-015389 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m     kubelet          Node addons-015389 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m     kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m     kubelet          Node addons-015389 status is now: NodeReady
	  Normal  RegisteredNode           4m49s  node-controller  Node addons-015389 event: Registered Node addons-015389 in Controller
	
	
	==> dmesg <==
	[ +13.266860] systemd-fstab-generator[1478]: Ignoring "noauto" option for root device
	[  +0.182244] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.040961] kauditd_printk_skb: 78 callbacks suppressed
	[Mar18 12:18] kauditd_printk_skb: 128 callbacks suppressed
	[  +8.774354] kauditd_printk_skb: 76 callbacks suppressed
	[ +12.065256] kauditd_printk_skb: 6 callbacks suppressed
	[ +17.707082] kauditd_printk_skb: 4 callbacks suppressed
	[ +13.894386] kauditd_printk_skb: 23 callbacks suppressed
	[Mar18 12:19] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.911964] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.745838] kauditd_printk_skb: 77 callbacks suppressed
	[  +5.411852] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.116102] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.168324] kauditd_printk_skb: 14 callbacks suppressed
	[  +7.375374] kauditd_printk_skb: 29 callbacks suppressed
	[  +5.025341] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.471893] kauditd_printk_skb: 31 callbacks suppressed
	[  +6.623110] kauditd_printk_skb: 55 callbacks suppressed
	[  +5.785240] kauditd_printk_skb: 25 callbacks suppressed
	[Mar18 12:20] kauditd_printk_skb: 23 callbacks suppressed
	[ +14.201393] kauditd_printk_skb: 6 callbacks suppressed
	[ +15.067548] kauditd_printk_skb: 5 callbacks suppressed
	[  +8.556191] kauditd_printk_skb: 25 callbacks suppressed
	[Mar18 12:22] kauditd_printk_skb: 2 callbacks suppressed
	[  +6.150068] kauditd_printk_skb: 17 callbacks suppressed
	
	
	==> etcd [d878af41b5d389e50498cf9a83fc55a97bd93ec897fc750d04641110033ee059] <==
	{"level":"info","ts":"2024-03-18T12:19:17.92927Z","caller":"traceutil/trace.go:171","msg":"trace[188918788] range","detail":"{range_begin:/registry/masterleases/192.168.39.94; range_end:; response_count:1; response_revision:1101; }","duration":"246.43418ms","start":"2024-03-18T12:19:17.682829Z","end":"2024-03-18T12:19:17.929263Z","steps":["trace[188918788] 'agreement among raft nodes before linearized reading'  (duration: 246.350339ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T12:19:17.929412Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"245.868945ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13721"}
	{"level":"info","ts":"2024-03-18T12:19:17.929534Z","caller":"traceutil/trace.go:171","msg":"trace[970342500] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1101; }","duration":"245.98889ms","start":"2024-03-18T12:19:17.683535Z","end":"2024-03-18T12:19:17.929524Z","steps":["trace[970342500] 'agreement among raft nodes before linearized reading'  (duration: 245.837343ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T12:19:17.930077Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"222.649736ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:81493"}
	{"level":"info","ts":"2024-03-18T12:19:17.930138Z","caller":"traceutil/trace.go:171","msg":"trace[343340748] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1101; }","duration":"222.71477ms","start":"2024-03-18T12:19:17.707416Z","end":"2024-03-18T12:19:17.930131Z","steps":["trace[343340748] 'agreement among raft nodes before linearized reading'  (duration: 222.552506ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T12:19:17.930326Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"226.508162ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:499"}
	{"level":"info","ts":"2024-03-18T12:19:17.930391Z","caller":"traceutil/trace.go:171","msg":"trace[427466255] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1101; }","duration":"226.572116ms","start":"2024-03-18T12:19:17.70381Z","end":"2024-03-18T12:19:17.930383Z","steps":["trace[427466255] 'agreement among raft nodes before linearized reading'  (duration: 226.491674ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T12:19:27.543948Z","caller":"traceutil/trace.go:171","msg":"trace[434416478] linearizableReadLoop","detail":"{readStateIndex:1198; appliedIndex:1197; }","duration":"258.873896ms","start":"2024-03-18T12:19:27.285058Z","end":"2024-03-18T12:19:27.543932Z","steps":["trace[434416478] 'read index received'  (duration: 258.591085ms)","trace[434416478] 'applied index is now lower than readState.Index'  (duration: 281.923µs)"],"step_count":2}
	{"level":"warn","ts":"2024-03-18T12:19:27.544161Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"259.090532ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-18T12:19:27.544225Z","caller":"traceutil/trace.go:171","msg":"trace[1686259205] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1159; }","duration":"259.178123ms","start":"2024-03-18T12:19:27.285036Z","end":"2024-03-18T12:19:27.544214Z","steps":["trace[1686259205] 'agreement among raft nodes before linearized reading'  (duration: 259.008149ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T12:19:27.544354Z","caller":"traceutil/trace.go:171","msg":"trace[381103391] transaction","detail":"{read_only:false; response_revision:1159; number_of_response:1; }","duration":"305.571447ms","start":"2024-03-18T12:19:27.23877Z","end":"2024-03-18T12:19:27.544342Z","steps":["trace[381103391] 'process raft request'  (duration: 305.015768ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T12:19:27.544448Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T12:19:27.238746Z","time spent":"305.649716ms","remote":"127.0.0.1:44918","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":485,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" mod_revision:1122 > success:<request_put:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" value_size:426 >> failure:<request_range:<key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" > >"}
	{"level":"info","ts":"2024-03-18T12:19:32.731366Z","caller":"traceutil/trace.go:171","msg":"trace[1140657633] transaction","detail":"{read_only:false; response_revision:1186; number_of_response:1; }","duration":"106.10553ms","start":"2024-03-18T12:19:32.625235Z","end":"2024-03-18T12:19:32.731341Z","steps":["trace[1140657633] 'process raft request'  (duration: 105.978753ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T12:19:53.137091Z","caller":"traceutil/trace.go:171","msg":"trace[405339227] transaction","detail":"{read_only:false; response_revision:1402; number_of_response:1; }","duration":"105.090309ms","start":"2024-03-18T12:19:53.031972Z","end":"2024-03-18T12:19:53.137062Z","steps":["trace[405339227] 'process raft request'  (duration: 104.857518ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T12:19:56.069282Z","caller":"traceutil/trace.go:171","msg":"trace[1327683265] transaction","detail":"{read_only:false; response_revision:1411; number_of_response:1; }","duration":"330.378214ms","start":"2024-03-18T12:19:55.738888Z","end":"2024-03-18T12:19:56.069266Z","steps":["trace[1327683265] 'process raft request'  (duration: 330.128153ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T12:19:56.069787Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T12:19:55.738857Z","time spent":"330.651534ms","remote":"127.0.0.1:44808","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":995,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/persistentvolumeclaims/default/hpvc\" mod_revision:1175 > success:<request_put:<key:\"/registry/persistentvolumeclaims/default/hpvc\" value_size:942 >> failure:<request_range:<key:\"/registry/persistentvolumeclaims/default/hpvc\" > >"}
	{"level":"info","ts":"2024-03-18T12:19:56.076481Z","caller":"traceutil/trace.go:171","msg":"trace[1030419072] transaction","detail":"{read_only:false; response_revision:1412; number_of_response:1; }","duration":"337.1232ms","start":"2024-03-18T12:19:55.739347Z","end":"2024-03-18T12:19:56.07647Z","steps":["trace[1030419072] 'process raft request'  (duration: 337.059829ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T12:19:56.078312Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T12:19:55.739338Z","time spent":"338.919213ms","remote":"127.0.0.1:44918","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":539,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" mod_revision:1378 > success:<request_put:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" value_size:452 >> failure:<request_range:<key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" > >"}
	{"level":"info","ts":"2024-03-18T12:20:12.447362Z","caller":"traceutil/trace.go:171","msg":"trace[949255978] linearizableReadLoop","detail":"{readStateIndex:1586; appliedIndex:1585; }","duration":"172.161465ms","start":"2024-03-18T12:20:12.275178Z","end":"2024-03-18T12:20:12.44734Z","steps":["trace[949255978] 'read index received'  (duration: 172.011343ms)","trace[949255978] 'applied index is now lower than readState.Index'  (duration: 149.517µs)"],"step_count":2}
	{"level":"info","ts":"2024-03-18T12:20:12.447659Z","caller":"traceutil/trace.go:171","msg":"trace[11948237] transaction","detail":"{read_only:false; response_revision:1529; number_of_response:1; }","duration":"408.810904ms","start":"2024-03-18T12:20:12.038832Z","end":"2024-03-18T12:20:12.447643Z","steps":["trace[11948237] 'process raft request'  (duration: 408.397115ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T12:20:12.448285Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T12:20:12.038816Z","time spent":"409.388033ms","remote":"127.0.0.1:44918","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":540,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-015389\" mod_revision:1475 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-015389\" value_size:486 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-015389\" > >"}
	{"level":"warn","ts":"2024-03-18T12:20:12.448472Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"166.07567ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-18T12:20:12.448606Z","caller":"traceutil/trace.go:171","msg":"trace[2030527651] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1529; }","duration":"166.151619ms","start":"2024-03-18T12:20:12.282388Z","end":"2024-03-18T12:20:12.44854Z","steps":["trace[2030527651] 'agreement among raft nodes before linearized reading'  (duration: 166.058266ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T12:20:12.447815Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"172.592849ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-03-18T12:20:12.448766Z","caller":"traceutil/trace.go:171","msg":"trace[1519259082] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1529; }","duration":"173.619101ms","start":"2024-03-18T12:20:12.275127Z","end":"2024-03-18T12:20:12.448746Z","steps":["trace[1519259082] 'agreement among raft nodes before linearized reading'  (duration: 172.56335ms)"],"step_count":1}
	
	
	==> gcp-auth [fc05a6a42b5d10911a5db61b410f4b6e67e1e1dc7ddd9f013948da4d890265d1] <==
	2024/03/18 12:19:23 GCP Auth Webhook started!
	2024/03/18 12:19:37 Ready to marshal response ...
	2024/03/18 12:19:37 Ready to write response ...
	2024/03/18 12:19:37 Ready to marshal response ...
	2024/03/18 12:19:37 Ready to write response ...
	2024/03/18 12:19:41 Ready to marshal response ...
	2024/03/18 12:19:41 Ready to write response ...
	2024/03/18 12:19:48 Ready to marshal response ...
	2024/03/18 12:19:48 Ready to write response ...
	2024/03/18 12:19:49 Ready to marshal response ...
	2024/03/18 12:19:49 Ready to write response ...
	2024/03/18 12:19:49 Ready to marshal response ...
	2024/03/18 12:19:49 Ready to write response ...
	2024/03/18 12:19:50 Ready to marshal response ...
	2024/03/18 12:19:50 Ready to write response ...
	2024/03/18 12:19:53 Ready to marshal response ...
	2024/03/18 12:19:53 Ready to write response ...
	2024/03/18 12:19:56 Ready to marshal response ...
	2024/03/18 12:19:56 Ready to write response ...
	2024/03/18 12:20:02 Ready to marshal response ...
	2024/03/18 12:20:02 Ready to write response ...
	2024/03/18 12:20:29 Ready to marshal response ...
	2024/03/18 12:20:29 Ready to write response ...
	2024/03/18 12:22:28 Ready to marshal response ...
	2024/03/18 12:22:28 Ready to write response ...
	
	
	==> kernel <==
	 12:22:39 up 5 min,  0 users,  load average: 0.69, 1.23, 0.65
	Linux addons-015389 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1509538801fab51d6cdcfbdf80503764cae7ecef0c2169b333b24ede9602cb87] <==
	I0318 12:20:08.717282       1 handler.go:232] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0318 12:20:09.749813       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0318 12:20:14.723884       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0318 12:20:23.142524       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0318 12:20:45.458356       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0318 12:20:45.458642       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0318 12:20:45.472965       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0318 12:20:45.473199       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0318 12:20:45.483887       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0318 12:20:45.483952       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0318 12:20:45.502986       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0318 12:20:45.503984       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0318 12:20:45.511034       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0318 12:20:45.511101       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0318 12:20:45.518667       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0318 12:20:45.518727       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0318 12:20:45.532525       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0318 12:20:45.532676       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0318 12:20:45.537983       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0318 12:20:45.538049       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	E0318 12:20:45.593167       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"snapshot-controller\" not found]"
	W0318 12:20:46.512001       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0318 12:20:46.539089       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0318 12:20:46.556965       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0318 12:22:28.451169       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.67.215"}
	
	
	==> kube-controller-manager [e2bf1c53c81fb29a2c2c548a15b825e5ab862b18e1e4ed0d5b0c462ea986a021] <==
	W0318 12:21:21.843765       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0318 12:21:21.843796       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0318 12:21:49.753873       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0318 12:21:49.753979       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0318 12:21:54.496800       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0318 12:21:54.496937       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0318 12:21:58.227763       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0318 12:21:58.227919       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0318 12:22:14.901458       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0318 12:22:14.901518       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0318 12:22:28.244706       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0318 12:22:28.285712       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-2r7sr"
	I0318 12:22:28.300858       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="55.453546ms"
	I0318 12:22:28.311059       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="9.778077ms"
	I0318 12:22:28.312083       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="117.1µs"
	I0318 12:22:28.328770       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="61.118µs"
	W0318 12:22:30.210900       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0318 12:22:30.210936       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0318 12:22:31.485456       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0318 12:22:31.518114       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-76dc478dd8" duration="7.405µs"
	I0318 12:22:31.518160       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0318 12:22:32.394853       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="10.088224ms"
	I0318 12:22:32.395087       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="61.108µs"
	W0318 12:22:33.592212       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0318 12:22:33.592268       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [df388536a08a5c93bac16df4307c299808fda8df0f568fae9ae4b3955d5d172d] <==
	I0318 12:17:54.163744       1 server_others.go:69] "Using iptables proxy"
	I0318 12:17:54.222516       1 node.go:141] Successfully retrieved node IP: 192.168.39.94
	I0318 12:17:55.170530       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 12:17:55.170602       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 12:17:55.300278       1 server_others.go:152] "Using iptables Proxier"
	I0318 12:17:55.300320       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 12:17:55.300485       1 server.go:846] "Version info" version="v1.28.4"
	I0318 12:17:55.300493       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:17:55.334366       1 config.go:188] "Starting service config controller"
	I0318 12:17:55.334390       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 12:17:55.334409       1 config.go:97] "Starting endpoint slice config controller"
	I0318 12:17:55.334412       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 12:17:55.374117       1 config.go:315] "Starting node config controller"
	I0318 12:17:55.374128       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 12:17:55.643091       1 shared_informer.go:318] Caches are synced for node config
	I0318 12:17:55.643140       1 shared_informer.go:318] Caches are synced for service config
	I0318 12:17:55.643163       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [b3fafc688c73ab5f47a9f148a8e09338b4f1a3c2ac025c76adca9e408a620ea6] <==
	W0318 12:17:36.183511       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0318 12:17:36.183547       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0318 12:17:36.183638       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0318 12:17:36.183654       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0318 12:17:36.193934       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0318 12:17:36.193954       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0318 12:17:37.173507       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0318 12:17:37.173810       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0318 12:17:37.290691       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0318 12:17:37.290772       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0318 12:17:37.304761       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0318 12:17:37.304815       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0318 12:17:37.308118       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0318 12:17:37.308174       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0318 12:17:37.314268       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0318 12:17:37.314326       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0318 12:17:37.348902       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0318 12:17:37.349022       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0318 12:17:37.349290       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0318 12:17:37.349474       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0318 12:17:37.403668       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0318 12:17:37.404116       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0318 12:17:37.417182       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0318 12:17:37.417431       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 12:17:39.568094       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 18 12:22:29 addons-015389 kubelet[1270]: I0318 12:22:29.502466    1270 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7aadaefe-59f6-43bd-9893-7ddefbdf53dc-kube-api-access-vvd5t" (OuterVolumeSpecName: "kube-api-access-vvd5t") pod "7aadaefe-59f6-43bd-9893-7ddefbdf53dc" (UID: "7aadaefe-59f6-43bd-9893-7ddefbdf53dc"). InnerVolumeSpecName "kube-api-access-vvd5t". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 18 12:22:29 addons-015389 kubelet[1270]: I0318 12:22:29.600301    1270 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-vvd5t\" (UniqueName: \"kubernetes.io/projected/7aadaefe-59f6-43bd-9893-7ddefbdf53dc-kube-api-access-vvd5t\") on node \"addons-015389\" DevicePath \"\""
	Mar 18 12:22:30 addons-015389 kubelet[1270]: I0318 12:22:30.351866    1270 scope.go:117] "RemoveContainer" containerID="c19361835d3b4d93dfaecf2803699b3f8628a71d0149503a31e8b8f615b7a0ba"
	Mar 18 12:22:30 addons-015389 kubelet[1270]: I0318 12:22:30.413195    1270 scope.go:117] "RemoveContainer" containerID="c19361835d3b4d93dfaecf2803699b3f8628a71d0149503a31e8b8f615b7a0ba"
	Mar 18 12:22:30 addons-015389 kubelet[1270]: E0318 12:22:30.414508    1270 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c19361835d3b4d93dfaecf2803699b3f8628a71d0149503a31e8b8f615b7a0ba\": container with ID starting with c19361835d3b4d93dfaecf2803699b3f8628a71d0149503a31e8b8f615b7a0ba not found: ID does not exist" containerID="c19361835d3b4d93dfaecf2803699b3f8628a71d0149503a31e8b8f615b7a0ba"
	Mar 18 12:22:30 addons-015389 kubelet[1270]: I0318 12:22:30.414669    1270 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c19361835d3b4d93dfaecf2803699b3f8628a71d0149503a31e8b8f615b7a0ba"} err="failed to get container status \"c19361835d3b4d93dfaecf2803699b3f8628a71d0149503a31e8b8f615b7a0ba\": rpc error: code = NotFound desc = could not find container \"c19361835d3b4d93dfaecf2803699b3f8628a71d0149503a31e8b8f615b7a0ba\": container with ID starting with c19361835d3b4d93dfaecf2803699b3f8628a71d0149503a31e8b8f615b7a0ba not found: ID does not exist"
	Mar 18 12:22:31 addons-015389 kubelet[1270]: I0318 12:22:31.264229    1270 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7aadaefe-59f6-43bd-9893-7ddefbdf53dc" path="/var/lib/kubelet/pods/7aadaefe-59f6-43bd-9893-7ddefbdf53dc/volumes"
	Mar 18 12:22:33 addons-015389 kubelet[1270]: I0318 12:22:33.231305    1270 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="51f4af1b-be13-4f8f-bb8f-f2737d85d602" path="/var/lib/kubelet/pods/51f4af1b-be13-4f8f-bb8f-f2737d85d602/volumes"
	Mar 18 12:22:33 addons-015389 kubelet[1270]: I0318 12:22:33.232116    1270 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="6f404b63-071f-452e-91b6-0a4007f2b89a" path="/var/lib/kubelet/pods/6f404b63-071f-452e-91b6-0a4007f2b89a/volumes"
	Mar 18 12:22:34 addons-015389 kubelet[1270]: I0318 12:22:34.840886    1270 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8hgg\" (UniqueName: \"kubernetes.io/projected/cdf11b41-bccc-436b-8992-e36f81c00621-kube-api-access-w8hgg\") pod \"cdf11b41-bccc-436b-8992-e36f81c00621\" (UID: \"cdf11b41-bccc-436b-8992-e36f81c00621\") "
	Mar 18 12:22:34 addons-015389 kubelet[1270]: I0318 12:22:34.840946    1270 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cdf11b41-bccc-436b-8992-e36f81c00621-webhook-cert\") pod \"cdf11b41-bccc-436b-8992-e36f81c00621\" (UID: \"cdf11b41-bccc-436b-8992-e36f81c00621\") "
	Mar 18 12:22:34 addons-015389 kubelet[1270]: I0318 12:22:34.847217    1270 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdf11b41-bccc-436b-8992-e36f81c00621-kube-api-access-w8hgg" (OuterVolumeSpecName: "kube-api-access-w8hgg") pod "cdf11b41-bccc-436b-8992-e36f81c00621" (UID: "cdf11b41-bccc-436b-8992-e36f81c00621"). InnerVolumeSpecName "kube-api-access-w8hgg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 18 12:22:34 addons-015389 kubelet[1270]: I0318 12:22:34.847371    1270 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdf11b41-bccc-436b-8992-e36f81c00621-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "cdf11b41-bccc-436b-8992-e36f81c00621" (UID: "cdf11b41-bccc-436b-8992-e36f81c00621"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Mar 18 12:22:34 addons-015389 kubelet[1270]: I0318 12:22:34.942078    1270 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-w8hgg\" (UniqueName: \"kubernetes.io/projected/cdf11b41-bccc-436b-8992-e36f81c00621-kube-api-access-w8hgg\") on node \"addons-015389\" DevicePath \"\""
	Mar 18 12:22:34 addons-015389 kubelet[1270]: I0318 12:22:34.942113    1270 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cdf11b41-bccc-436b-8992-e36f81c00621-webhook-cert\") on node \"addons-015389\" DevicePath \"\""
	Mar 18 12:22:35 addons-015389 kubelet[1270]: I0318 12:22:35.231900    1270 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="cdf11b41-bccc-436b-8992-e36f81c00621" path="/var/lib/kubelet/pods/cdf11b41-bccc-436b-8992-e36f81c00621/volumes"
	Mar 18 12:22:35 addons-015389 kubelet[1270]: I0318 12:22:35.388698    1270 scope.go:117] "RemoveContainer" containerID="08fbf7d154566aaed6d2bf705dbbc60d5e984f4ce8aaf0264abbbaf25d08721a"
	Mar 18 12:22:35 addons-015389 kubelet[1270]: I0318 12:22:35.404267    1270 scope.go:117] "RemoveContainer" containerID="08fbf7d154566aaed6d2bf705dbbc60d5e984f4ce8aaf0264abbbaf25d08721a"
	Mar 18 12:22:35 addons-015389 kubelet[1270]: E0318 12:22:35.405228    1270 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08fbf7d154566aaed6d2bf705dbbc60d5e984f4ce8aaf0264abbbaf25d08721a\": container with ID starting with 08fbf7d154566aaed6d2bf705dbbc60d5e984f4ce8aaf0264abbbaf25d08721a not found: ID does not exist" containerID="08fbf7d154566aaed6d2bf705dbbc60d5e984f4ce8aaf0264abbbaf25d08721a"
	Mar 18 12:22:35 addons-015389 kubelet[1270]: I0318 12:22:35.405277    1270 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08fbf7d154566aaed6d2bf705dbbc60d5e984f4ce8aaf0264abbbaf25d08721a"} err="failed to get container status \"08fbf7d154566aaed6d2bf705dbbc60d5e984f4ce8aaf0264abbbaf25d08721a\": rpc error: code = NotFound desc = could not find container \"08fbf7d154566aaed6d2bf705dbbc60d5e984f4ce8aaf0264abbbaf25d08721a\": container with ID starting with 08fbf7d154566aaed6d2bf705dbbc60d5e984f4ce8aaf0264abbbaf25d08721a not found: ID does not exist"
	Mar 18 12:22:39 addons-015389 kubelet[1270]: E0318 12:22:39.296121    1270 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 12:22:39 addons-015389 kubelet[1270]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 12:22:39 addons-015389 kubelet[1270]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 12:22:39 addons-015389 kubelet[1270]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 12:22:39 addons-015389 kubelet[1270]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [a5a0cb2efada5b1b355bd754c5df236c1171f3301509c50102fe5d9f6a25b64b] <==
	I0318 12:18:02.188791       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0318 12:18:02.247875       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0318 12:18:02.248094       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0318 12:18:02.285441       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0318 12:18:02.286012       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0e975c4a-a39a-4dfd-8751-521f988fae3d", APIVersion:"v1", ResourceVersion:"681", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-015389_74da48ae-076b-4267-ba59-9e220ae91435 became leader
	I0318 12:18:02.291223       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-015389_74da48ae-076b-4267-ba59-9e220ae91435!
	I0318 12:18:02.410994       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-015389_74da48ae-076b-4267-ba59-9e220ae91435!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-015389 -n addons-015389
helpers_test.go:261: (dbg) Run:  kubectl --context addons-015389 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (158.79s)

                                                
                                    
TestAddons/StoppedEnableDisable (154.33s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-015389
addons_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-015389: exit status 82 (2m0.488615584s)

                                                
                                                
-- stdout --
	* Stopping node "addons-015389"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:174: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-015389" : exit status 82
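Editor's note: the stop above timed out with exit status 82 (minikube's GUEST_STOP_TIMEOUT). The following is a minimal Go sketch of how a harness can re-run that command and surface the exit code when reproducing this locally; the binary path and profile name are taken from the failure above, while the 3-minute deadline is an arbitrary assumption, not what minikube or this test uses.

package main

import (
	"context"
	"errors"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Re-run the stop command from the failure above under a deadline.
	// The 3-minute value is an illustrative choice for this sketch only.
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Minute)
	defer cancel()

	cmd := exec.CommandContext(ctx, "out/minikube-linux-amd64", "stop", "-p", "addons-015389")
	out, err := cmd.CombinedOutput()
	fmt.Printf("output:\n%s\n", out)

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// In the run above this was 82, the GUEST_STOP_TIMEOUT exit code.
		fmt.Println("exit status:", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("run error:", err)
	}
}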
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-015389
addons_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-015389: exit status 11 (21.552757052s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.94:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:178: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-015389" : exit status 11
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-015389
addons_test.go:180: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-015389: exit status 11 (6.144556595s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.94:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:182: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-015389" : exit status 11
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-015389
addons_test.go:185: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-015389: exit status 11 (6.143579684s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.94:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:187: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-015389" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.33s)
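Editor's note: each of the enable/disable attempts above fails the same way: the SSH dial to 192.168.39.94:22 returns "no route to host", so the addon commands never reach the VM. A quick reachability probe such as the sketch below (illustrative only; the address comes from the stderr above, the 5-second timeout is arbitrary) helps separate a dead VM or broken network from a problem in the addon code itself.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Address taken from the stderr above; the timeout is an arbitrary choice.
	conn, err := net.DialTimeout("tcp", "192.168.39.94:22", 5*time.Second)
	if err != nil {
		fmt.Println("SSH port unreachable:", err) // mirrors the "no route to host" failures
		return
	}
	defer conn.Close()
	fmt.Println("SSH port reachable; the problem is above the TCP layer")
}

If the dial also fails from the host, the VM was most likely left in a half-stopped state by the timeout in the previous test rather than the addon commands being at fault.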

                                                
                                    
TestFunctional/parallel/DashboardCmd (4.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-377562 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-377562 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-377562 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-377562 --alsologtostderr -v=1] stderr:
I0318 12:41:53.828615 1124423 out.go:291] Setting OutFile to fd 1 ...
I0318 12:41:53.828919 1124423 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 12:41:53.828950 1124423 out.go:304] Setting ErrFile to fd 2...
I0318 12:41:53.828963 1124423 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 12:41:53.829253 1124423 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
I0318 12:41:53.829592 1124423 mustload.go:65] Loading cluster: functional-377562
I0318 12:41:53.830090 1124423 config.go:182] Loaded profile config "functional-377562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0318 12:41:53.830500 1124423 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0318 12:41:53.830541 1124423 main.go:141] libmachine: Launching plugin server for driver kvm2
I0318 12:41:53.846773 1124423 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39413
I0318 12:41:53.847472 1124423 main.go:141] libmachine: () Calling .GetVersion
I0318 12:41:53.848174 1124423 main.go:141] libmachine: Using API Version  1
I0318 12:41:53.848202 1124423 main.go:141] libmachine: () Calling .SetConfigRaw
I0318 12:41:53.848577 1124423 main.go:141] libmachine: () Calling .GetMachineName
I0318 12:41:53.848797 1124423 main.go:141] libmachine: (functional-377562) Calling .GetState
I0318 12:41:53.850628 1124423 host.go:66] Checking if "functional-377562" exists ...
I0318 12:41:53.851051 1124423 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0318 12:41:53.851102 1124423 main.go:141] libmachine: Launching plugin server for driver kvm2
I0318 12:41:53.868111 1124423 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46751
I0318 12:41:53.868670 1124423 main.go:141] libmachine: () Calling .GetVersion
I0318 12:41:53.869287 1124423 main.go:141] libmachine: Using API Version  1
I0318 12:41:53.869312 1124423 main.go:141] libmachine: () Calling .SetConfigRaw
I0318 12:41:53.869678 1124423 main.go:141] libmachine: () Calling .GetMachineName
I0318 12:41:53.869867 1124423 main.go:141] libmachine: (functional-377562) Calling .DriverName
I0318 12:41:53.870049 1124423 api_server.go:166] Checking apiserver status ...
I0318 12:41:53.870124 1124423 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0318 12:41:53.870159 1124423 main.go:141] libmachine: (functional-377562) Calling .GetSSHHostname
I0318 12:41:53.873470 1124423 main.go:141] libmachine: (functional-377562) DBG | domain functional-377562 has defined MAC address 52:54:00:22:00:d6 in network mk-functional-377562
I0318 12:41:53.873853 1124423 main.go:141] libmachine: (functional-377562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:00:d6", ip: ""} in network mk-functional-377562: {Iface:virbr1 ExpiryTime:2024-03-18 13:26:47 +0000 UTC Type:0 Mac:52:54:00:22:00:d6 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:functional-377562 Clientid:01:52:54:00:22:00:d6}
I0318 12:41:53.873884 1124423 main.go:141] libmachine: (functional-377562) DBG | domain functional-377562 has defined IP address 192.168.39.224 and MAC address 52:54:00:22:00:d6 in network mk-functional-377562
I0318 12:41:53.874058 1124423 main.go:141] libmachine: (functional-377562) Calling .GetSSHPort
I0318 12:41:53.874189 1124423 main.go:141] libmachine: (functional-377562) Calling .GetSSHKeyPath
I0318 12:41:53.874301 1124423 main.go:141] libmachine: (functional-377562) Calling .GetSSHUsername
I0318 12:41:53.874487 1124423 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/functional-377562/id_rsa Username:docker}
I0318 12:41:53.978367 1124423 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/14854/cgroup
W0318 12:41:53.997554 1124423 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/14854/cgroup: Process exited with status 1
stdout:

                                                
                                                
stderr:
I0318 12:41:53.997617 1124423 ssh_runner.go:195] Run: ls
I0318 12:41:54.007984 1124423 api_server.go:253] Checking apiserver healthz at https://192.168.39.224:8441/healthz ...
I0318 12:41:54.021259 1124423 api_server.go:279] https://192.168.39.224:8441/healthz returned 200:
ok
W0318 12:41:54.021305 1124423 out.go:239] * Enabling dashboard ...
* Enabling dashboard ...
I0318 12:41:54.021483 1124423 config.go:182] Loaded profile config "functional-377562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0318 12:41:54.021502 1124423 addons.go:69] Setting dashboard=true in profile "functional-377562"
I0318 12:41:54.021509 1124423 addons.go:234] Setting addon dashboard=true in "functional-377562"
I0318 12:41:54.021536 1124423 host.go:66] Checking if "functional-377562" exists ...
I0318 12:41:54.021825 1124423 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0318 12:41:54.021861 1124423 main.go:141] libmachine: Launching plugin server for driver kvm2
I0318 12:41:54.042671 1124423 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45435
I0318 12:41:54.043208 1124423 main.go:141] libmachine: () Calling .GetVersion
I0318 12:41:54.043731 1124423 main.go:141] libmachine: Using API Version  1
I0318 12:41:54.043761 1124423 main.go:141] libmachine: () Calling .SetConfigRaw
I0318 12:41:54.044205 1124423 main.go:141] libmachine: () Calling .GetMachineName
I0318 12:41:54.044842 1124423 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0318 12:41:54.044898 1124423 main.go:141] libmachine: Launching plugin server for driver kvm2
I0318 12:41:54.061336 1124423 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46719
I0318 12:41:54.061867 1124423 main.go:141] libmachine: () Calling .GetVersion
I0318 12:41:54.062335 1124423 main.go:141] libmachine: Using API Version  1
I0318 12:41:54.062353 1124423 main.go:141] libmachine: () Calling .SetConfigRaw
I0318 12:41:54.062663 1124423 main.go:141] libmachine: () Calling .GetMachineName
I0318 12:41:54.062849 1124423 main.go:141] libmachine: (functional-377562) Calling .GetState
I0318 12:41:54.064555 1124423 main.go:141] libmachine: (functional-377562) Calling .DriverName
I0318 12:41:54.067071 1124423 out.go:177]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I0318 12:41:54.068651 1124423 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0318 12:41:54.070042 1124423 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0318 12:41:54.070058 1124423 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0318 12:41:54.070074 1124423 main.go:141] libmachine: (functional-377562) Calling .GetSSHHostname
I0318 12:41:54.072316 1124423 main.go:141] libmachine: (functional-377562) DBG | domain functional-377562 has defined MAC address 52:54:00:22:00:d6 in network mk-functional-377562
I0318 12:41:54.072699 1124423 main.go:141] libmachine: (functional-377562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:00:d6", ip: ""} in network mk-functional-377562: {Iface:virbr1 ExpiryTime:2024-03-18 13:26:47 +0000 UTC Type:0 Mac:52:54:00:22:00:d6 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:functional-377562 Clientid:01:52:54:00:22:00:d6}
I0318 12:41:54.072739 1124423 main.go:141] libmachine: (functional-377562) DBG | domain functional-377562 has defined IP address 192.168.39.224 and MAC address 52:54:00:22:00:d6 in network mk-functional-377562
I0318 12:41:54.072817 1124423 main.go:141] libmachine: (functional-377562) Calling .GetSSHPort
I0318 12:41:54.073026 1124423 main.go:141] libmachine: (functional-377562) Calling .GetSSHKeyPath
I0318 12:41:54.073161 1124423 main.go:141] libmachine: (functional-377562) Calling .GetSSHUsername
I0318 12:41:54.073297 1124423 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/functional-377562/id_rsa Username:docker}
I0318 12:41:54.180068 1124423 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0318 12:41:54.180099 1124423 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0318 12:41:54.206280 1124423 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0318 12:41:54.206306 1124423 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0318 12:41:54.232523 1124423 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0318 12:41:54.232547 1124423 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0318 12:41:54.261710 1124423 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0318 12:41:54.261728 1124423 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I0318 12:41:54.285510 1124423 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
I0318 12:41:54.285539 1124423 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0318 12:41:54.313615 1124423 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0318 12:41:54.313643 1124423 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0318 12:41:54.331969 1124423 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0318 12:41:54.331995 1124423 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0318 12:41:54.357810 1124423 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0318 12:41:54.357833 1124423 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0318 12:41:54.387029 1124423 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0318 12:41:54.387054 1124423 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0318 12:41:54.416189 1124423 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0318 12:41:55.988688 1124423 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.572456884s)
I0318 12:41:55.988732 1124423 main.go:141] libmachine: Making call to close driver server
I0318 12:41:55.988742 1124423 main.go:141] libmachine: (functional-377562) Calling .Close
I0318 12:41:55.989065 1124423 main.go:141] libmachine: Successfully made call to close driver server
I0318 12:41:55.989089 1124423 main.go:141] libmachine: Making call to close connection to plugin binary
I0318 12:41:55.989100 1124423 main.go:141] libmachine: Making call to close driver server
I0318 12:41:55.989109 1124423 main.go:141] libmachine: (functional-377562) Calling .Close
I0318 12:41:55.989374 1124423 main.go:141] libmachine: Successfully made call to close driver server
I0318 12:41:55.989390 1124423 main.go:141] libmachine: Making call to close connection to plugin binary
I0318 12:41:55.991234 1124423 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:

                                                
                                                
	minikube -p functional-377562 addons enable metrics-server

                                                
                                                
I0318 12:41:55.992940 1124423 addons.go:197] Writing out "functional-377562" config to set dashboard=true...
W0318 12:41:55.993226 1124423 out.go:239] * Verifying dashboard health ...
* Verifying dashboard health ...
I0318 12:41:55.994302 1124423 kapi.go:59] client config for functional-377562: &rest.Config{Host:"https://192.168.39.224:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/functional-377562/client.crt", KeyFile:"/home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/functional-377562/client.key", CAFile:"/home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(n
il), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c57de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0318 12:41:56.008551 1124423 service.go:214] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  d970a401-7b5b-4712-9a67-32e74ce40687 631 0 2024-03-18 12:41:55 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2024-03-18 12:41:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.111.222.238,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.111.222.238],IPFamilies:[IPv4],AllocateLoadBalan
cerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0318 12:41:56.008725 1124423 out.go:239] * Launching proxy ...
* Launching proxy ...
I0318 12:41:56.008790 1124423 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-377562 proxy --port 36195]
I0318 12:41:56.009076 1124423 dashboard.go:157] Waiting for kubectl to output host:port ...
I0318 12:41:56.060264 1124423 out.go:177] 
W0318 12:41:56.061537 1124423 out.go:239] X Exiting due to HOST_KUBECTL_PROXY: kubectl proxy: readByteWithTimeout: EOF
X Exiting due to HOST_KUBECTL_PROXY: kubectl proxy: readByteWithTimeout: EOF
W0318 12:41:56.061553 1124423 out.go:239] * 
* 
W0318 12:41:56.066242 1124423 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_dashboard_2f9e80c8c4dc47927ad6915561a20c5705c3b3b4_0.log               │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_dashboard_2f9e80c8c4dc47927ad6915561a20c5705c3b3b4_0.log               │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0318 12:41:56.067825 1124423 out.go:177] 
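Editor's note: per the stderr above, minikube executed `kubectl --context functional-377562 proxy --port 36195` and waited for it to print its listen address, but received EOF instead, so no dashboard URL was ever produced. The sketch below is an illustrative stand-in for that wait step, not minikube's implementation: it starts the same proxy command (context name and port taken from the log) and gives it 30 seconds, an arbitrary value, to print kubectl's usual "Starting to serve on host:port" line.

package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Same command minikube ran; context name and port come from the log above.
	cmd := exec.Command("kubectl", "--context", "functional-377562", "proxy", "--port", "36195")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		fmt.Println("pipe error:", err)
		return
	}
	if err := cmd.Start(); err != nil {
		fmt.Println("start error:", err)
		return
	}
	defer func() { _ = cmd.Process.Kill() }() // sketch only: no graceful shutdown or Wait()

	lines := make(chan string, 1)
	go func() {
		scanner := bufio.NewScanner(stdout)
		if scanner.Scan() {
			lines <- scanner.Text()
		}
		close(lines) // closed without a value on EOF, i.e. the failure seen above
	}()

	select {
	case line, ok := <-lines:
		if !ok {
			fmt.Println("kubectl proxy exited without output (EOF)")
			return
		}
		if strings.Contains(line, "Starting to serve on") {
			fmt.Println("proxy ready:", line)
		} else {
			fmt.Println("unexpected first line:", line)
		}
	case <-time.After(30 * time.Second):
		fmt.Println("timed out waiting for kubectl proxy output")
	}
}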
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-377562 -n functional-377562
helpers_test.go:244: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-377562 logs -n 25: (1.584563962s)
helpers_test.go:252: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	|----------------|---------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                   Args                                    |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|---------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| image          | functional-377562 image rm                                                | functional-377562 | jenkins | v1.32.0 | 18 Mar 24 12:41 UTC | 18 Mar 24 12:41 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-377562                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                   |         |         |                     |                     |
	| image          | functional-377562 image ls                                                | functional-377562 | jenkins | v1.32.0 | 18 Mar 24 12:41 UTC | 18 Mar 24 12:41 UTC |
	| image          | functional-377562 image load                                              | functional-377562 | jenkins | v1.32.0 | 18 Mar 24 12:41 UTC | 18 Mar 24 12:41 UTC |
	|                | /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                   |         |         |                     |                     |
	| image          | functional-377562 image ls                                                | functional-377562 | jenkins | v1.32.0 | 18 Mar 24 12:41 UTC | 18 Mar 24 12:41 UTC |
	| image          | functional-377562 image save --daemon                                     | functional-377562 | jenkins | v1.32.0 | 18 Mar 24 12:41 UTC | 18 Mar 24 12:41 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-377562                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                   |         |         |                     |                     |
	| start          | -p functional-377562                                                      | functional-377562 | jenkins | v1.32.0 | 18 Mar 24 12:41 UTC |                     |
	|                | --dry-run --memory                                                        |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr                                                   |                   |         |         |                     |                     |
	|                | --driver=kvm2                                                             |                   |         |         |                     |                     |
	|                | --container-runtime=crio                                                  |                   |         |         |                     |                     |
	| service        | functional-377562 service list                                            | functional-377562 | jenkins | v1.32.0 | 18 Mar 24 12:41 UTC | 18 Mar 24 12:41 UTC |
	| service        | functional-377562 service                                                 | functional-377562 | jenkins | v1.32.0 | 18 Mar 24 12:41 UTC | 18 Mar 24 12:41 UTC |
	|                | hello-node-connect --url                                                  |                   |         |         |                     |                     |
	| mount          | -p functional-377562                                                      | functional-377562 | jenkins | v1.32.0 | 18 Mar 24 12:41 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdany-port3942997611/001:/mount-9p       |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                    |                   |         |         |                     |                     |
	| ssh            | functional-377562 ssh findmnt                                             | functional-377562 | jenkins | v1.32.0 | 18 Mar 24 12:41 UTC |                     |
	|                | -T /mount-9p | grep 9p                                                    |                   |         |         |                     |                     |
	| service        | functional-377562 service list                                            | functional-377562 | jenkins | v1.32.0 | 18 Mar 24 12:41 UTC | 18 Mar 24 12:41 UTC |
	|                | -o json                                                                   |                   |         |         |                     |                     |
	| start          | -p functional-377562                                                      | functional-377562 | jenkins | v1.32.0 | 18 Mar 24 12:41 UTC |                     |
	|                | --dry-run --memory                                                        |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr                                                   |                   |         |         |                     |                     |
	|                | --driver=kvm2                                                             |                   |         |         |                     |                     |
	|                | --container-runtime=crio                                                  |                   |         |         |                     |                     |
	| service        | functional-377562 service                                                 | functional-377562 | jenkins | v1.32.0 | 18 Mar 24 12:41 UTC | 18 Mar 24 12:41 UTC |
	|                | --namespace=default --https                                               |                   |         |         |                     |                     |
	|                | --url hello-node                                                          |                   |         |         |                     |                     |
	| start          | -p functional-377562                                                      | functional-377562 | jenkins | v1.32.0 | 18 Mar 24 12:41 UTC |                     |
	|                | --dry-run --alsologtostderr                                               |                   |         |         |                     |                     |
	|                | -v=1 --driver=kvm2                                                        |                   |         |         |                     |                     |
	|                | --container-runtime=crio                                                  |                   |         |         |                     |                     |
	| ssh            | functional-377562 ssh findmnt                                             | functional-377562 | jenkins | v1.32.0 | 18 Mar 24 12:41 UTC | 18 Mar 24 12:41 UTC |
	|                | -T /mount-9p | grep 9p                                                    |                   |         |         |                     |                     |
	| dashboard      | --url --port 36195                                                        | functional-377562 | jenkins | v1.32.0 | 18 Mar 24 12:41 UTC |                     |
	|                | -p functional-377562                                                      |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                    |                   |         |         |                     |                     |
	| service        | functional-377562                                                         | functional-377562 | jenkins | v1.32.0 | 18 Mar 24 12:41 UTC | 18 Mar 24 12:41 UTC |
	|                | service hello-node --url                                                  |                   |         |         |                     |                     |
	|                | --format={{.IP}}                                                          |                   |         |         |                     |                     |
	| ssh            | functional-377562 ssh -- ls                                               | functional-377562 | jenkins | v1.32.0 | 18 Mar 24 12:41 UTC | 18 Mar 24 12:41 UTC |
	|                | -la /mount-9p                                                             |                   |         |         |                     |                     |
	| ssh            | functional-377562 ssh cat                                                 | functional-377562 | jenkins | v1.32.0 | 18 Mar 24 12:41 UTC | 18 Mar 24 12:41 UTC |
	|                | /mount-9p/test-1710765712934411624                                        |                   |         |         |                     |                     |
	| service        | functional-377562 service                                                 | functional-377562 | jenkins | v1.32.0 | 18 Mar 24 12:41 UTC | 18 Mar 24 12:41 UTC |
	|                | hello-node --url                                                          |                   |         |         |                     |                     |
	| update-context | functional-377562                                                         | functional-377562 | jenkins | v1.32.0 | 18 Mar 24 12:41 UTC | 18 Mar 24 12:41 UTC |
	|                | update-context                                                            |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                    |                   |         |         |                     |                     |
	| update-context | functional-377562                                                         | functional-377562 | jenkins | v1.32.0 | 18 Mar 24 12:41 UTC | 18 Mar 24 12:41 UTC |
	|                | update-context                                                            |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                    |                   |         |         |                     |                     |
	| update-context | functional-377562                                                         | functional-377562 | jenkins | v1.32.0 | 18 Mar 24 12:41 UTC | 18 Mar 24 12:41 UTC |
	|                | update-context                                                            |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                    |                   |         |         |                     |                     |
	| image          | functional-377562                                                         | functional-377562 | jenkins | v1.32.0 | 18 Mar 24 12:41 UTC | 18 Mar 24 12:41 UTC |
	|                | image ls --format short                                                   |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                   |         |         |                     |                     |
	| image          | functional-377562                                                         | functional-377562 | jenkins | v1.32.0 | 18 Mar 24 12:41 UTC |                     |
	|                | image ls --format yaml                                                    |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                   |         |         |                     |                     |
	|----------------|---------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 12:41:53
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 12:41:53.663370 1124354 out.go:291] Setting OutFile to fd 1 ...
	I0318 12:41:53.663642 1124354 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:41:53.663653 1124354 out.go:304] Setting ErrFile to fd 2...
	I0318 12:41:53.663659 1124354 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:41:53.663853 1124354 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
	I0318 12:41:53.664417 1124354 out.go:298] Setting JSON to false
	I0318 12:41:53.665377 1124354 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":15861,"bootTime":1710749853,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 12:41:53.665444 1124354 start.go:139] virtualization: kvm guest
	I0318 12:41:53.667743 1124354 out.go:177] * [functional-377562] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 12:41:53.669464 1124354 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 12:41:53.669525 1124354 notify.go:220] Checking for updates...
	I0318 12:41:53.673406 1124354 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 12:41:53.675001 1124354 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 12:41:53.676573 1124354 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18429-1106816/.minikube
	I0318 12:41:53.678320 1124354 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 12:41:53.679957 1124354 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 12:41:53.682184 1124354 config.go:182] Loaded profile config "functional-377562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:41:53.682965 1124354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:41:53.683035 1124354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:41:53.701396 1124354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39197
	I0318 12:41:53.701821 1124354 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:41:53.702428 1124354 main.go:141] libmachine: Using API Version  1
	I0318 12:41:53.702455 1124354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:41:53.702815 1124354 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:41:53.703003 1124354 main.go:141] libmachine: (functional-377562) Calling .DriverName
	I0318 12:41:53.703338 1124354 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 12:41:53.703774 1124354 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:41:53.703858 1124354 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:41:53.719923 1124354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46015
	I0318 12:41:53.720351 1124354 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:41:53.720813 1124354 main.go:141] libmachine: Using API Version  1
	I0318 12:41:53.720860 1124354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:41:53.721202 1124354 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:41:53.721409 1124354 main.go:141] libmachine: (functional-377562) Calling .DriverName
	I0318 12:41:53.757392 1124354 out.go:177] * Using the kvm2 driver based on existing profile
	I0318 12:41:53.758750 1124354 start.go:297] selected driver: kvm2
	I0318 12:41:53.758780 1124354 start.go:901] validating driver "kvm2" against &{Name:functional-377562 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:functional-377562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.224 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 12:41:53.758927 1124354 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 12:41:53.760218 1124354 cni.go:84] Creating CNI manager for ""
	I0318 12:41:53.760238 1124354 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 12:41:53.760284 1124354 start.go:340] cluster config:
	{Name:functional-377562 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-377562 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.224 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 12:41:53.762141 1124354 out.go:177] * dry-run validation complete!
	
	
	==> CRI-O <==
	Mar 18 12:41:57 functional-377562 crio[12097]: time="2024-03-18 12:41:57.135938960Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710765717135830633,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:188429,},InodesUsed:&UInt64Value{Value:91,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4c539c3d-a791-4f34-859f-c70ccd7750c4 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 12:41:57 functional-377562 crio[12097]: time="2024-03-18 12:41:57.137216695Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6986aad3-800c-419c-80c4-3518d01dfa84 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:41:57 functional-377562 crio[12097]: time="2024-03-18 12:41:57.137271728Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6986aad3-800c-419c-80c4-3518d01dfa84 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:41:57 functional-377562 crio[12097]: time="2024-03-18 12:41:57.137653601Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c27f6e3fdb3a0d2a4d593b2b634884583fccb0c8363dc62c7abeab64e6bc3827,PodSandboxId:e536c1087838c9f1618654167b0460c2539e6759be9984a2ecdd3e7dcdc8a2e2,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1710765705430668808,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-d7447cc7f-tj5fw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 397bcd4a-c19a-4699-8592-38466b9f477d,},Annotations:map[string]string{io.kubernetes.container.hash: 275a8be2,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66c32aecaf581f657ed10653ab1eff3eedd3baf543dc24f0f10f1c02eab45548,PodSandboxId:5c4fdc16488dcdc666475f6cb9a46728ab6d38689729267e51cf2aea57f3a83c,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1710765705356850691,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-55497b8b78-vwqbg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 70bcdcd5-960f-4c1a-89fa-2cbebecf47a0,},Annotations:map[string]string{io.kubernetes.container.hash: 66e1c0e1,io.kubernetes.contain
er.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6139bc346f682438a891d2f3c1ebf0fd3663b35f10c0ac64ac1b3c4ae004eb1,PodSandboxId:1069b7398ab2d963a2d464818feb5131759e922c7e924e5066cebba6228f91fc,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1710765700766685976,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-859648c796-jqn8r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4eed63c8-b398-4c01-b45f-38022afbc70e,},Annotations:map[string]string{io.kubernetes.container.hash: d5fae93c,io.kubernetes.container.ports:
[{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a5fb186fabe31c0de5ea5591df8f741c31d7d037ffcf8f6fb365bffa013ea96,PodSandboxId:47f6ceec4561a2efaf968adaeec884a4772519c58bf907325682c058cf7f4641,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710765675462691759,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1e8be6e-113b-4cba-b714-60d76790deb9,},Annotations:ma
p[string]string{io.kubernetes.container.hash: dd6aa9cb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05a5589af46aad09c310528042a423ed639df2c7002fe62dd3cc5e46113c2bed,PodSandboxId:cc6567ef95b2e9907a1b2c000cc9dca1f876c48938ab5635ee251dcde64305df,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710765673830069205,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-446fd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bc9153b-d6bf-4b1b-b019-e4a06bb9c47f,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: e567fa3c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23c2cd50a2744fd4011997b06e52de7c72dfe891b82981ce0549f9fdf83c3cf5,PodSandboxId:c7264a52342c132834943bc40e9ea53d60b7ebc53046e7d4587e552d2be23075,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710765673688265000,Labels:map[string]string{io.kubernetes.container.name: coredns,io
.kubernetes.pod.name: coredns-5dd5756b68-hrhv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a6041e7-d563-4ceb-9434-798f5a4fcd6e,},Annotations:map[string]string{io.kubernetes.container.hash: eeeb641d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93aa1075b379b829ecbc71ffb5b0a8af235d3736bab053a850429e27a9c70596,PodSandboxId:9d7d9f73e11f7a893175a2e5a0a13789b59065509809d3ceacc96bf33e8d18ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandle
r:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710765673428709754,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mfws7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f847216-d020-4b04-b0dc-3f3effff9b75,},Annotations:map[string]string{io.kubernetes.container.hash: 9dfb27f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c3c68d452b160136cc192cc57dd41dc8aa6e63df46c64307593da705bffe32d,PodSandboxId:f9c6bc5864ea22fc7a48b038daadd27cb69d64bf4a0cd747ec36f07e91a9aa55,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a416745
5f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710765653554809698,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-377562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecfeb5f635d63e512a04a9d99fa12e4b,},Annotations:map[string]string{io.kubernetes.container.hash: cc8f32fe,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cfcc1206e1c5bc5ef01f6bfe29f1a64aae81c5059b512acbfc4651c0133fff1,PodSandboxId:ca29c64788d785050ae0767ccd7c60a320d5c954ba8fdd24fe5f61fb1c04a820,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:9,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9
d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710765653563545025,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-377562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ea3ab1da74fd4b0e011ead867a2a285,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78e156c3263945752d6339e3aeab0ab5d7710fb5b2b51829b6da60b36849734a,PodSandboxId:ba47300c6565ba5dbad267c1b8a8d2950f1fbbe99a639f29ac0139c5d12f0e5d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2
b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710765653553414747,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-377562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edb8bca7751a60f86c770225912ad90d,},Annotations:map[string]string{io.kubernetes.container.hash: fe3637b3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06820a4737c770b9af163611152684dd3e23dfac652bbf10dbfeb45d45d77999,PodSandboxId:b62ff5baac8a0eb93dcaa9f52a3ac2de1ddb491122d179dec2052a1820b4dad3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:6,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109
c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710765653445230731,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-377562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 668a9ab0cd5c6df359f3b13dc0c482d2,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6986aad3-800c-419c-80c4-3518d01dfa84 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:41:57 functional-377562 crio[12097]: time="2024-03-18 12:41:57.190128768Z" level=debug msg="Content-Type from manifest GET is \"application/vnd.docker.distribution.manifest.v2+json\"" file="docker/docker_client.go:964"
	Mar 18 12:41:57 functional-377562 crio[12097]: time="2024-03-18 12:41:57.190350116Z" level=debug msg="IsRunningImageAllowed for image docker:gcr.io/k8s-minikube/busybox:1.28.4-glibc" file="signature/policy_eval.go:274"
	Mar 18 12:41:57 functional-377562 crio[12097]: time="2024-03-18 12:41:57.190387921Z" level=debug msg=" Using default policy section" file="signature/policy_eval.go:162"
	Mar 18 12:41:57 functional-377562 crio[12097]: time="2024-03-18 12:41:57.190417991Z" level=debug msg=" Requirement 0: allowed" file="signature/policy_eval.go:288"
	Mar 18 12:41:57 functional-377562 crio[12097]: time="2024-03-18 12:41:57.190438982Z" level=debug msg="Overall: allowed" file="signature/policy_eval.go:291"
	Mar 18 12:41:57 functional-377562 crio[12097]: time="2024-03-18 12:41:57.190515070Z" level=debug msg="Downloading /v2/k8s-minikube/busybox/blobs/sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c" file="docker/docker_client.go:1038"
	Mar 18 12:41:57 functional-377562 crio[12097]: time="2024-03-18 12:41:57.190582842Z" level=debug msg="GET https://gcr.io/v2/k8s-minikube/busybox/blobs/sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c" file="docker/docker_client.go:631"
	Mar 18 12:41:57 functional-377562 crio[12097]: time="2024-03-18 12:41:57.201142899Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=90c0c5b6-cf81-4fc3-81c5-a7c7791177ae name=/runtime.v1.RuntimeService/Version
	Mar 18 12:41:57 functional-377562 crio[12097]: time="2024-03-18 12:41:57.201237617Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=90c0c5b6-cf81-4fc3-81c5-a7c7791177ae name=/runtime.v1.RuntimeService/Version
	Mar 18 12:41:57 functional-377562 crio[12097]: time="2024-03-18 12:41:57.208847032Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9b2ff514-fe35-4488-8fb9-3bc5a6db21c9 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 12:41:57 functional-377562 crio[12097]: time="2024-03-18 12:41:57.210110823Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710765717210079114,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:188429,},InodesUsed:&UInt64Value{Value:91,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9b2ff514-fe35-4488-8fb9-3bc5a6db21c9 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 12:41:57 functional-377562 crio[12097]: time="2024-03-18 12:41:57.211648690Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=622334ce-b19c-45d9-a1a0-663f82f1e769 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:41:57 functional-377562 crio[12097]: time="2024-03-18 12:41:57.211895218Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=622334ce-b19c-45d9-a1a0-663f82f1e769 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:41:57 functional-377562 crio[12097]: time="2024-03-18 12:41:57.213064592Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c27f6e3fdb3a0d2a4d593b2b634884583fccb0c8363dc62c7abeab64e6bc3827,PodSandboxId:e536c1087838c9f1618654167b0460c2539e6759be9984a2ecdd3e7dcdc8a2e2,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1710765705430668808,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-d7447cc7f-tj5fw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 397bcd4a-c19a-4699-8592-38466b9f477d,},Annotations:map[string]string{io.kubernetes.container.hash: 275a8be2,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66c32aecaf581f657ed10653ab1eff3eedd3baf543dc24f0f10f1c02eab45548,PodSandboxId:5c4fdc16488dcdc666475f6cb9a46728ab6d38689729267e51cf2aea57f3a83c,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1710765705356850691,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-55497b8b78-vwqbg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 70bcdcd5-960f-4c1a-89fa-2cbebecf47a0,},Annotations:map[string]string{io.kubernetes.container.hash: 66e1c0e1,io.kubernetes.contain
er.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6139bc346f682438a891d2f3c1ebf0fd3663b35f10c0ac64ac1b3c4ae004eb1,PodSandboxId:1069b7398ab2d963a2d464818feb5131759e922c7e924e5066cebba6228f91fc,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1710765700766685976,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-859648c796-jqn8r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4eed63c8-b398-4c01-b45f-38022afbc70e,},Annotations:map[string]string{io.kubernetes.container.hash: d5fae93c,io.kubernetes.container.ports:
[{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a5fb186fabe31c0de5ea5591df8f741c31d7d037ffcf8f6fb365bffa013ea96,PodSandboxId:47f6ceec4561a2efaf968adaeec884a4772519c58bf907325682c058cf7f4641,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710765675462691759,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1e8be6e-113b-4cba-b714-60d76790deb9,},Annotations:ma
p[string]string{io.kubernetes.container.hash: dd6aa9cb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05a5589af46aad09c310528042a423ed639df2c7002fe62dd3cc5e46113c2bed,PodSandboxId:cc6567ef95b2e9907a1b2c000cc9dca1f876c48938ab5635ee251dcde64305df,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710765673830069205,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-446fd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bc9153b-d6bf-4b1b-b019-e4a06bb9c47f,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: e567fa3c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23c2cd50a2744fd4011997b06e52de7c72dfe891b82981ce0549f9fdf83c3cf5,PodSandboxId:c7264a52342c132834943bc40e9ea53d60b7ebc53046e7d4587e552d2be23075,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710765673688265000,Labels:map[string]string{io.kubernetes.container.name: coredns,io
.kubernetes.pod.name: coredns-5dd5756b68-hrhv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a6041e7-d563-4ceb-9434-798f5a4fcd6e,},Annotations:map[string]string{io.kubernetes.container.hash: eeeb641d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93aa1075b379b829ecbc71ffb5b0a8af235d3736bab053a850429e27a9c70596,PodSandboxId:9d7d9f73e11f7a893175a2e5a0a13789b59065509809d3ceacc96bf33e8d18ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandle
r:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710765673428709754,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mfws7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f847216-d020-4b04-b0dc-3f3effff9b75,},Annotations:map[string]string{io.kubernetes.container.hash: 9dfb27f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c3c68d452b160136cc192cc57dd41dc8aa6e63df46c64307593da705bffe32d,PodSandboxId:f9c6bc5864ea22fc7a48b038daadd27cb69d64bf4a0cd747ec36f07e91a9aa55,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a416745
5f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710765653554809698,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-377562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecfeb5f635d63e512a04a9d99fa12e4b,},Annotations:map[string]string{io.kubernetes.container.hash: cc8f32fe,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cfcc1206e1c5bc5ef01f6bfe29f1a64aae81c5059b512acbfc4651c0133fff1,PodSandboxId:ca29c64788d785050ae0767ccd7c60a320d5c954ba8fdd24fe5f61fb1c04a820,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:9,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9
d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710765653563545025,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-377562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ea3ab1da74fd4b0e011ead867a2a285,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78e156c3263945752d6339e3aeab0ab5d7710fb5b2b51829b6da60b36849734a,PodSandboxId:ba47300c6565ba5dbad267c1b8a8d2950f1fbbe99a639f29ac0139c5d12f0e5d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2
b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710765653553414747,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-377562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edb8bca7751a60f86c770225912ad90d,},Annotations:map[string]string{io.kubernetes.container.hash: fe3637b3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06820a4737c770b9af163611152684dd3e23dfac652bbf10dbfeb45d45d77999,PodSandboxId:b62ff5baac8a0eb93dcaa9f52a3ac2de1ddb491122d179dec2052a1820b4dad3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:6,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109
c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710765653445230731,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-377562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 668a9ab0cd5c6df359f3b13dc0c482d2,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=622334ce-b19c-45d9-a1a0-663f82f1e769 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:41:57 functional-377562 crio[12097]: time="2024-03-18 12:41:57.280611489Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e23fef03-ff25-4252-b169-a5141e0f5ffc name=/runtime.v1.RuntimeService/Version
	Mar 18 12:41:57 functional-377562 crio[12097]: time="2024-03-18 12:41:57.280682797Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e23fef03-ff25-4252-b169-a5141e0f5ffc name=/runtime.v1.RuntimeService/Version
	Mar 18 12:41:57 functional-377562 crio[12097]: time="2024-03-18 12:41:57.287949254Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=536a82ef-f2ce-411a-a9d2-ea905a52ef17 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 12:41:57 functional-377562 crio[12097]: time="2024-03-18 12:41:57.288650089Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710765717288620435,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:188429,},InodesUsed:&UInt64Value{Value:91,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=536a82ef-f2ce-411a-a9d2-ea905a52ef17 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 12:41:57 functional-377562 crio[12097]: time="2024-03-18 12:41:57.289256161Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=48700f03-8832-4daa-a8a4-4c0c04751cf1 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:41:57 functional-377562 crio[12097]: time="2024-03-18 12:41:57.289413302Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=48700f03-8832-4daa-a8a4-4c0c04751cf1 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:41:57 functional-377562 crio[12097]: time="2024-03-18 12:41:57.289687281Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c27f6e3fdb3a0d2a4d593b2b634884583fccb0c8363dc62c7abeab64e6bc3827,PodSandboxId:e536c1087838c9f1618654167b0460c2539e6759be9984a2ecdd3e7dcdc8a2e2,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1710765705430668808,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-d7447cc7f-tj5fw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 397bcd4a-c19a-4699-8592-38466b9f477d,},Annotations:map[string]string{io.kubernetes.container.hash: 275a8be2,io.kubernetes.container.restartCount:
0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66c32aecaf581f657ed10653ab1eff3eedd3baf543dc24f0f10f1c02eab45548,PodSandboxId:5c4fdc16488dcdc666475f6cb9a46728ab6d38689729267e51cf2aea57f3a83c,Metadata:&ContainerMetadata{Name:echoserver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410,State:CONTAINER_RUNNING,CreatedAt:1710765705356850691,Labels:map[string]string{io.kubernetes.container.name: echoserver,io.kubernetes.pod.name: hello-node-connect-55497b8b78-vwqbg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 70bcdcd5-960f-4c1a-89fa-2cbebecf47a0,},Annotations:map[string]string{io.kubernetes.container.hash: 66e1c0e1,io.kubernetes.contain
er.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6139bc346f682438a891d2f3c1ebf0fd3663b35f10c0ac64ac1b3c4ae004eb1,PodSandboxId:1069b7398ab2d963a2d464818feb5131759e922c7e924e5066cebba6228f91fc,Metadata:&ContainerMetadata{Name:mysql,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933,State:CONTAINER_RUNNING,CreatedAt:1710765700766685976,Labels:map[string]string{io.kubernetes.container.name: mysql,io.kubernetes.pod.name: mysql-859648c796-jqn8r,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 4eed63c8-b398-4c01-b45f-38022afbc70e,},Annotations:map[string]string{io.kubernetes.container.hash: d5fae93c,io.kubernetes.container.ports:
[{\"name\":\"mysql\",\"containerPort\":3306,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7a5fb186fabe31c0de5ea5591df8f741c31d7d037ffcf8f6fb365bffa013ea96,PodSandboxId:47f6ceec4561a2efaf968adaeec884a4772519c58bf907325682c058cf7f4641,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710765675462691759,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1e8be6e-113b-4cba-b714-60d76790deb9,},Annotations:ma
p[string]string{io.kubernetes.container.hash: dd6aa9cb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05a5589af46aad09c310528042a423ed639df2c7002fe62dd3cc5e46113c2bed,PodSandboxId:cc6567ef95b2e9907a1b2c000cc9dca1f876c48938ab5635ee251dcde64305df,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710765673830069205,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-446fd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bc9153b-d6bf-4b1b-b019-e4a06bb9c47f,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: e567fa3c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23c2cd50a2744fd4011997b06e52de7c72dfe891b82981ce0549f9fdf83c3cf5,PodSandboxId:c7264a52342c132834943bc40e9ea53d60b7ebc53046e7d4587e552d2be23075,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710765673688265000,Labels:map[string]string{io.kubernetes.container.name: coredns,io
.kubernetes.pod.name: coredns-5dd5756b68-hrhv8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a6041e7-d563-4ceb-9434-798f5a4fcd6e,},Annotations:map[string]string{io.kubernetes.container.hash: eeeb641d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:93aa1075b379b829ecbc71ffb5b0a8af235d3736bab053a850429e27a9c70596,PodSandboxId:9d7d9f73e11f7a893175a2e5a0a13789b59065509809d3ceacc96bf33e8d18ff,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandle
r:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710765673428709754,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mfws7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f847216-d020-4b04-b0dc-3f3effff9b75,},Annotations:map[string]string{io.kubernetes.container.hash: 9dfb27f1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c3c68d452b160136cc192cc57dd41dc8aa6e63df46c64307593da705bffe32d,PodSandboxId:f9c6bc5864ea22fc7a48b038daadd27cb69d64bf4a0cd747ec36f07e91a9aa55,Metadata:&ContainerMetadata{Name:etcd,Attempt:4,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a416745
5f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710765653554809698,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-functional-377562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ecfeb5f635d63e512a04a9d99fa12e4b,},Annotations:map[string]string{io.kubernetes.container.hash: cc8f32fe,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0cfcc1206e1c5bc5ef01f6bfe29f1a64aae81c5059b512acbfc4651c0133fff1,PodSandboxId:ca29c64788d785050ae0767ccd7c60a320d5c954ba8fdd24fe5f61fb1c04a820,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:9,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9
d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710765653563545025,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-functional-377562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0ea3ab1da74fd4b0e011ead867a2a285,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 9,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78e156c3263945752d6339e3aeab0ab5d7710fb5b2b51829b6da60b36849734a,PodSandboxId:ba47300c6565ba5dbad267c1b8a8d2950f1fbbe99a639f29ac0139c5d12f0e5d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2
b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710765653553414747,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-functional-377562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: edb8bca7751a60f86c770225912ad90d,},Annotations:map[string]string{io.kubernetes.container.hash: fe3637b3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06820a4737c770b9af163611152684dd3e23dfac652bbf10dbfeb45d45d77999,PodSandboxId:b62ff5baac8a0eb93dcaa9f52a3ac2de1ddb491122d179dec2052a1820b4dad3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:6,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109
c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710765653445230731,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-functional-377562,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 668a9ab0cd5c6df359f3b13dc0c482d2,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 6,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=48700f03-8832-4daa-a8a4-4c0c04751cf1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	c27f6e3fdb3a0       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969   11 seconds ago       Running             echoserver                0                   e536c1087838c       hello-node-d7447cc7f-tj5fw
	66c32aecaf581       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969   12 seconds ago       Running             echoserver                0                   5c4fdc16488dc       hello-node-connect-55497b8b78-vwqbg
	c6139bc346f68       docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb      16 seconds ago       Running             mysql                     0                   1069b7398ab2d       mysql-859648c796-jqn8r
	7a5fb186fabe3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                     41 seconds ago       Running             storage-provisioner       0                   47f6ceec4561a       storage-provisioner
	05a5589af46aa       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                     43 seconds ago       Running             coredns                   0                   cc6567ef95b2e       coredns-5dd5756b68-446fd
	23c2cd50a2744       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                     43 seconds ago       Running             coredns                   0                   c7264a52342c1       coredns-5dd5756b68-hrhv8
	93aa1075b379b       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                     43 seconds ago       Running             kube-proxy                0                   9d7d9f73e11f7       kube-proxy-mfws7
	0cfcc1206e1c5       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                     About a minute ago   Running             kube-controller-manager   9                   ca29c64788d78       kube-controller-manager-functional-377562
	4c3c68d452b16       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                     About a minute ago   Running             etcd                      4                   f9c6bc5864ea2       etcd-functional-377562
	78e156c326394       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                     About a minute ago   Running             kube-apiserver            1                   ba47300c6565b       kube-apiserver-functional-377562
	06820a4737c77       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                     About a minute ago   Running             kube-scheduler            6                   b62ff5baac8a0       kube-scheduler-functional-377562
	
	
	==> coredns [05a5589af46aad09c310528042a423ed639df2c7002fe62dd3cc5e46113c2bed] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> coredns [23c2cd50a2744fd4011997b06e52de7c72dfe891b82981ce0549f9fdf83c3cf5] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> describe nodes <==
	Name:               functional-377562
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-377562
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	                    minikube.k8s.io/name=functional-377562
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T12_40_59_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 12:40:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-377562
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 12:41:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 12:41:20 +0000   Mon, 18 Mar 2024 12:40:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 12:41:20 +0000   Mon, 18 Mar 2024 12:40:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 12:41:20 +0000   Mon, 18 Mar 2024 12:40:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 12:41:20 +0000   Mon, 18 Mar 2024 12:41:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.224
	  Hostname:    functional-377562
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912792Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912792Ki
	  pods:               110
	System Info:
	  Machine ID:                 48b327ada65a4fa990601b4689a628b5
	  System UUID:                48b327ad-a65a-4fa9-9060-1b4689a628b5
	  Boot ID:                    ed2f21c6-2bae-40d2-8672-20738c86f94b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox-mount                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  default                     hello-node-connect-55497b8b78-vwqbg           0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  default                     hello-node-d7447cc7f-tj5fw                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  default                     mysql-859648c796-jqn8r                        600m (30%)    700m (35%)  512Mi (13%)      700Mi (18%)    33s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         0s
	  kube-system                 coredns-5dd5756b68-446fd                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     44s
	  kube-system                 coredns-5dd5756b68-hrhv8                      100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     45s
	  kube-system                 etcd-functional-377562                        100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         58s
	  kube-system                 kube-apiserver-functional-377562              250m (12%)    0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-controller-manager-functional-377562     200m (10%)    0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-proxy-mfws7                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kube-scheduler-functional-377562              100m (5%)     0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  kubernetes-dashboard        dashboard-metrics-scraper-7fd5cb4ddc-7s4zx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-l2ftg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (72%)  700m (35%)
	  memory             752Mi (19%)  1040Mi (27%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 43s                kube-proxy       
	  Normal  Starting                 65s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  65s (x8 over 65s)  kubelet          Node functional-377562 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    65s (x8 over 65s)  kubelet          Node functional-377562 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     65s (x7 over 65s)  kubelet          Node functional-377562 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  65s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 58s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s                kubelet          Node functional-377562 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s                kubelet          Node functional-377562 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s                kubelet          Node functional-377562 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             58s                kubelet          Node functional-377562 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  58s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                47s                kubelet          Node functional-377562 status is now: NodeReady
	  Normal  RegisteredNode           45s                node-controller  Node functional-377562 event: Registered Node functional-377562 in Controller
	
	
	==> dmesg <==
	[Mar18 12:33] systemd-fstab-generator[9609]: Ignoring "noauto" option for root device
	[Mar18 12:34] systemd-fstab-generator[9932]: Ignoring "noauto" option for root device
	[  +0.087546] kauditd_printk_skb: 72 callbacks suppressed
	[ +12.821315] systemd-fstab-generator[10127]: Ignoring "noauto" option for root device
	[  +0.100669] kauditd_printk_skb: 12 callbacks suppressed
	[ +10.363786] kauditd_printk_skb: 68 callbacks suppressed
	[ +18.605311] systemd-fstab-generator[11779]: Ignoring "noauto" option for root device
	[  +0.304285] systemd-fstab-generator[11836]: Ignoring "noauto" option for root device
	[  +0.303550] systemd-fstab-generator[11891]: Ignoring "noauto" option for root device
	[  +0.195065] systemd-fstab-generator[11909]: Ignoring "noauto" option for root device
	[  +0.319691] systemd-fstab-generator[11937]: Ignoring "noauto" option for root device
	[Mar18 12:36] kauditd_printk_skb: 195 callbacks suppressed
	[  +1.085713] systemd-fstab-generator[12432]: Ignoring "noauto" option for root device
	[  +2.044242] systemd-fstab-generator[12579]: Ignoring "noauto" option for root device
	[  +4.333067] kauditd_printk_skb: 117 callbacks suppressed
	[ +12.311702] kauditd_printk_skb: 5 callbacks suppressed
	[Mar18 12:40] systemd-fstab-generator[14654]: Ignoring "noauto" option for root device
	[  +7.261827] systemd-fstab-generator[14983]: Ignoring "noauto" option for root device
	[  +0.077876] kauditd_printk_skb: 77 callbacks suppressed
	[Mar18 12:41] systemd-fstab-generator[15189]: Ignoring "noauto" option for root device
	[  +0.057070] kauditd_printk_skb: 12 callbacks suppressed
	[  +6.249124] kauditd_printk_skb: 68 callbacks suppressed
	[  +5.299355] kauditd_printk_skb: 15 callbacks suppressed
	[ +15.986954] kauditd_printk_skb: 30 callbacks suppressed
	[  +5.026556] kauditd_printk_skb: 9 callbacks suppressed
	
	
	==> etcd [4c3c68d452b160136cc192cc57dd41dc8aa6e63df46c64307593da705bffe32d] <==
	{"level":"info","ts":"2024-03-18T12:40:54.135501Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T12:40:54.135867Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-18T12:40:54.135981Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-18T12:40:54.136019Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-18T12:41:36.056004Z","caller":"traceutil/trace.go:171","msg":"trace[1391018916] linearizableReadLoop","detail":"{readStateIndex:538; appliedIndex:537; }","duration":"254.06194ms","start":"2024-03-18T12:41:35.801924Z","end":"2024-03-18T12:41:36.055986Z","steps":["trace[1391018916] 'read index received'  (duration: 253.855932ms)","trace[1391018916] 'applied index is now lower than readState.Index'  (duration: 205.434µs)"],"step_count":2}
	{"level":"info","ts":"2024-03-18T12:41:36.057339Z","caller":"traceutil/trace.go:171","msg":"trace[1606048389] transaction","detail":"{read_only:false; response_revision:523; number_of_response:1; }","duration":"357.836132ms","start":"2024-03-18T12:41:35.699403Z","end":"2024-03-18T12:41:36.057239Z","steps":["trace[1606048389] 'process raft request'  (duration: 356.423671ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T12:41:36.057875Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T12:41:35.699379Z","time spent":"358.078511ms","remote":"127.0.0.1:36154","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:522 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-03-18T12:41:36.058929Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"187.084265ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:3 size:7765"}
	{"level":"info","ts":"2024-03-18T12:41:36.059408Z","caller":"traceutil/trace.go:171","msg":"trace[1203477122] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:3; response_revision:523; }","duration":"187.57236ms","start":"2024-03-18T12:41:35.871826Z","end":"2024-03-18T12:41:36.059398Z","steps":["trace[1203477122] 'agreement among raft nodes before linearized reading'  (duration: 187.024599ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T12:41:36.058976Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"257.074705ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-18T12:41:36.060088Z","caller":"traceutil/trace.go:171","msg":"trace[1640367077] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:523; }","duration":"258.180916ms","start":"2024-03-18T12:41:35.801896Z","end":"2024-03-18T12:41:36.060077Z","steps":["trace[1640367077] 'agreement among raft nodes before linearized reading'  (duration: 257.064921ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T12:41:40.377652Z","caller":"traceutil/trace.go:171","msg":"trace[1629141782] transaction","detail":"{read_only:false; response_revision:527; number_of_response:1; }","duration":"293.874536ms","start":"2024-03-18T12:41:40.083763Z","end":"2024-03-18T12:41:40.377637Z","steps":["trace[1629141782] 'process raft request'  (duration: 293.764487ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T12:41:41.947928Z","caller":"traceutil/trace.go:171","msg":"trace[113947201] transaction","detail":"{read_only:false; response_revision:532; number_of_response:1; }","duration":"227.263063ms","start":"2024-03-18T12:41:41.720653Z","end":"2024-03-18T12:41:41.947916Z","steps":["trace[113947201] 'process raft request'  (duration: 226.836008ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T12:41:42.287773Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.06677ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2334991412845611023 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/deployments/default/mysql\" mod_revision:475 > success:<request_put:<key:\"/registry/deployments/default/mysql\" value_size:2222 >> failure:<request_range:<key:\"/registry/deployments/default/mysql\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-18T12:41:42.287875Z","caller":"traceutil/trace.go:171","msg":"trace[1810000307] transaction","detail":"{read_only:false; response_revision:536; number_of_response:1; }","duration":"205.954614ms","start":"2024-03-18T12:41:42.081905Z","end":"2024-03-18T12:41:42.28786Z","steps":["trace[1810000307] 'process raft request'  (duration: 72.571715ms)","trace[1810000307] 'compare'  (duration: 132.88273ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-18T12:41:44.870186Z","caller":"traceutil/trace.go:171","msg":"trace[1289068684] linearizableReadLoop","detail":"{readStateIndex:555; appliedIndex:554; }","duration":"392.092971ms","start":"2024-03-18T12:41:44.47808Z","end":"2024-03-18T12:41:44.870172Z","steps":["trace[1289068684] 'read index received'  (duration: 391.924401ms)","trace[1289068684] 'applied index is now lower than readState.Index'  (duration: 168.128µs)"],"step_count":2}
	{"level":"info","ts":"2024-03-18T12:41:44.870578Z","caller":"traceutil/trace.go:171","msg":"trace[7960326] transaction","detail":"{read_only:false; response_revision:538; number_of_response:1; }","duration":"437.348126ms","start":"2024-03-18T12:41:44.43321Z","end":"2024-03-18T12:41:44.870558Z","steps":["trace[7960326] 'process raft request'  (duration: 436.837184ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T12:41:44.87271Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T12:41:44.433196Z","time spent":"439.45018ms","remote":"127.0.0.1:36154","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1102,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:537 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1029 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-03-18T12:41:44.87299Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"348.756249ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:3 size:7845"}
	{"level":"warn","ts":"2024-03-18T12:41:44.870702Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"392.646596ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:3 size:7845"}
	{"level":"info","ts":"2024-03-18T12:41:44.87519Z","caller":"traceutil/trace.go:171","msg":"trace[785006236] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:3; response_revision:538; }","duration":"350.96161ms","start":"2024-03-18T12:41:44.524216Z","end":"2024-03-18T12:41:44.875178Z","steps":["trace[785006236] 'agreement among raft nodes before linearized reading'  (duration: 348.658917ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T12:41:44.875387Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T12:41:44.524203Z","time spent":"351.168426ms","remote":"127.0.0.1:36174","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":3,"response size":7868,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	{"level":"info","ts":"2024-03-18T12:41:44.875269Z","caller":"traceutil/trace.go:171","msg":"trace[1408811091] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:3; response_revision:538; }","duration":"397.234078ms","start":"2024-03-18T12:41:44.478026Z","end":"2024-03-18T12:41:44.87526Z","steps":["trace[1408811091] 'agreement among raft nodes before linearized reading'  (duration: 392.584398ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T12:41:44.879935Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T12:41:44.477955Z","time spent":"401.912176ms","remote":"127.0.0.1:36174","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":3,"response size":7868,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	{"level":"info","ts":"2024-03-18T12:41:54.759697Z","caller":"traceutil/trace.go:171","msg":"trace[1560452464] transaction","detail":"{read_only:false; response_revision:562; number_of_response:1; }","duration":"271.454792ms","start":"2024-03-18T12:41:54.488229Z","end":"2024-03-18T12:41:54.759684Z","steps":["trace[1560452464] 'process raft request'  (duration: 271.344695ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:41:57 up 15 min,  0 users,  load average: 2.24, 0.81, 0.37
	Linux functional-377562 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [78e156c3263945752d6339e3aeab0ab5d7710fb5b2b51829b6da60b36849734a] <==
	I0318 12:40:56.327534       1 autoregister_controller.go:141] Starting autoregister controller
	I0318 12:40:56.327540       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0318 12:40:56.327546       1 cache.go:39] Caches are synced for autoregister controller
	I0318 12:40:56.346240       1 shared_informer.go:318] Caches are synced for configmaps
	I0318 12:40:57.125459       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0318 12:40:57.129685       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0318 12:40:57.129696       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0318 12:40:57.776828       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0318 12:40:57.815050       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0318 12:40:57.963610       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0318 12:40:57.973619       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.224]
	I0318 12:40:57.974611       1 controller.go:624] quota admission added evaluator for: endpoints
	I0318 12:40:57.979374       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0318 12:40:58.206146       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0318 12:40:59.570479       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0318 12:40:59.591894       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0318 12:40:59.610065       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0318 12:41:12.748556       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0318 12:41:12.847712       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0318 12:41:19.524549       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.104.144.222"}
	I0318 12:41:24.812619       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.98.212.100"}
	I0318 12:41:26.510672       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.110.123.147"}
	I0318 12:41:27.469050       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.110.242.124"}
	I0318 12:41:55.912773       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.222.238"}
	I0318 12:41:55.948682       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.27.161"}
	
	
	==> kube-controller-manager [0cfcc1206e1c5bc5ef01f6bfe29f1a64aae81c5059b512acbfc4651c0133fff1] <==
	I0318 12:41:55.585415       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="32.586242ms"
	E0318 12:41:55.585510       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" failed with pods "dashboard-metrics-scraper-7fd5cb4ddc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0318 12:41:55.585750       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7fd5cb4ddc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0318 12:41:55.597561       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="12.012182ms"
	E0318 12:41:55.597604       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" failed with pods "dashboard-metrics-scraper-7fd5cb4ddc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0318 12:41:55.597640       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7fd5cb4ddc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0318 12:41:55.627873       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="100.594205ms"
	E0318 12:41:55.627930       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-8694d4445c" failed with pods "kubernetes-dashboard-8694d4445c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0318 12:41:55.633601       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="27.570332ms"
	E0318 12:41:55.633747       1 replica_set.go:557] sync "kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" failed with pods "dashboard-metrics-scraper-7fd5cb4ddc-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0318 12:41:55.633899       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"dashboard-metrics-scraper-7fd5cb4ddc-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0318 12:41:55.640515       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.523906ms"
	E0318 12:41:55.640556       1 replica_set.go:557] sync "kubernetes-dashboard/kubernetes-dashboard-8694d4445c" failed with pods "kubernetes-dashboard-8694d4445c-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0318 12:41:55.640639       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"kubernetes-dashboard-8694d4445c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found"
	I0318 12:41:55.669232       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-l2ftg"
	I0318 12:41:55.682170       1 event.go:307] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'k8s.io/minikube-hostpath' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0318 12:41:55.687829       1 event.go:307] "Event occurred" object="default/myclaim" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'k8s.io/minikube-hostpath' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0318 12:41:55.712614       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="72.017105ms"
	I0318 12:41:55.740151       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="27.464243ms"
	I0318 12:41:55.740265       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="49.181µs"
	I0318 12:41:55.762886       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-7fd5cb4ddc-7s4zx"
	I0318 12:41:55.780665       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="143.928µs"
	I0318 12:41:55.808888       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="93.460078ms"
	I0318 12:41:55.884008       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="75.046585ms"
	I0318 12:41:55.884229       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc" duration="70.618µs"
	
	
	==> kube-proxy [93aa1075b379b829ecbc71ffb5b0a8af235d3736bab053a850429e27a9c70596] <==
	I0318 12:41:13.941386       1 server_others.go:69] "Using iptables proxy"
	I0318 12:41:13.955836       1 node.go:141] Successfully retrieved node IP: 192.168.39.224
	I0318 12:41:14.155766       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 12:41:14.155816       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 12:41:14.164082       1 server_others.go:152] "Using iptables Proxier"
	I0318 12:41:14.164160       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 12:41:14.164466       1 server.go:846] "Version info" version="v1.28.4"
	I0318 12:41:14.164476       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:41:14.165883       1 config.go:188] "Starting service config controller"
	I0318 12:41:14.165927       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 12:41:14.165950       1 config.go:97] "Starting endpoint slice config controller"
	I0318 12:41:14.165954       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 12:41:14.167716       1 config.go:315] "Starting node config controller"
	I0318 12:41:14.167815       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 12:41:14.266406       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 12:41:14.266517       1 shared_informer.go:318] Caches are synced for service config
	I0318 12:41:14.269013       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [06820a4737c770b9af163611152684dd3e23dfac652bbf10dbfeb45d45d77999] <==
	W0318 12:40:56.275583       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0318 12:40:56.275701       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0318 12:40:56.276672       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0318 12:40:56.276813       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0318 12:40:56.276827       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0318 12:40:56.277079       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0318 12:40:56.277195       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0318 12:40:56.277096       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0318 12:40:57.149111       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0318 12:40:57.149479       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0318 12:40:57.159478       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0318 12:40:57.159536       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0318 12:40:57.350755       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0318 12:40:57.350873       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0318 12:40:57.378993       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0318 12:40:57.379079       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0318 12:40:57.414944       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0318 12:40:57.415021       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0318 12:40:57.476572       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0318 12:40:57.476652       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0318 12:40:57.477218       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0318 12:40:57.477369       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0318 12:40:57.691743       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0318 12:40:57.691865       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 12:41:00.355635       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 18 12:41:23 functional-377562 kubelet[14990]: I0318 12:41:23.149961   14990 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7z58l\" (UniqueName: \"kubernetes.io/projected/7cf182df-68c9-4847-b99d-d029144df008-kube-api-access-7z58l\") pod \"7cf182df-68c9-4847-b99d-d029144df008\" (UID: \"7cf182df-68c9-4847-b99d-d029144df008\") "
	Mar 18 12:41:23 functional-377562 kubelet[14990]: I0318 12:41:23.156086   14990 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cf182df-68c9-4847-b99d-d029144df008-kube-api-access-7z58l" (OuterVolumeSpecName: "kube-api-access-7z58l") pod "7cf182df-68c9-4847-b99d-d029144df008" (UID: "7cf182df-68c9-4847-b99d-d029144df008"). InnerVolumeSpecName "kube-api-access-7z58l". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 18 12:41:23 functional-377562 kubelet[14990]: I0318 12:41:23.250503   14990 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-7z58l\" (UniqueName: \"kubernetes.io/projected/7cf182df-68c9-4847-b99d-d029144df008-kube-api-access-7z58l\") on node \"functional-377562\" DevicePath \"\""
	Mar 18 12:41:24 functional-377562 kubelet[14990]: I0318 12:41:24.921416   14990 topology_manager.go:215] "Topology Admit Handler" podUID="4eed63c8-b398-4c01-b45f-38022afbc70e" podNamespace="default" podName="mysql-859648c796-jqn8r"
	Mar 18 12:41:24 functional-377562 kubelet[14990]: I0318 12:41:24.965901   14990 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qjh2\" (UniqueName: \"kubernetes.io/projected/4eed63c8-b398-4c01-b45f-38022afbc70e-kube-api-access-7qjh2\") pod \"mysql-859648c796-jqn8r\" (UID: \"4eed63c8-b398-4c01-b45f-38022afbc70e\") " pod="default/mysql-859648c796-jqn8r"
	Mar 18 12:41:25 functional-377562 kubelet[14990]: I0318 12:41:25.700779   14990 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="7cf182df-68c9-4847-b99d-d029144df008" path="/var/lib/kubelet/pods/7cf182df-68c9-4847-b99d-d029144df008/volumes"
	Mar 18 12:41:26 functional-377562 kubelet[14990]: I0318 12:41:26.410744   14990 topology_manager.go:215] "Topology Admit Handler" podUID="70bcdcd5-960f-4c1a-89fa-2cbebecf47a0" podNamespace="default" podName="hello-node-connect-55497b8b78-vwqbg"
	Mar 18 12:41:26 functional-377562 kubelet[14990]: I0318 12:41:26.481492   14990 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bct5d\" (UniqueName: \"kubernetes.io/projected/70bcdcd5-960f-4c1a-89fa-2cbebecf47a0-kube-api-access-bct5d\") pod \"hello-node-connect-55497b8b78-vwqbg\" (UID: \"70bcdcd5-960f-4c1a-89fa-2cbebecf47a0\") " pod="default/hello-node-connect-55497b8b78-vwqbg"
	Mar 18 12:41:27 functional-377562 kubelet[14990]: I0318 12:41:27.390762   14990 topology_manager.go:215] "Topology Admit Handler" podUID="397bcd4a-c19a-4699-8592-38466b9f477d" podNamespace="default" podName="hello-node-d7447cc7f-tj5fw"
	Mar 18 12:41:27 functional-377562 kubelet[14990]: I0318 12:41:27.489145   14990 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfszw\" (UniqueName: \"kubernetes.io/projected/397bcd4a-c19a-4699-8592-38466b9f477d-kube-api-access-sfszw\") pod \"hello-node-d7447cc7f-tj5fw\" (UID: \"397bcd4a-c19a-4699-8592-38466b9f477d\") " pod="default/hello-node-d7447cc7f-tj5fw"
	Mar 18 12:41:41 functional-377562 kubelet[14990]: I0318 12:41:41.953988   14990 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/mysql-859648c796-jqn8r" podStartSLOduration=2.647470027 podCreationTimestamp="2024-03-18 12:41:24 +0000 UTC" firstStartedPulling="2024-03-18 12:41:25.441143084 +0000 UTC m=+25.903315629" lastFinishedPulling="2024-03-18 12:41:40.747622886 +0000 UTC m=+41.209795418" observedRunningTime="2024-03-18 12:41:41.953677642 +0000 UTC m=+42.415850171" watchObservedRunningTime="2024-03-18 12:41:41.953949816 +0000 UTC m=+42.416122363"
	Mar 18 12:41:45 functional-377562 kubelet[14990]: I0318 12:41:45.788517   14990 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-node-d7447cc7f-tj5fw" podStartSLOduration=1.402159889 podCreationTimestamp="2024-03-18 12:41:27 +0000 UTC" firstStartedPulling="2024-03-18 12:41:28.023819867 +0000 UTC m=+28.485992397" lastFinishedPulling="2024-03-18 12:41:45.41013917 +0000 UTC m=+45.872311703" observedRunningTime="2024-03-18 12:41:45.736101979 +0000 UTC m=+46.198274526" watchObservedRunningTime="2024-03-18 12:41:45.788479195 +0000 UTC m=+46.250651747"
	Mar 18 12:41:54 functional-377562 kubelet[14990]: I0318 12:41:54.778184   14990 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-node-connect-55497b8b78-vwqbg" podStartSLOduration=10.361881852 podCreationTimestamp="2024-03-18 12:41:26 +0000 UTC" firstStartedPulling="2024-03-18 12:41:26.913945641 +0000 UTC m=+27.376118184" lastFinishedPulling="2024-03-18 12:41:45.33000298 +0000 UTC m=+45.792175519" observedRunningTime="2024-03-18 12:41:45.789507463 +0000 UTC m=+46.251679994" watchObservedRunningTime="2024-03-18 12:41:54.777939187 +0000 UTC m=+55.240111738"
	Mar 18 12:41:54 functional-377562 kubelet[14990]: I0318 12:41:54.780249   14990 topology_manager.go:215] "Topology Admit Handler" podUID="fb628a15-8a79-4e4b-98e4-8725b1363bfd" podNamespace="default" podName="busybox-mount"
	Mar 18 12:41:54 functional-377562 kubelet[14990]: I0318 12:41:54.926507   14990 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87ggw\" (UniqueName: \"kubernetes.io/projected/fb628a15-8a79-4e4b-98e4-8725b1363bfd-kube-api-access-87ggw\") pod \"busybox-mount\" (UID: \"fb628a15-8a79-4e4b-98e4-8725b1363bfd\") " pod="default/busybox-mount"
	Mar 18 12:41:54 functional-377562 kubelet[14990]: I0318 12:41:54.926585   14990 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/fb628a15-8a79-4e4b-98e4-8725b1363bfd-test-volume\") pod \"busybox-mount\" (UID: \"fb628a15-8a79-4e4b-98e4-8725b1363bfd\") " pod="default/busybox-mount"
	Mar 18 12:41:55 functional-377562 kubelet[14990]: I0318 12:41:55.706991   14990 topology_manager.go:215] "Topology Admit Handler" podUID="7e58490b-c663-4819-84d3-eeae89086154" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-l2ftg"
	Mar 18 12:41:55 functional-377562 kubelet[14990]: I0318 12:41:55.733244   14990 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nshmt\" (UniqueName: \"kubernetes.io/projected/7e58490b-c663-4819-84d3-eeae89086154-kube-api-access-nshmt\") pod \"kubernetes-dashboard-8694d4445c-l2ftg\" (UID: \"7e58490b-c663-4819-84d3-eeae89086154\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-l2ftg"
	Mar 18 12:41:55 functional-377562 kubelet[14990]: I0318 12:41:55.733352   14990 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7e58490b-c663-4819-84d3-eeae89086154-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-l2ftg\" (UID: \"7e58490b-c663-4819-84d3-eeae89086154\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-l2ftg"
	Mar 18 12:41:55 functional-377562 kubelet[14990]: I0318 12:41:55.809933   14990 topology_manager.go:215] "Topology Admit Handler" podUID="f2ae3c6d-e186-4b7c-966e-19133223e650" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-7fd5cb4ddc-7s4zx"
	Mar 18 12:41:55 functional-377562 kubelet[14990]: I0318 12:41:55.934618   14990 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkp6t\" (UniqueName: \"kubernetes.io/projected/f2ae3c6d-e186-4b7c-966e-19133223e650-kube-api-access-tkp6t\") pod \"dashboard-metrics-scraper-7fd5cb4ddc-7s4zx\" (UID: \"f2ae3c6d-e186-4b7c-966e-19133223e650\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc-7s4zx"
	Mar 18 12:41:55 functional-377562 kubelet[14990]: I0318 12:41:55.934841   14990 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f2ae3c6d-e186-4b7c-966e-19133223e650-tmp-volume\") pod \"dashboard-metrics-scraper-7fd5cb4ddc-7s4zx\" (UID: \"f2ae3c6d-e186-4b7c-966e-19133223e650\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-7fd5cb4ddc-7s4zx"
	Mar 18 12:41:57 functional-377562 kubelet[14990]: I0318 12:41:57.557844   14990 topology_manager.go:215] "Topology Admit Handler" podUID="45689916-7ca8-4bcd-9828-fd86c56f79c6" podNamespace="default" podName="sp-pod"
	Mar 18 12:41:57 functional-377562 kubelet[14990]: I0318 12:41:57.753271   14990 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dk6xt\" (UniqueName: \"kubernetes.io/projected/45689916-7ca8-4bcd-9828-fd86c56f79c6-kube-api-access-dk6xt\") pod \"sp-pod\" (UID: \"45689916-7ca8-4bcd-9828-fd86c56f79c6\") " pod="default/sp-pod"
	Mar 18 12:41:57 functional-377562 kubelet[14990]: I0318 12:41:57.753557   14990 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-85842bd8-5102-4c21-802c-23047154e3ea\" (UniqueName: \"kubernetes.io/host-path/45689916-7ca8-4bcd-9828-fd86c56f79c6-pvc-85842bd8-5102-4c21-802c-23047154e3ea\") pod \"sp-pod\" (UID: \"45689916-7ca8-4bcd-9828-fd86c56f79c6\") " pod="default/sp-pod"
	
	
	==> storage-provisioner [7a5fb186fabe31c0de5ea5591df8f741c31d7d037ffcf8f6fb365bffa013ea96] <==
	I0318 12:41:15.557815       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0318 12:41:15.568159       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0318 12:41:15.568235       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0318 12:41:15.580247       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0318 12:41:15.580697       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-377562_4ad42192-d1e9-4160-b045-d00aff1a2329!
	I0318 12:41:15.580507       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a232d056-7528-4baa-ad52-332c3cb09aff", APIVersion:"v1", ResourceVersion:"418", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-377562_4ad42192-d1e9-4160-b045-d00aff1a2329 became leader
	I0318 12:41:15.681532       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-377562_4ad42192-d1e9-4160-b045-d00aff1a2329!
	I0318 12:41:55.701562       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I0318 12:41:55.701700       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    d27a069c-8c09-42d1-8b82-25d3a7693013 392 0 2024-03-18 12:41:14 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-03-18 12:41:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-85842bd8-5102-4c21-802c-23047154e3ea &PersistentVolumeClaim{ObjectMeta:{myclaim  default  85842bd8-5102-4c21-802c-23047154e3ea 597 0 2024-03-18 12:41:55 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-03-18 12:41:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-03-18 12:41:55 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I0318 12:41:55.702205       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-85842bd8-5102-4c21-802c-23047154e3ea" provisioned
	I0318 12:41:55.702218       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I0318 12:41:55.702228       1 volume_store.go:212] Trying to save persistentvolume "pvc-85842bd8-5102-4c21-802c-23047154e3ea"
	I0318 12:41:55.703829       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"85842bd8-5102-4c21-802c-23047154e3ea", APIVersion:"v1", ResourceVersion:"597", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I0318 12:41:55.797958       1 volume_store.go:219] persistentvolume "pvc-85842bd8-5102-4c21-802c-23047154e3ea" saved
	I0318 12:41:55.800370       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"85842bd8-5102-4c21-802c-23047154e3ea", APIVersion:"v1", ResourceVersion:"597", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-85842bd8-5102-4c21-802c-23047154e3ea
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-377562 -n functional-377562
helpers_test.go:261: (dbg) Run:  kubectl --context functional-377562 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount sp-pod dashboard-metrics-scraper-7fd5cb4ddc-7s4zx kubernetes-dashboard-8694d4445c-l2ftg
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-377562 describe pod busybox-mount sp-pod dashboard-metrics-scraper-7fd5cb4ddc-7s4zx kubernetes-dashboard-8694d4445c-l2ftg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context functional-377562 describe pod busybox-mount sp-pod dashboard-metrics-scraper-7fd5cb4ddc-7s4zx kubernetes-dashboard-8694d4445c-l2ftg: exit status 1 (89.280447ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-377562/192.168.39.224
	Start Time:       Mon, 18 Mar 2024 12:41:54 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  mount-munger:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-87ggw (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-87ggw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  4s    default-scheduler  Successfully assigned default/busybox-mount to functional-377562
	  Normal  Pulling    3s    kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-377562/192.168.39.224
	Start Time:       Mon, 18 Mar 2024 12:41:57 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dk6xt (ro)
	Conditions:
	  Type              Status
	  Initialized       True 
	  Ready             False 
	  ContainersReady   False 
	  PodScheduled      True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-dk6xt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  1s    default-scheduler  Successfully assigned default/sp-pod to functional-377562
	  Normal  Pulling    0s    kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-7fd5cb4ddc-7s4zx" not found
	Error from server (NotFound): pods "kubernetes-dashboard-8694d4445c-l2ftg" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context functional-377562 describe pod busybox-mount sp-pod dashboard-metrics-scraper-7fd5cb4ddc-7s4zx kubernetes-dashboard-8694d4445c-l2ftg: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (4.89s)
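Context for the post-mortem above: helpers_test.go first lists every pod whose phase is not Running (helpers_test.go:261) and then tries to describe them, which exits non-zero here because the two dashboard pods were already gone by the time of the describe call. The standalone Go sketch below reproduces only that first query; it is illustrative, not minikube source, and assumes kubectl is on PATH and that the "functional-377562" context from the log still exists.

	// nonrunningpods.go - minimal sketch of the harness's post-mortem pod query
	// (cf. helpers_test.go:261 above). Hypothetical helper, not minikube code.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Names of all pods, in every namespace, whose status.phase is not Running.
		out, err := exec.Command("kubectl",
			"--context", "functional-377562",
			"get", "po",
			"-o=jsonpath={.items[*].metadata.name}",
			"-A",
			"--field-selector=status.phase!=Running",
		).CombinedOutput()
		if err != nil {
			fmt.Printf("kubectl failed: %v\n%s\n", err, out)
			return
		}
		fmt.Printf("non-running pods: %v\n", strings.Fields(string(out)))
	}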

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (10.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 image load --daemon gcr.io/google-containers/addon-resizer:functional-377562 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-377562 image load --daemon gcr.io/google-containers/addon-resizer:functional-377562 --alsologtostderr: (8.432428431s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 image ls
functional_test.go:447: (dbg) Done: out/minikube-linux-amd64 -p functional-377562 image ls: (2.364509592s)
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-377562" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (10.80s)
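The failure above comes from the assertion at functional_test.go:442: after "image load --daemon", the test runs "image ls" and expects the addon-resizer tag to appear in the output. A minimal sketch of that check follows; it shells out to the same out/minikube-linux-amd64 binary and profile named in the log, but the helper itself is hypothetical and only illustrates the assertion, not minikube's actual test code.

	// imagepresent.go - illustrative check mirroring functional_test.go:442:
	// an image loaded with "image load --daemon" should be listed by "image ls".
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// imageLoaded reports whether tag appears in "minikube image ls" for the profile.
	func imageLoaded(minikubeBin, profile, tag string) (bool, error) {
		out, err := exec.Command(minikubeBin, "-p", profile, "image", "ls").CombinedOutput()
		if err != nil {
			return false, fmt.Errorf("image ls failed: %v\n%s", err, out)
		}
		return strings.Contains(string(out), tag), nil
	}

	func main() {
		ok, err := imageLoaded("out/minikube-linux-amd64", "functional-377562",
			"gcr.io/google-containers/addon-resizer:functional-377562")
		if err != nil {
			fmt.Println(err)
			return
		}
		if !ok {
			fmt.Println("expected image to be loaded into minikube but it is not there")
		}
	}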

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (142.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 node stop m02 -v=7 --alsologtostderr
E0318 12:49:08.749911 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/functional-377562/client.crt: no such file or directory
E0318 12:49:30.296979 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-328109 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.511097361s)

                                                
                                                
-- stdout --
	* Stopping node "ha-328109-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 12:47:47.443480 1129518 out.go:291] Setting OutFile to fd 1 ...
	I0318 12:47:47.443736 1129518 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:47:47.443745 1129518 out.go:304] Setting ErrFile to fd 2...
	I0318 12:47:47.443749 1129518 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:47:47.443924 1129518 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
	I0318 12:47:47.444266 1129518 mustload.go:65] Loading cluster: ha-328109
	I0318 12:47:47.444705 1129518 config.go:182] Loaded profile config "ha-328109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:47:47.444724 1129518 stop.go:39] StopHost: ha-328109-m02
	I0318 12:47:47.445137 1129518 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:47:47.445185 1129518 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:47:47.464087 1129518 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45127
	I0318 12:47:47.464603 1129518 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:47:47.465272 1129518 main.go:141] libmachine: Using API Version  1
	I0318 12:47:47.465302 1129518 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:47:47.465678 1129518 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:47:47.468573 1129518 out.go:177] * Stopping node "ha-328109-m02"  ...
	I0318 12:47:47.470017 1129518 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0318 12:47:47.470049 1129518 main.go:141] libmachine: (ha-328109-m02) Calling .DriverName
	I0318 12:47:47.470279 1129518 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0318 12:47:47.470301 1129518 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHHostname
	I0318 12:47:47.473291 1129518 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:47:47.473804 1129518 main.go:141] libmachine: (ha-328109-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:b0:42", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:43:48 +0000 UTC Type:0 Mac:52:54:00:8c:b0:42 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-328109-m02 Clientid:01:52:54:00:8c:b0:42}
	I0318 12:47:47.473859 1129518 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:47:47.474040 1129518 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHPort
	I0318 12:47:47.474294 1129518 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHKeyPath
	I0318 12:47:47.474516 1129518 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHUsername
	I0318 12:47:47.474680 1129518 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m02/id_rsa Username:docker}
	I0318 12:47:47.566256 1129518 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0318 12:47:47.627610 1129518 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0318 12:47:47.684409 1129518 main.go:141] libmachine: Stopping "ha-328109-m02"...
	I0318 12:47:47.684446 1129518 main.go:141] libmachine: (ha-328109-m02) Calling .GetState
	I0318 12:47:47.686143 1129518 main.go:141] libmachine: (ha-328109-m02) Calling .Stop
	I0318 12:47:47.690819 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 0/120
	I0318 12:47:48.692833 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 1/120
	I0318 12:47:49.694868 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 2/120
	I0318 12:47:50.696406 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 3/120
	I0318 12:47:51.697870 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 4/120
	I0318 12:47:52.699840 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 5/120
	I0318 12:47:53.701270 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 6/120
	I0318 12:47:54.702731 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 7/120
	I0318 12:47:55.704053 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 8/120
	I0318 12:47:56.705239 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 9/120
	I0318 12:47:57.707524 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 10/120
	I0318 12:47:58.708984 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 11/120
	I0318 12:47:59.710674 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 12/120
	I0318 12:48:00.711976 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 13/120
	I0318 12:48:01.713426 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 14/120
	I0318 12:48:02.715408 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 15/120
	I0318 12:48:03.716795 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 16/120
	I0318 12:48:04.718685 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 17/120
	I0318 12:48:05.720953 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 18/120
	I0318 12:48:06.722168 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 19/120
	I0318 12:48:07.724580 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 20/120
	I0318 12:48:08.726873 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 21/120
	I0318 12:48:09.728235 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 22/120
	I0318 12:48:10.729547 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 23/120
	I0318 12:48:11.731690 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 24/120
	I0318 12:48:12.733527 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 25/120
	I0318 12:48:13.735349 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 26/120
	I0318 12:48:14.736810 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 27/120
	I0318 12:48:15.738056 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 28/120
	I0318 12:48:16.740134 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 29/120
	I0318 12:48:17.742196 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 30/120
	I0318 12:48:18.743573 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 31/120
	I0318 12:48:19.745098 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 32/120
	I0318 12:48:20.747082 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 33/120
	I0318 12:48:21.748631 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 34/120
	I0318 12:48:22.750641 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 35/120
	I0318 12:48:23.752137 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 36/120
	I0318 12:48:24.754457 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 37/120
	I0318 12:48:25.756415 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 38/120
	I0318 12:48:26.757612 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 39/120
	I0318 12:48:27.759703 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 40/120
	I0318 12:48:28.762167 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 41/120
	I0318 12:48:29.763459 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 42/120
	I0318 12:48:30.764890 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 43/120
	I0318 12:48:31.766748 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 44/120
	I0318 12:48:32.768148 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 45/120
	I0318 12:48:33.769418 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 46/120
	I0318 12:48:34.770794 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 47/120
	I0318 12:48:35.772201 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 48/120
	I0318 12:48:36.773614 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 49/120
	I0318 12:48:37.775670 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 50/120
	I0318 12:48:38.776977 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 51/120
	I0318 12:48:39.778464 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 52/120
	I0318 12:48:40.780656 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 53/120
	I0318 12:48:41.782764 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 54/120
	I0318 12:48:42.784733 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 55/120
	I0318 12:48:43.786889 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 56/120
	I0318 12:48:44.788195 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 57/120
	I0318 12:48:45.789908 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 58/120
	I0318 12:48:46.791390 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 59/120
	I0318 12:48:47.793445 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 60/120
	I0318 12:48:48.794915 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 61/120
	I0318 12:48:49.797027 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 62/120
	I0318 12:48:50.799413 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 63/120
	I0318 12:48:51.801035 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 64/120
	I0318 12:48:52.802761 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 65/120
	I0318 12:48:53.804415 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 66/120
	I0318 12:48:54.805742 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 67/120
	I0318 12:48:55.807301 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 68/120
	I0318 12:48:56.808601 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 69/120
	I0318 12:48:57.810870 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 70/120
	I0318 12:48:58.812468 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 71/120
	I0318 12:48:59.813816 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 72/120
	I0318 12:49:00.815096 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 73/120
	I0318 12:49:01.816356 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 74/120
	I0318 12:49:02.817836 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 75/120
	I0318 12:49:03.819229 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 76/120
	I0318 12:49:04.820733 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 77/120
	I0318 12:49:05.822154 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 78/120
	I0318 12:49:06.823660 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 79/120
	I0318 12:49:07.825657 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 80/120
	I0318 12:49:08.827096 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 81/120
	I0318 12:49:09.828464 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 82/120
	I0318 12:49:10.829821 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 83/120
	I0318 12:49:11.831160 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 84/120
	I0318 12:49:12.833287 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 85/120
	I0318 12:49:13.834512 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 86/120
	I0318 12:49:14.835899 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 87/120
	I0318 12:49:15.837431 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 88/120
	I0318 12:49:16.839071 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 89/120
	I0318 12:49:17.841258 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 90/120
	I0318 12:49:18.843307 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 91/120
	I0318 12:49:19.845182 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 92/120
	I0318 12:49:20.846847 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 93/120
	I0318 12:49:21.848393 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 94/120
	I0318 12:49:22.850482 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 95/120
	I0318 12:49:23.851743 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 96/120
	I0318 12:49:24.853195 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 97/120
	I0318 12:49:25.854540 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 98/120
	I0318 12:49:26.856142 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 99/120
	I0318 12:49:27.858600 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 100/120
	I0318 12:49:28.860656 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 101/120
	I0318 12:49:29.863247 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 102/120
	I0318 12:49:30.865109 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 103/120
	I0318 12:49:31.867225 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 104/120
	I0318 12:49:32.868770 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 105/120
	I0318 12:49:33.870922 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 106/120
	I0318 12:49:34.872427 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 107/120
	I0318 12:49:35.873689 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 108/120
	I0318 12:49:36.874976 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 109/120
	I0318 12:49:37.877378 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 110/120
	I0318 12:49:38.878785 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 111/120
	I0318 12:49:39.880636 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 112/120
	I0318 12:49:40.882667 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 113/120
	I0318 12:49:41.884061 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 114/120
	I0318 12:49:42.885662 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 115/120
	I0318 12:49:43.887669 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 116/120
	I0318 12:49:44.889506 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 117/120
	I0318 12:49:45.890884 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 118/120
	I0318 12:49:46.892833 1129518 main.go:141] libmachine: (ha-328109-m02) Waiting for machine to stop 119/120
	I0318 12:49:47.894195 1129518 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0318 12:49:47.894431 1129518 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-328109 node stop m02 -v=7 --alsologtostderr": exit status 30
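The stderr above shows libmachine polling the VM state roughly once per second for 120 attempts ("Waiting for machine to stop N/120") and then giving up while the domain is still "Running", which is what surfaces as exit status 30. The sketch below illustrates that poll-with-timeout pattern only; it is not minikube's actual code, and the VM interface, function name, and "Stopped" state string are assumptions made for illustration.

	// stopVMWithTimeout sketches the retry loop implied by the log: ask the
	// driver to stop, then poll the state once per second up to maxAttempts
	// times, returning the final state in the error if the VM never stops.
	package machinesketch

	import (
		"fmt"
		"time"
	)

	// VM is a hypothetical stand-in for the driver's machine handle.
	type VM interface {
		Stop() error            // request shutdown (may be ignored by a wedged guest)
		State() (string, error) // e.g. "Running", "Stopped"
	}

	func stopVMWithTimeout(vm VM, maxAttempts int) error {
		if err := vm.Stop(); err != nil {
			return fmt.Errorf("requesting stop: %w", err)
		}
		for i := 0; i < maxAttempts; i++ {
			state, err := vm.State()
			if err != nil {
				return fmt.Errorf("querying state: %w", err)
			}
			if state == "Stopped" {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
			time.Sleep(time.Second)
		}
		state, _ := vm.State()
		return fmt.Errorf("unable to stop vm, current state %q", state)
	}

With maxAttempts set to 120 this reproduces the two-minute window seen above (12:47:47 to 12:49:47) before the "unable to stop vm, current state \"Running\"" error is reported.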
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-328109 status -v=7 --alsologtostderr: exit status 3 (19.072100951s)

                                                
                                                
-- stdout --
	ha-328109
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-328109-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-328109-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-328109-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 12:49:47.956846 1129814 out.go:291] Setting OutFile to fd 1 ...
	I0318 12:49:47.957028 1129814 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:49:47.957082 1129814 out.go:304] Setting ErrFile to fd 2...
	I0318 12:49:47.957097 1129814 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:49:47.957408 1129814 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
	I0318 12:49:47.957665 1129814 out.go:298] Setting JSON to false
	I0318 12:49:47.957724 1129814 mustload.go:65] Loading cluster: ha-328109
	I0318 12:49:47.957831 1129814 notify.go:220] Checking for updates...
	I0318 12:49:47.959338 1129814 config.go:182] Loaded profile config "ha-328109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:49:47.959426 1129814 status.go:255] checking status of ha-328109 ...
	I0318 12:49:47.960674 1129814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:49:47.960752 1129814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:49:47.978939 1129814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35361
	I0318 12:49:47.979484 1129814 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:49:47.980056 1129814 main.go:141] libmachine: Using API Version  1
	I0318 12:49:47.980083 1129814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:49:47.980599 1129814 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:49:47.980896 1129814 main.go:141] libmachine: (ha-328109) Calling .GetState
	I0318 12:49:47.982809 1129814 status.go:330] ha-328109 host status = "Running" (err=<nil>)
	I0318 12:49:47.982833 1129814 host.go:66] Checking if "ha-328109" exists ...
	I0318 12:49:47.983124 1129814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:49:47.983170 1129814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:49:47.998478 1129814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39577
	I0318 12:49:47.998935 1129814 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:49:47.999396 1129814 main.go:141] libmachine: Using API Version  1
	I0318 12:49:47.999421 1129814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:49:47.999771 1129814 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:49:47.999979 1129814 main.go:141] libmachine: (ha-328109) Calling .GetIP
	I0318 12:49:48.002793 1129814 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:49:48.003238 1129814 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:49:48.003262 1129814 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:49:48.003445 1129814 host.go:66] Checking if "ha-328109" exists ...
	I0318 12:49:48.003861 1129814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:49:48.003924 1129814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:49:48.019456 1129814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42273
	I0318 12:49:48.019869 1129814 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:49:48.020353 1129814 main.go:141] libmachine: Using API Version  1
	I0318 12:49:48.020379 1129814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:49:48.020708 1129814 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:49:48.020935 1129814 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:49:48.021129 1129814 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 12:49:48.021164 1129814 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:49:48.023904 1129814 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:49:48.024390 1129814 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:49:48.024424 1129814 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:49:48.024512 1129814 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:49:48.024716 1129814 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:49:48.024910 1129814 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:49:48.025057 1129814 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109/id_rsa Username:docker}
	I0318 12:49:48.116207 1129814 ssh_runner.go:195] Run: systemctl --version
	I0318 12:49:48.124073 1129814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 12:49:48.143141 1129814 kubeconfig.go:125] found "ha-328109" server: "https://192.168.39.254:8443"
	I0318 12:49:48.143170 1129814 api_server.go:166] Checking apiserver status ...
	I0318 12:49:48.143203 1129814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 12:49:48.159707 1129814 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1160/cgroup
	W0318 12:49:48.174806 1129814 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1160/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 12:49:48.174871 1129814 ssh_runner.go:195] Run: ls
	I0318 12:49:48.181192 1129814 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 12:49:48.186074 1129814 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 12:49:48.186099 1129814 status.go:422] ha-328109 apiserver status = Running (err=<nil>)
	I0318 12:49:48.186109 1129814 status.go:257] ha-328109 status: &{Name:ha-328109 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 12:49:48.186128 1129814 status.go:255] checking status of ha-328109-m02 ...
	I0318 12:49:48.186467 1129814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:49:48.186509 1129814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:49:48.202481 1129814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37587
	I0318 12:49:48.202956 1129814 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:49:48.203467 1129814 main.go:141] libmachine: Using API Version  1
	I0318 12:49:48.203486 1129814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:49:48.203809 1129814 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:49:48.204014 1129814 main.go:141] libmachine: (ha-328109-m02) Calling .GetState
	I0318 12:49:48.205669 1129814 status.go:330] ha-328109-m02 host status = "Running" (err=<nil>)
	I0318 12:49:48.205691 1129814 host.go:66] Checking if "ha-328109-m02" exists ...
	I0318 12:49:48.206083 1129814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:49:48.206178 1129814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:49:48.222283 1129814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32835
	I0318 12:49:48.222743 1129814 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:49:48.223246 1129814 main.go:141] libmachine: Using API Version  1
	I0318 12:49:48.223270 1129814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:49:48.223687 1129814 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:49:48.223865 1129814 main.go:141] libmachine: (ha-328109-m02) Calling .GetIP
	I0318 12:49:48.226470 1129814 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:49:48.226883 1129814 main.go:141] libmachine: (ha-328109-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:b0:42", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:43:48 +0000 UTC Type:0 Mac:52:54:00:8c:b0:42 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-328109-m02 Clientid:01:52:54:00:8c:b0:42}
	I0318 12:49:48.226912 1129814 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:49:48.227132 1129814 host.go:66] Checking if "ha-328109-m02" exists ...
	I0318 12:49:48.227449 1129814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:49:48.227498 1129814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:49:48.242313 1129814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39739
	I0318 12:49:48.242842 1129814 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:49:48.243329 1129814 main.go:141] libmachine: Using API Version  1
	I0318 12:49:48.243348 1129814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:49:48.243654 1129814 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:49:48.243823 1129814 main.go:141] libmachine: (ha-328109-m02) Calling .DriverName
	I0318 12:49:48.243997 1129814 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 12:49:48.244024 1129814 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHHostname
	I0318 12:49:48.246680 1129814 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:49:48.247136 1129814 main.go:141] libmachine: (ha-328109-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:b0:42", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:43:48 +0000 UTC Type:0 Mac:52:54:00:8c:b0:42 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-328109-m02 Clientid:01:52:54:00:8c:b0:42}
	I0318 12:49:48.247175 1129814 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:49:48.247325 1129814 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHPort
	I0318 12:49:48.247484 1129814 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHKeyPath
	I0318 12:49:48.247655 1129814 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHUsername
	I0318 12:49:48.247777 1129814 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m02/id_rsa Username:docker}
	W0318 12:50:06.572547 1129814 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.246:22: connect: no route to host
	W0318 12:50:06.572663 1129814 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.246:22: connect: no route to host
	E0318 12:50:06.572694 1129814 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.246:22: connect: no route to host
	I0318 12:50:06.572706 1129814 status.go:257] ha-328109-m02 status: &{Name:ha-328109-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0318 12:50:06.572731 1129814 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.246:22: connect: no route to host
	I0318 12:50:06.572738 1129814 status.go:255] checking status of ha-328109-m03 ...
	I0318 12:50:06.573074 1129814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:06.573136 1129814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:06.588064 1129814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40117
	I0318 12:50:06.588635 1129814 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:06.589228 1129814 main.go:141] libmachine: Using API Version  1
	I0318 12:50:06.589251 1129814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:06.589587 1129814 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:06.589794 1129814 main.go:141] libmachine: (ha-328109-m03) Calling .GetState
	I0318 12:50:06.591320 1129814 status.go:330] ha-328109-m03 host status = "Running" (err=<nil>)
	I0318 12:50:06.591346 1129814 host.go:66] Checking if "ha-328109-m03" exists ...
	I0318 12:50:06.591683 1129814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:06.591720 1129814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:06.606163 1129814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36819
	I0318 12:50:06.606577 1129814 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:06.607040 1129814 main.go:141] libmachine: Using API Version  1
	I0318 12:50:06.607098 1129814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:06.607414 1129814 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:06.607583 1129814 main.go:141] libmachine: (ha-328109-m03) Calling .GetIP
	I0318 12:50:06.610318 1129814 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:50:06.610855 1129814 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-328109-m03 Clientid:01:52:54:00:13:6e:ac}
	I0318 12:50:06.610889 1129814 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:50:06.611065 1129814 host.go:66] Checking if "ha-328109-m03" exists ...
	I0318 12:50:06.611388 1129814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:06.611431 1129814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:06.625674 1129814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46677
	I0318 12:50:06.626096 1129814 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:06.626563 1129814 main.go:141] libmachine: Using API Version  1
	I0318 12:50:06.626583 1129814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:06.626917 1129814 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:06.627106 1129814 main.go:141] libmachine: (ha-328109-m03) Calling .DriverName
	I0318 12:50:06.627308 1129814 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 12:50:06.627350 1129814 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHHostname
	I0318 12:50:06.630057 1129814 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:50:06.630459 1129814 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-328109-m03 Clientid:01:52:54:00:13:6e:ac}
	I0318 12:50:06.630492 1129814 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:50:06.630685 1129814 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHPort
	I0318 12:50:06.630856 1129814 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHKeyPath
	I0318 12:50:06.631029 1129814 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHUsername
	I0318 12:50:06.631139 1129814 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m03/id_rsa Username:docker}
	I0318 12:50:06.727448 1129814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 12:50:06.756032 1129814 kubeconfig.go:125] found "ha-328109" server: "https://192.168.39.254:8443"
	I0318 12:50:06.756062 1129814 api_server.go:166] Checking apiserver status ...
	I0318 12:50:06.756108 1129814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 12:50:06.779620 1129814 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1490/cgroup
	W0318 12:50:06.793533 1129814 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1490/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 12:50:06.793604 1129814 ssh_runner.go:195] Run: ls
	I0318 12:50:06.798414 1129814 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 12:50:06.804571 1129814 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 12:50:06.804598 1129814 status.go:422] ha-328109-m03 apiserver status = Running (err=<nil>)
	I0318 12:50:06.804609 1129814 status.go:257] ha-328109-m03 status: &{Name:ha-328109-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 12:50:06.804637 1129814 status.go:255] checking status of ha-328109-m04 ...
	I0318 12:50:06.804976 1129814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:06.805014 1129814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:06.820153 1129814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35555
	I0318 12:50:06.820572 1129814 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:06.821104 1129814 main.go:141] libmachine: Using API Version  1
	I0318 12:50:06.821140 1129814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:06.821466 1129814 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:06.821685 1129814 main.go:141] libmachine: (ha-328109-m04) Calling .GetState
	I0318 12:50:06.823170 1129814 status.go:330] ha-328109-m04 host status = "Running" (err=<nil>)
	I0318 12:50:06.823189 1129814 host.go:66] Checking if "ha-328109-m04" exists ...
	I0318 12:50:06.823480 1129814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:06.823523 1129814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:06.838398 1129814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36097
	I0318 12:50:06.838758 1129814 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:06.839196 1129814 main.go:141] libmachine: Using API Version  1
	I0318 12:50:06.839218 1129814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:06.839595 1129814 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:06.839794 1129814 main.go:141] libmachine: (ha-328109-m04) Calling .GetIP
	I0318 12:50:06.842636 1129814 main.go:141] libmachine: (ha-328109-m04) DBG | domain ha-328109-m04 has defined MAC address 52:54:00:07:cc:71 in network mk-ha-328109
	I0318 12:50:06.843081 1129814 main.go:141] libmachine: (ha-328109-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cc:71", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:47:00 +0000 UTC Type:0 Mac:52:54:00:07:cc:71 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-328109-m04 Clientid:01:52:54:00:07:cc:71}
	I0318 12:50:06.843119 1129814 main.go:141] libmachine: (ha-328109-m04) DBG | domain ha-328109-m04 has defined IP address 192.168.39.48 and MAC address 52:54:00:07:cc:71 in network mk-ha-328109
	I0318 12:50:06.843253 1129814 host.go:66] Checking if "ha-328109-m04" exists ...
	I0318 12:50:06.843548 1129814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:06.843601 1129814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:06.858280 1129814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44067
	I0318 12:50:06.858739 1129814 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:06.859296 1129814 main.go:141] libmachine: Using API Version  1
	I0318 12:50:06.859323 1129814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:06.859702 1129814 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:06.859890 1129814 main.go:141] libmachine: (ha-328109-m04) Calling .DriverName
	I0318 12:50:06.860089 1129814 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 12:50:06.860107 1129814 main.go:141] libmachine: (ha-328109-m04) Calling .GetSSHHostname
	I0318 12:50:06.862971 1129814 main.go:141] libmachine: (ha-328109-m04) DBG | domain ha-328109-m04 has defined MAC address 52:54:00:07:cc:71 in network mk-ha-328109
	I0318 12:50:06.863351 1129814 main.go:141] libmachine: (ha-328109-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cc:71", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:47:00 +0000 UTC Type:0 Mac:52:54:00:07:cc:71 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-328109-m04 Clientid:01:52:54:00:07:cc:71}
	I0318 12:50:06.863374 1129814 main.go:141] libmachine: (ha-328109-m04) DBG | domain ha-328109-m04 has defined IP address 192.168.39.48 and MAC address 52:54:00:07:cc:71 in network mk-ha-328109
	I0318 12:50:06.863489 1129814 main.go:141] libmachine: (ha-328109-m04) Calling .GetSSHPort
	I0318 12:50:06.863647 1129814 main.go:141] libmachine: (ha-328109-m04) Calling .GetSSHKeyPath
	I0318 12:50:06.863778 1129814 main.go:141] libmachine: (ha-328109-m04) Calling .GetSSHUsername
	I0318 12:50:06.863949 1129814 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m04/id_rsa Username:docker}
	I0318 12:50:06.946532 1129814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 12:50:06.965967 1129814 status.go:257] ha-328109-m04 status: &{Name:ha-328109-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-328109 status -v=7 --alsologtostderr" : exit status 3
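The status failure above is a transport error rather than a crash of the checker: once SSH to m02 stops answering ("dial tcp 192.168.39.246:22: connect: no route to host"), that node is reported as Host:Error with kubelet and apiserver Nonexistent, while the reachable nodes are probed with "df -h /var", "systemctl is-active kubelet", and an HTTPS GET against /healthz. The sketch below shows a minimal version of that healthz probe; the function name and the InsecureSkipVerify shortcut are assumptions for illustration, not minikube's implementation.

	// checkAPIServerHealthz sketches the probe the log reports as
	// "Checking apiserver healthz at https://192.168.39.254:8443/healthz":
	// GET <endpoint>/healthz and treat HTTP 200 as "apiserver: Running".
	package statussketch

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func checkAPIServerHealthz(endpoint string) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The test cluster serves a self-signed certificate; a real client
				// would load the cluster CA from the kubeconfig instead.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(endpoint + "/healthz")
		if err != nil {
			return fmt.Errorf("apiserver unreachable: %w", err)
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %d", resp.StatusCode)
		}
		return nil
	}

Called as checkAPIServerHealthz("https://192.168.39.254:8443"), a probe like this would keep succeeding as long as some apiserver behind the HA virtual IP answers, which is consistent with ha-328109 and ha-328109-m03 still reporting "apiserver: Running" after m02 becomes unreachable.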
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-328109 -n ha-328109
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-328109 logs -n 25: (1.662063745s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-328109 cp ha-328109-m03:/home/docker/cp-test.txt                              | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1988805859/001/cp-test_ha-328109-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-328109 ssh -n                                                                 | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-328109 cp ha-328109-m03:/home/docker/cp-test.txt                              | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109:/home/docker/cp-test_ha-328109-m03_ha-328109.txt                       |           |         |         |                     |                     |
	| ssh     | ha-328109 ssh -n                                                                 | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-328109 ssh -n ha-328109 sudo cat                                              | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | /home/docker/cp-test_ha-328109-m03_ha-328109.txt                                 |           |         |         |                     |                     |
	| cp      | ha-328109 cp ha-328109-m03:/home/docker/cp-test.txt                              | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109-m02:/home/docker/cp-test_ha-328109-m03_ha-328109-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-328109 ssh -n                                                                 | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-328109 ssh -n ha-328109-m02 sudo cat                                          | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | /home/docker/cp-test_ha-328109-m03_ha-328109-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-328109 cp ha-328109-m03:/home/docker/cp-test.txt                              | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109-m04:/home/docker/cp-test_ha-328109-m03_ha-328109-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-328109 ssh -n                                                                 | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-328109 ssh -n ha-328109-m04 sudo cat                                          | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | /home/docker/cp-test_ha-328109-m03_ha-328109-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-328109 cp testdata/cp-test.txt                                                | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-328109 ssh -n                                                                 | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-328109 cp ha-328109-m04:/home/docker/cp-test.txt                              | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1988805859/001/cp-test_ha-328109-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-328109 ssh -n                                                                 | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-328109 cp ha-328109-m04:/home/docker/cp-test.txt                              | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109:/home/docker/cp-test_ha-328109-m04_ha-328109.txt                       |           |         |         |                     |                     |
	| ssh     | ha-328109 ssh -n                                                                 | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-328109 ssh -n ha-328109 sudo cat                                              | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | /home/docker/cp-test_ha-328109-m04_ha-328109.txt                                 |           |         |         |                     |                     |
	| cp      | ha-328109 cp ha-328109-m04:/home/docker/cp-test.txt                              | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109-m02:/home/docker/cp-test_ha-328109-m04_ha-328109-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-328109 ssh -n                                                                 | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-328109 ssh -n ha-328109-m02 sudo cat                                          | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | /home/docker/cp-test_ha-328109-m04_ha-328109-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-328109 cp ha-328109-m04:/home/docker/cp-test.txt                              | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109-m03:/home/docker/cp-test_ha-328109-m04_ha-328109-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-328109 ssh -n                                                                 | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-328109 ssh -n ha-328109-m03 sudo cat                                          | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | /home/docker/cp-test_ha-328109-m04_ha-328109-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-328109 node stop m02 -v=7                                                     | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 12:42:32
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 12:42:32.893274 1125718 out.go:291] Setting OutFile to fd 1 ...
	I0318 12:42:32.893417 1125718 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:42:32.893429 1125718 out.go:304] Setting ErrFile to fd 2...
	I0318 12:42:32.893436 1125718 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:42:32.893642 1125718 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
	I0318 12:42:32.894210 1125718 out.go:298] Setting JSON to false
	I0318 12:42:32.895115 1125718 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":15900,"bootTime":1710749853,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 12:42:32.895185 1125718 start.go:139] virtualization: kvm guest
	I0318 12:42:32.897324 1125718 out.go:177] * [ha-328109] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 12:42:32.899161 1125718 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 12:42:32.899200 1125718 notify.go:220] Checking for updates...
	I0318 12:42:32.900581 1125718 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 12:42:32.902066 1125718 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 12:42:32.903366 1125718 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18429-1106816/.minikube
	I0318 12:42:32.904691 1125718 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 12:42:32.906034 1125718 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 12:42:32.907495 1125718 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 12:42:32.941434 1125718 out.go:177] * Using the kvm2 driver based on user configuration
	I0318 12:42:32.942748 1125718 start.go:297] selected driver: kvm2
	I0318 12:42:32.942769 1125718 start.go:901] validating driver "kvm2" against <nil>
	I0318 12:42:32.942782 1125718 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 12:42:32.943513 1125718 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 12:42:32.943590 1125718 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18429-1106816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 12:42:32.958284 1125718 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 12:42:32.958383 1125718 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 12:42:32.958600 1125718 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 12:42:32.958650 1125718 cni.go:84] Creating CNI manager for ""
	I0318 12:42:32.958664 1125718 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0318 12:42:32.958669 1125718 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0318 12:42:32.958731 1125718 start.go:340] cluster config:
	{Name:ha-328109 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-328109 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 12:42:32.958820 1125718 iso.go:125] acquiring lock: {Name:mke5f9989ad60de6f54f25c411af7da9f3932a4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 12:42:32.960621 1125718 out.go:177] * Starting "ha-328109" primary control-plane node in "ha-328109" cluster
	I0318 12:42:32.961853 1125718 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 12:42:32.961885 1125718 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0318 12:42:32.961894 1125718 cache.go:56] Caching tarball of preloaded images
	I0318 12:42:32.961983 1125718 preload.go:173] Found /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 12:42:32.961996 1125718 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0318 12:42:32.962310 1125718 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/config.json ...
	I0318 12:42:32.962340 1125718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/config.json: {Name:mk6731ec1f8b636473e57fa4c832d7a65e6cf7d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:42:32.962505 1125718 start.go:360] acquireMachinesLock for ha-328109: {Name:mk0b1a2e71faf079d0c16c4e1393bdff17be3dfd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 12:42:32.962541 1125718 start.go:364] duration metric: took 19.07µs to acquireMachinesLock for "ha-328109"
	I0318 12:42:32.962564 1125718 start.go:93] Provisioning new machine with config: &{Name:ha-328109 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-328109 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 12:42:32.962628 1125718 start.go:125] createHost starting for "" (driver="kvm2")
	I0318 12:42:32.964262 1125718 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 12:42:32.964426 1125718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:42:32.964476 1125718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:42:32.978932 1125718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34647
	I0318 12:42:32.979368 1125718 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:42:32.979906 1125718 main.go:141] libmachine: Using API Version  1
	I0318 12:42:32.979928 1125718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:42:32.980370 1125718 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:42:32.980585 1125718 main.go:141] libmachine: (ha-328109) Calling .GetMachineName
	I0318 12:42:32.980747 1125718 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:42:32.980877 1125718 start.go:159] libmachine.API.Create for "ha-328109" (driver="kvm2")
	I0318 12:42:32.980908 1125718 client.go:168] LocalClient.Create starting
	I0318 12:42:32.980948 1125718 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem
	I0318 12:42:32.980983 1125718 main.go:141] libmachine: Decoding PEM data...
	I0318 12:42:32.980999 1125718 main.go:141] libmachine: Parsing certificate...
	I0318 12:42:32.981065 1125718 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem
	I0318 12:42:32.981083 1125718 main.go:141] libmachine: Decoding PEM data...
	I0318 12:42:32.981095 1125718 main.go:141] libmachine: Parsing certificate...
	I0318 12:42:32.981108 1125718 main.go:141] libmachine: Running pre-create checks...
	I0318 12:42:32.981122 1125718 main.go:141] libmachine: (ha-328109) Calling .PreCreateCheck
	I0318 12:42:32.981478 1125718 main.go:141] libmachine: (ha-328109) Calling .GetConfigRaw
	I0318 12:42:32.981854 1125718 main.go:141] libmachine: Creating machine...
	I0318 12:42:32.981868 1125718 main.go:141] libmachine: (ha-328109) Calling .Create
	I0318 12:42:32.981979 1125718 main.go:141] libmachine: (ha-328109) Creating KVM machine...
	I0318 12:42:32.983166 1125718 main.go:141] libmachine: (ha-328109) DBG | found existing default KVM network
	I0318 12:42:32.983871 1125718 main.go:141] libmachine: (ha-328109) DBG | I0318 12:42:32.983700 1125741 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0318 12:42:32.983923 1125718 main.go:141] libmachine: (ha-328109) DBG | created network xml: 
	I0318 12:42:32.983949 1125718 main.go:141] libmachine: (ha-328109) DBG | <network>
	I0318 12:42:32.983964 1125718 main.go:141] libmachine: (ha-328109) DBG |   <name>mk-ha-328109</name>
	I0318 12:42:32.983973 1125718 main.go:141] libmachine: (ha-328109) DBG |   <dns enable='no'/>
	I0318 12:42:32.983986 1125718 main.go:141] libmachine: (ha-328109) DBG |   
	I0318 12:42:32.983999 1125718 main.go:141] libmachine: (ha-328109) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0318 12:42:32.984011 1125718 main.go:141] libmachine: (ha-328109) DBG |     <dhcp>
	I0318 12:42:32.984028 1125718 main.go:141] libmachine: (ha-328109) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0318 12:42:32.984039 1125718 main.go:141] libmachine: (ha-328109) DBG |     </dhcp>
	I0318 12:42:32.984050 1125718 main.go:141] libmachine: (ha-328109) DBG |   </ip>
	I0318 12:42:32.984059 1125718 main.go:141] libmachine: (ha-328109) DBG |   
	I0318 12:42:32.984066 1125718 main.go:141] libmachine: (ha-328109) DBG | </network>
	I0318 12:42:32.984072 1125718 main.go:141] libmachine: (ha-328109) DBG | 
	I0318 12:42:32.989522 1125718 main.go:141] libmachine: (ha-328109) DBG | trying to create private KVM network mk-ha-328109 192.168.39.0/24...
	I0318 12:42:33.054160 1125718 main.go:141] libmachine: (ha-328109) DBG | private KVM network mk-ha-328109 192.168.39.0/24 created
	I0318 12:42:33.054199 1125718 main.go:141] libmachine: (ha-328109) Setting up store path in /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109 ...
	I0318 12:42:33.054217 1125718 main.go:141] libmachine: (ha-328109) DBG | I0318 12:42:33.054107 1125741 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18429-1106816/.minikube
	I0318 12:42:33.054249 1125718 main.go:141] libmachine: (ha-328109) Building disk image from file:///home/jenkins/minikube-integration/18429-1106816/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso
	I0318 12:42:33.054268 1125718 main.go:141] libmachine: (ha-328109) Downloading /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18429-1106816/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0318 12:42:33.312864 1125718 main.go:141] libmachine: (ha-328109) DBG | I0318 12:42:33.312752 1125741 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109/id_rsa...
	I0318 12:42:33.446094 1125718 main.go:141] libmachine: (ha-328109) DBG | I0318 12:42:33.445958 1125741 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109/ha-328109.rawdisk...
	I0318 12:42:33.446128 1125718 main.go:141] libmachine: (ha-328109) DBG | Writing magic tar header
	I0318 12:42:33.446138 1125718 main.go:141] libmachine: (ha-328109) DBG | Writing SSH key tar header
	I0318 12:42:33.446176 1125718 main.go:141] libmachine: (ha-328109) DBG | I0318 12:42:33.446142 1125741 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109 ...
	I0318 12:42:33.446270 1125718 main.go:141] libmachine: (ha-328109) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109
	I0318 12:42:33.446306 1125718 main.go:141] libmachine: (ha-328109) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines
	I0318 12:42:33.446332 1125718 main.go:141] libmachine: (ha-328109) Setting executable bit set on /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109 (perms=drwx------)
	I0318 12:42:33.446350 1125718 main.go:141] libmachine: (ha-328109) Setting executable bit set on /home/jenkins/minikube-integration/18429-1106816/.minikube/machines (perms=drwxr-xr-x)
	I0318 12:42:33.446362 1125718 main.go:141] libmachine: (ha-328109) Setting executable bit set on /home/jenkins/minikube-integration/18429-1106816/.minikube (perms=drwxr-xr-x)
	I0318 12:42:33.446372 1125718 main.go:141] libmachine: (ha-328109) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18429-1106816/.minikube
	I0318 12:42:33.446381 1125718 main.go:141] libmachine: (ha-328109) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18429-1106816
	I0318 12:42:33.446395 1125718 main.go:141] libmachine: (ha-328109) Setting executable bit set on /home/jenkins/minikube-integration/18429-1106816 (perms=drwxrwxr-x)
	I0318 12:42:33.446409 1125718 main.go:141] libmachine: (ha-328109) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0318 12:42:33.446430 1125718 main.go:141] libmachine: (ha-328109) DBG | Checking permissions on dir: /home/jenkins
	I0318 12:42:33.446442 1125718 main.go:141] libmachine: (ha-328109) DBG | Checking permissions on dir: /home
	I0318 12:42:33.446455 1125718 main.go:141] libmachine: (ha-328109) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0318 12:42:33.446471 1125718 main.go:141] libmachine: (ha-328109) DBG | Skipping /home - not owner
	I0318 12:42:33.446483 1125718 main.go:141] libmachine: (ha-328109) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0318 12:42:33.446495 1125718 main.go:141] libmachine: (ha-328109) Creating domain...
	I0318 12:42:33.447565 1125718 main.go:141] libmachine: (ha-328109) define libvirt domain using xml: 
	I0318 12:42:33.447591 1125718 main.go:141] libmachine: (ha-328109) <domain type='kvm'>
	I0318 12:42:33.447617 1125718 main.go:141] libmachine: (ha-328109)   <name>ha-328109</name>
	I0318 12:42:33.447635 1125718 main.go:141] libmachine: (ha-328109)   <memory unit='MiB'>2200</memory>
	I0318 12:42:33.447643 1125718 main.go:141] libmachine: (ha-328109)   <vcpu>2</vcpu>
	I0318 12:42:33.447649 1125718 main.go:141] libmachine: (ha-328109)   <features>
	I0318 12:42:33.447658 1125718 main.go:141] libmachine: (ha-328109)     <acpi/>
	I0318 12:42:33.447668 1125718 main.go:141] libmachine: (ha-328109)     <apic/>
	I0318 12:42:33.447675 1125718 main.go:141] libmachine: (ha-328109)     <pae/>
	I0318 12:42:33.447686 1125718 main.go:141] libmachine: (ha-328109)     
	I0318 12:42:33.447698 1125718 main.go:141] libmachine: (ha-328109)   </features>
	I0318 12:42:33.447712 1125718 main.go:141] libmachine: (ha-328109)   <cpu mode='host-passthrough'>
	I0318 12:42:33.447723 1125718 main.go:141] libmachine: (ha-328109)   
	I0318 12:42:33.447732 1125718 main.go:141] libmachine: (ha-328109)   </cpu>
	I0318 12:42:33.447741 1125718 main.go:141] libmachine: (ha-328109)   <os>
	I0318 12:42:33.447763 1125718 main.go:141] libmachine: (ha-328109)     <type>hvm</type>
	I0318 12:42:33.447773 1125718 main.go:141] libmachine: (ha-328109)     <boot dev='cdrom'/>
	I0318 12:42:33.447786 1125718 main.go:141] libmachine: (ha-328109)     <boot dev='hd'/>
	I0318 12:42:33.447801 1125718 main.go:141] libmachine: (ha-328109)     <bootmenu enable='no'/>
	I0318 12:42:33.447810 1125718 main.go:141] libmachine: (ha-328109)   </os>
	I0318 12:42:33.447818 1125718 main.go:141] libmachine: (ha-328109)   <devices>
	I0318 12:42:33.447828 1125718 main.go:141] libmachine: (ha-328109)     <disk type='file' device='cdrom'>
	I0318 12:42:33.447840 1125718 main.go:141] libmachine: (ha-328109)       <source file='/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109/boot2docker.iso'/>
	I0318 12:42:33.447850 1125718 main.go:141] libmachine: (ha-328109)       <target dev='hdc' bus='scsi'/>
	I0318 12:42:33.447905 1125718 main.go:141] libmachine: (ha-328109)       <readonly/>
	I0318 12:42:33.447937 1125718 main.go:141] libmachine: (ha-328109)     </disk>
	I0318 12:42:33.447950 1125718 main.go:141] libmachine: (ha-328109)     <disk type='file' device='disk'>
	I0318 12:42:33.447959 1125718 main.go:141] libmachine: (ha-328109)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0318 12:42:33.447973 1125718 main.go:141] libmachine: (ha-328109)       <source file='/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109/ha-328109.rawdisk'/>
	I0318 12:42:33.447981 1125718 main.go:141] libmachine: (ha-328109)       <target dev='hda' bus='virtio'/>
	I0318 12:42:33.447989 1125718 main.go:141] libmachine: (ha-328109)     </disk>
	I0318 12:42:33.448002 1125718 main.go:141] libmachine: (ha-328109)     <interface type='network'>
	I0318 12:42:33.448015 1125718 main.go:141] libmachine: (ha-328109)       <source network='mk-ha-328109'/>
	I0318 12:42:33.448026 1125718 main.go:141] libmachine: (ha-328109)       <model type='virtio'/>
	I0318 12:42:33.448042 1125718 main.go:141] libmachine: (ha-328109)     </interface>
	I0318 12:42:33.448061 1125718 main.go:141] libmachine: (ha-328109)     <interface type='network'>
	I0318 12:42:33.448074 1125718 main.go:141] libmachine: (ha-328109)       <source network='default'/>
	I0318 12:42:33.448084 1125718 main.go:141] libmachine: (ha-328109)       <model type='virtio'/>
	I0318 12:42:33.448093 1125718 main.go:141] libmachine: (ha-328109)     </interface>
	I0318 12:42:33.448103 1125718 main.go:141] libmachine: (ha-328109)     <serial type='pty'>
	I0318 12:42:33.448115 1125718 main.go:141] libmachine: (ha-328109)       <target port='0'/>
	I0318 12:42:33.448125 1125718 main.go:141] libmachine: (ha-328109)     </serial>
	I0318 12:42:33.448154 1125718 main.go:141] libmachine: (ha-328109)     <console type='pty'>
	I0318 12:42:33.448188 1125718 main.go:141] libmachine: (ha-328109)       <target type='serial' port='0'/>
	I0318 12:42:33.448202 1125718 main.go:141] libmachine: (ha-328109)     </console>
	I0318 12:42:33.448214 1125718 main.go:141] libmachine: (ha-328109)     <rng model='virtio'>
	I0318 12:42:33.448228 1125718 main.go:141] libmachine: (ha-328109)       <backend model='random'>/dev/random</backend>
	I0318 12:42:33.448238 1125718 main.go:141] libmachine: (ha-328109)     </rng>
	I0318 12:42:33.448246 1125718 main.go:141] libmachine: (ha-328109)     
	I0318 12:42:33.448255 1125718 main.go:141] libmachine: (ha-328109)     
	I0318 12:42:33.448261 1125718 main.go:141] libmachine: (ha-328109)   </devices>
	I0318 12:42:33.448267 1125718 main.go:141] libmachine: (ha-328109) </domain>
	I0318 12:42:33.448280 1125718 main.go:141] libmachine: (ha-328109) 
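The XML dump above is the domain definition the kvm2 driver hands to libvirt before booting the node. For anyone reproducing this step by hand, a minimal Go sketch that defines and boots an equivalent domain by shelling out to virsh is shown below; the XML path and error handling are illustrative, not minikube's actual code, which talks to libvirt directly.

    // Sketch: register a libvirt domain from an XML file and boot it,
    // mirroring what the kvm2 driver does through the libvirt API.
    // The XML path and domain name are hypothetical.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func defineAndStart(xmlPath, domain string) error {
        // "virsh define" registers the domain from its XML description.
        if out, err := exec.Command("virsh", "define", xmlPath).CombinedOutput(); err != nil {
            return fmt.Errorf("define failed: %v: %s", err, out)
        }
        // "virsh start" boots the freshly defined domain.
        if out, err := exec.Command("virsh", "start", domain).CombinedOutput(); err != nil {
            return fmt.Errorf("start failed: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        if err := defineAndStart("/tmp/ha-328109.xml", "ha-328109"); err != nil {
            fmt.Println(err)
        }
    }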
	I0318 12:42:33.452728 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:22:36:b5 in network default
	I0318 12:42:33.453339 1125718 main.go:141] libmachine: (ha-328109) Ensuring networks are active...
	I0318 12:42:33.453363 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:33.454088 1125718 main.go:141] libmachine: (ha-328109) Ensuring network default is active
	I0318 12:42:33.454456 1125718 main.go:141] libmachine: (ha-328109) Ensuring network mk-ha-328109 is active
	I0318 12:42:33.454922 1125718 main.go:141] libmachine: (ha-328109) Getting domain xml...
	I0318 12:42:33.455681 1125718 main.go:141] libmachine: (ha-328109) Creating domain...
	I0318 12:42:34.615763 1125718 main.go:141] libmachine: (ha-328109) Waiting to get IP...
	I0318 12:42:34.616795 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:34.617216 1125718 main.go:141] libmachine: (ha-328109) DBG | unable to find current IP address of domain ha-328109 in network mk-ha-328109
	I0318 12:42:34.617257 1125718 main.go:141] libmachine: (ha-328109) DBG | I0318 12:42:34.617201 1125741 retry.go:31] will retry after 279.162867ms: waiting for machine to come up
	I0318 12:42:34.897719 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:34.898195 1125718 main.go:141] libmachine: (ha-328109) DBG | unable to find current IP address of domain ha-328109 in network mk-ha-328109
	I0318 12:42:34.898218 1125718 main.go:141] libmachine: (ha-328109) DBG | I0318 12:42:34.898166 1125741 retry.go:31] will retry after 243.384633ms: waiting for machine to come up
	I0318 12:42:35.143663 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:35.144109 1125718 main.go:141] libmachine: (ha-328109) DBG | unable to find current IP address of domain ha-328109 in network mk-ha-328109
	I0318 12:42:35.144136 1125718 main.go:141] libmachine: (ha-328109) DBG | I0318 12:42:35.144064 1125741 retry.go:31] will retry after 336.699426ms: waiting for machine to come up
	I0318 12:42:35.482738 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:35.483145 1125718 main.go:141] libmachine: (ha-328109) DBG | unable to find current IP address of domain ha-328109 in network mk-ha-328109
	I0318 12:42:35.483175 1125718 main.go:141] libmachine: (ha-328109) DBG | I0318 12:42:35.483112 1125741 retry.go:31] will retry after 562.433686ms: waiting for machine to come up
	I0318 12:42:36.046830 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:36.047255 1125718 main.go:141] libmachine: (ha-328109) DBG | unable to find current IP address of domain ha-328109 in network mk-ha-328109
	I0318 12:42:36.047286 1125718 main.go:141] libmachine: (ha-328109) DBG | I0318 12:42:36.047199 1125741 retry.go:31] will retry after 503.93378ms: waiting for machine to come up
	I0318 12:42:36.553139 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:36.554216 1125718 main.go:141] libmachine: (ha-328109) DBG | unable to find current IP address of domain ha-328109 in network mk-ha-328109
	I0318 12:42:36.554265 1125718 main.go:141] libmachine: (ha-328109) DBG | I0318 12:42:36.554160 1125741 retry.go:31] will retry after 939.355373ms: waiting for machine to come up
	I0318 12:42:37.494846 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:37.495264 1125718 main.go:141] libmachine: (ha-328109) DBG | unable to find current IP address of domain ha-328109 in network mk-ha-328109
	I0318 12:42:37.495312 1125718 main.go:141] libmachine: (ha-328109) DBG | I0318 12:42:37.495221 1125741 retry.go:31] will retry after 1.103667704s: waiting for machine to come up
	I0318 12:42:38.599992 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:38.600441 1125718 main.go:141] libmachine: (ha-328109) DBG | unable to find current IP address of domain ha-328109 in network mk-ha-328109
	I0318 12:42:38.600467 1125718 main.go:141] libmachine: (ha-328109) DBG | I0318 12:42:38.600389 1125741 retry.go:31] will retry after 1.276924143s: waiting for machine to come up
	I0318 12:42:39.878845 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:39.879292 1125718 main.go:141] libmachine: (ha-328109) DBG | unable to find current IP address of domain ha-328109 in network mk-ha-328109
	I0318 12:42:39.879325 1125718 main.go:141] libmachine: (ha-328109) DBG | I0318 12:42:39.879245 1125741 retry.go:31] will retry after 1.648278378s: waiting for machine to come up
	I0318 12:42:41.530396 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:41.530841 1125718 main.go:141] libmachine: (ha-328109) DBG | unable to find current IP address of domain ha-328109 in network mk-ha-328109
	I0318 12:42:41.530871 1125718 main.go:141] libmachine: (ha-328109) DBG | I0318 12:42:41.530780 1125741 retry.go:31] will retry after 1.745965009s: waiting for machine to come up
	I0318 12:42:43.278652 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:43.279091 1125718 main.go:141] libmachine: (ha-328109) DBG | unable to find current IP address of domain ha-328109 in network mk-ha-328109
	I0318 12:42:43.279137 1125718 main.go:141] libmachine: (ha-328109) DBG | I0318 12:42:43.279052 1125741 retry.go:31] will retry after 2.777428365s: waiting for machine to come up
	I0318 12:42:46.058676 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:46.059168 1125718 main.go:141] libmachine: (ha-328109) DBG | unable to find current IP address of domain ha-328109 in network mk-ha-328109
	I0318 12:42:46.059194 1125718 main.go:141] libmachine: (ha-328109) DBG | I0318 12:42:46.059133 1125741 retry.go:31] will retry after 3.40869009s: waiting for machine to come up
	I0318 12:42:49.469432 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:49.469877 1125718 main.go:141] libmachine: (ha-328109) DBG | unable to find current IP address of domain ha-328109 in network mk-ha-328109
	I0318 12:42:49.469989 1125718 main.go:141] libmachine: (ha-328109) DBG | I0318 12:42:49.469870 1125741 retry.go:31] will retry after 3.566417297s: waiting for machine to come up
	I0318 12:42:53.037358 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:53.037800 1125718 main.go:141] libmachine: (ha-328109) DBG | unable to find current IP address of domain ha-328109 in network mk-ha-328109
	I0318 12:42:53.037841 1125718 main.go:141] libmachine: (ha-328109) DBG | I0318 12:42:53.037762 1125741 retry.go:31] will retry after 5.033131353s: waiting for machine to come up
	I0318 12:42:58.072520 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:58.072957 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has current primary IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:58.072972 1125718 main.go:141] libmachine: (ha-328109) Found IP for machine: 192.168.39.253
	I0318 12:42:58.072980 1125718 main.go:141] libmachine: (ha-328109) Reserving static IP address...
	I0318 12:42:58.073514 1125718 main.go:141] libmachine: (ha-328109) DBG | unable to find host DHCP lease matching {name: "ha-328109", mac: "52:54:00:53:6b:a9", ip: "192.168.39.253"} in network mk-ha-328109
	I0318 12:42:58.145837 1125718 main.go:141] libmachine: (ha-328109) DBG | Getting to WaitForSSH function...
	I0318 12:42:58.145872 1125718 main.go:141] libmachine: (ha-328109) Reserved static IP address: 192.168.39.253
	I0318 12:42:58.145885 1125718 main.go:141] libmachine: (ha-328109) Waiting for SSH to be available...
	I0318 12:42:58.148648 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:58.149051 1125718 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:minikube Clientid:01:52:54:00:53:6b:a9}
	I0318 12:42:58.149075 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:58.149216 1125718 main.go:141] libmachine: (ha-328109) DBG | Using SSH client type: external
	I0318 12:42:58.149241 1125718 main.go:141] libmachine: (ha-328109) DBG | Using SSH private key: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109/id_rsa (-rw-------)
	I0318 12:42:58.149299 1125718 main.go:141] libmachine: (ha-328109) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.253 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 12:42:58.149323 1125718 main.go:141] libmachine: (ha-328109) DBG | About to run SSH command:
	I0318 12:42:58.149346 1125718 main.go:141] libmachine: (ha-328109) DBG | exit 0
	I0318 12:42:58.273026 1125718 main.go:141] libmachine: (ha-328109) DBG | SSH cmd err, output: <nil>: 
	I0318 12:42:58.273298 1125718 main.go:141] libmachine: (ha-328109) KVM machine creation complete!
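The retry lines above are libmachine polling libvirt until the guest picks up a DHCP lease on mk-ha-328109. A rough stand-alone equivalent, assuming the virsh CLI and the network name and MAC address taken from the log (the backoff values are illustrative, not minikube's actual retry schedule), could look like this:

    // Sketch: poll "virsh net-dhcp-leases" until a lease for the given MAC
    // appears, with a capped, growing retry delay.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func waitForLease(network, mac string, timeout time.Duration) (bool, error) {
        deadline := time.Now().Add(timeout)
        delay := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            out, err := exec.Command("virsh", "net-dhcp-leases", network).Output()
            if err != nil {
                return false, err
            }
            if strings.Contains(strings.ToLower(string(out)), strings.ToLower(mac)) {
                return true, nil
            }
            time.Sleep(delay)
            if delay < 5*time.Second {
                delay *= 2 // back off between polls, roughly like the retries logged above
            }
        }
        return false, nil
    }

    func main() {
        ok, err := waitForLease("mk-ha-328109", "52:54:00:53:6b:a9", 60*time.Second)
        fmt.Println(ok, err)
    }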
	I0318 12:42:58.273768 1125718 main.go:141] libmachine: (ha-328109) Calling .GetConfigRaw
	I0318 12:42:58.274300 1125718 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:42:58.274552 1125718 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:42:58.274716 1125718 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0318 12:42:58.274735 1125718 main.go:141] libmachine: (ha-328109) Calling .GetState
	I0318 12:42:58.276172 1125718 main.go:141] libmachine: Detecting operating system of created instance...
	I0318 12:42:58.276188 1125718 main.go:141] libmachine: Waiting for SSH to be available...
	I0318 12:42:58.276194 1125718 main.go:141] libmachine: Getting to WaitForSSH function...
	I0318 12:42:58.276200 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:42:58.278366 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:58.278730 1125718 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:42:58.278763 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:58.278938 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:42:58.279142 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:42:58.279304 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:42:58.279439 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:42:58.279593 1125718 main.go:141] libmachine: Using SSH client type: native
	I0318 12:42:58.279877 1125718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0318 12:42:58.279892 1125718 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0318 12:42:58.379768 1125718 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 12:42:58.379793 1125718 main.go:141] libmachine: Detecting the provisioner...
	I0318 12:42:58.379804 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:42:58.382812 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:58.383148 1125718 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:42:58.383172 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:58.383331 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:42:58.383563 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:42:58.383729 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:42:58.383876 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:42:58.384006 1125718 main.go:141] libmachine: Using SSH client type: native
	I0318 12:42:58.384182 1125718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0318 12:42:58.384194 1125718 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0318 12:42:58.485386 1125718 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0318 12:42:58.485520 1125718 main.go:141] libmachine: found compatible host: buildroot
	I0318 12:42:58.485531 1125718 main.go:141] libmachine: Provisioning with buildroot...
	I0318 12:42:58.485539 1125718 main.go:141] libmachine: (ha-328109) Calling .GetMachineName
	I0318 12:42:58.485792 1125718 buildroot.go:166] provisioning hostname "ha-328109"
	I0318 12:42:58.485820 1125718 main.go:141] libmachine: (ha-328109) Calling .GetMachineName
	I0318 12:42:58.486080 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:42:58.488787 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:58.489168 1125718 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:42:58.489199 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:58.489380 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:42:58.489562 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:42:58.489733 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:42:58.489895 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:42:58.490075 1125718 main.go:141] libmachine: Using SSH client type: native
	I0318 12:42:58.490294 1125718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0318 12:42:58.490313 1125718 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-328109 && echo "ha-328109" | sudo tee /etc/hostname
	I0318 12:42:58.608020 1125718 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-328109
	
	I0318 12:42:58.608058 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:42:58.610726 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:58.611084 1125718 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:42:58.611125 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:58.611274 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:42:58.611476 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:42:58.611682 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:42:58.611847 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:42:58.612031 1125718 main.go:141] libmachine: Using SSH client type: native
	I0318 12:42:58.612262 1125718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0318 12:42:58.612280 1125718 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-328109' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-328109/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-328109' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 12:42:58.726590 1125718 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 12:42:58.726624 1125718 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18429-1106816/.minikube CaCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18429-1106816/.minikube}
	I0318 12:42:58.726684 1125718 buildroot.go:174] setting up certificates
	I0318 12:42:58.726706 1125718 provision.go:84] configureAuth start
	I0318 12:42:58.726723 1125718 main.go:141] libmachine: (ha-328109) Calling .GetMachineName
	I0318 12:42:58.727009 1125718 main.go:141] libmachine: (ha-328109) Calling .GetIP
	I0318 12:42:58.729588 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:58.729936 1125718 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:42:58.729973 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:58.730146 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:42:58.732161 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:58.732493 1125718 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:42:58.732516 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:58.732643 1125718 provision.go:143] copyHostCerts
	I0318 12:42:58.732699 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 12:42:58.732739 1125718 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem, removing ...
	I0318 12:42:58.732751 1125718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 12:42:58.732832 1125718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem (1078 bytes)
	I0318 12:42:58.732959 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 12:42:58.732986 1125718 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem, removing ...
	I0318 12:42:58.732996 1125718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 12:42:58.733035 1125718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem (1123 bytes)
	I0318 12:42:58.733110 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 12:42:58.733131 1125718 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem, removing ...
	I0318 12:42:58.733140 1125718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 12:42:58.733176 1125718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem (1679 bytes)
	I0318 12:42:58.733256 1125718 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem org=jenkins.ha-328109 san=[127.0.0.1 192.168.39.253 ha-328109 localhost minikube]
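provision.go then issues a server certificate whose SANs are exactly the names and IPs listed above. A self-contained sketch of producing a comparable SAN-bearing certificate with Go's standard library follows; the real provisioner signs with the minikube CA rather than self-signing, and the key type and expiry here are assumptions taken loosely from the cluster config.

    // Sketch: issue a self-signed server certificate whose SANs match the
    // ones minikube logs above (127.0.0.1, 192.168.39.253, ha-328109,
    // localhost, minikube).
    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-328109"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config
            KeyUsage:     x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-328109", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.253")},
        }
        // Self-signed for the sketch: template doubles as its own parent.
        der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }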
	I0318 12:42:58.891821 1125718 provision.go:177] copyRemoteCerts
	I0318 12:42:58.891890 1125718 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 12:42:58.891922 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:42:58.894835 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:58.895175 1125718 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:42:58.895204 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:58.895396 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:42:58.895585 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:42:58.895742 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:42:58.895868 1125718 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109/id_rsa Username:docker}
	I0318 12:42:58.979289 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0318 12:42:58.979356 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 12:42:59.007758 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0318 12:42:59.007836 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0318 12:42:59.033766 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0318 12:42:59.033836 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 12:42:59.059468 1125718 provision.go:87] duration metric: took 332.748413ms to configureAuth
	I0318 12:42:59.059494 1125718 buildroot.go:189] setting minikube options for container-runtime
	I0318 12:42:59.059651 1125718 config.go:182] Loaded profile config "ha-328109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:42:59.059795 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:42:59.062390 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:59.062748 1125718 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:42:59.062778 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:59.062924 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:42:59.063124 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:42:59.063320 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:42:59.063491 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:42:59.063656 1125718 main.go:141] libmachine: Using SSH client type: native
	I0318 12:42:59.063827 1125718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0318 12:42:59.063851 1125718 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 12:42:59.339998 1125718 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 12:42:59.340032 1125718 main.go:141] libmachine: Checking connection to Docker...
	I0318 12:42:59.340055 1125718 main.go:141] libmachine: (ha-328109) Calling .GetURL
	I0318 12:42:59.341306 1125718 main.go:141] libmachine: (ha-328109) DBG | Using libvirt version 6000000
	I0318 12:42:59.343425 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:59.343752 1125718 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:42:59.343806 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:59.343929 1125718 main.go:141] libmachine: Docker is up and running!
	I0318 12:42:59.343945 1125718 main.go:141] libmachine: Reticulating splines...
	I0318 12:42:59.343953 1125718 client.go:171] duration metric: took 26.363034911s to LocalClient.Create
	I0318 12:42:59.343987 1125718 start.go:167] duration metric: took 26.363101491s to libmachine.API.Create "ha-328109"
	I0318 12:42:59.343997 1125718 start.go:293] postStartSetup for "ha-328109" (driver="kvm2")
	I0318 12:42:59.344007 1125718 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 12:42:59.344024 1125718 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:42:59.344243 1125718 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 12:42:59.344268 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:42:59.346277 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:59.346548 1125718 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:42:59.346582 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:59.346699 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:42:59.346894 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:42:59.347072 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:42:59.347266 1125718 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109/id_rsa Username:docker}
	I0318 12:42:59.427524 1125718 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 12:42:59.432462 1125718 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 12:42:59.432499 1125718 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/addons for local assets ...
	I0318 12:42:59.432567 1125718 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/files for local assets ...
	I0318 12:42:59.432654 1125718 filesync.go:149] local asset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> 11141362.pem in /etc/ssl/certs
	I0318 12:42:59.432667 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> /etc/ssl/certs/11141362.pem
	I0318 12:42:59.432797 1125718 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 12:42:59.442592 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 12:42:59.469019 1125718 start.go:296] duration metric: took 125.007436ms for postStartSetup
	I0318 12:42:59.469065 1125718 main.go:141] libmachine: (ha-328109) Calling .GetConfigRaw
	I0318 12:42:59.469773 1125718 main.go:141] libmachine: (ha-328109) Calling .GetIP
	I0318 12:42:59.472478 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:59.472842 1125718 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:42:59.472869 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:59.473167 1125718 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/config.json ...
	I0318 12:42:59.473395 1125718 start.go:128] duration metric: took 26.510754925s to createHost
	I0318 12:42:59.473423 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:42:59.475764 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:59.476083 1125718 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:42:59.476104 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:59.476225 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:42:59.476431 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:42:59.476603 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:42:59.476743 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:42:59.476873 1125718 main.go:141] libmachine: Using SSH client type: native
	I0318 12:42:59.477031 1125718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0318 12:42:59.477047 1125718 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 12:42:59.577227 1125718 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710765779.563409318
	
	I0318 12:42:59.577252 1125718 fix.go:216] guest clock: 1710765779.563409318
	I0318 12:42:59.577260 1125718 fix.go:229] Guest: 2024-03-18 12:42:59.563409318 +0000 UTC Remote: 2024-03-18 12:42:59.473409893 +0000 UTC m=+26.630089998 (delta=89.999425ms)
	I0318 12:42:59.577308 1125718 fix.go:200] guest clock delta is within tolerance: 89.999425ms
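The guest-clock check compares the VM's `date +%s.%N` output against the host clock and only intervenes when the delta exceeds a tolerance. A minimal sketch of that comparison is below; the 2s tolerance is an assumption for illustration, not minikube's configured value.

    // Sketch: parse a "seconds.nanoseconds" timestamp as printed by
    // `date +%s.%N` on the guest and compare it with the local clock.
    package main

    import (
        "fmt"
        "math"
        "strconv"
        "time"
    )

    func withinTolerance(guestStamp string, tol time.Duration) (time.Duration, bool, error) {
        secs, err := strconv.ParseFloat(guestStamp, 64)
        if err != nil {
            return 0, false, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        delta := time.Since(guest)
        return delta, math.Abs(float64(delta)) <= float64(tol), nil
    }

    func main() {
        // Timestamp copied from the log line above.
        delta, ok, err := withinTolerance("1710765779.563409318", 2*time.Second)
        fmt.Println(delta, ok, err)
    }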
	I0318 12:42:59.577317 1125718 start.go:83] releasing machines lock for "ha-328109", held for 26.614764446s
	I0318 12:42:59.577342 1125718 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:42:59.577641 1125718 main.go:141] libmachine: (ha-328109) Calling .GetIP
	I0318 12:42:59.580162 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:59.580574 1125718 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:42:59.580601 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:59.580810 1125718 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:42:59.581276 1125718 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:42:59.581469 1125718 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:42:59.581591 1125718 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 12:42:59.581637 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:42:59.581651 1125718 ssh_runner.go:195] Run: cat /version.json
	I0318 12:42:59.581681 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:42:59.584224 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:59.584414 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:59.584656 1125718 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:42:59.584684 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:59.584778 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:42:59.584806 1125718 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:42:59.584830 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:59.584953 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:42:59.585016 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:42:59.585184 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:42:59.585198 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:42:59.585374 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:42:59.585372 1125718 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109/id_rsa Username:docker}
	I0318 12:42:59.585507 1125718 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109/id_rsa Username:docker}
	I0318 12:42:59.683378 1125718 ssh_runner.go:195] Run: systemctl --version
	I0318 12:42:59.689815 1125718 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 12:42:59.848150 1125718 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 12:42:59.855282 1125718 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 12:42:59.855355 1125718 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 12:42:59.872299 1125718 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 12:42:59.872320 1125718 start.go:494] detecting cgroup driver to use...
	I0318 12:42:59.872396 1125718 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 12:42:59.890688 1125718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 12:42:59.905298 1125718 docker.go:217] disabling cri-docker service (if available) ...
	I0318 12:42:59.905355 1125718 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 12:42:59.919060 1125718 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 12:42:59.932778 1125718 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 12:43:00.049114 1125718 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 12:43:00.182331 1125718 docker.go:233] disabling docker service ...
	I0318 12:43:00.182396 1125718 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 12:43:00.198331 1125718 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 12:43:00.212991 1125718 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 12:43:00.348866 1125718 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 12:43:00.469879 1125718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 12:43:00.485742 1125718 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 12:43:00.506025 1125718 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 12:43:00.506083 1125718 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 12:43:00.517952 1125718 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 12:43:00.518013 1125718 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 12:43:00.530178 1125718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 12:43:00.541859 1125718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
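The sed edits above point CRI-O at the registry.k8s.io/pause:3.9 sandbox image, switch it to the cgroupfs cgroup manager, and pin conmon to the "pod" cgroup, all in /etc/crio/crio.conf.d/02-crio.conf. A minimal sketch of double-checking the result from the host (profile name and path taken from this log; assumes minikube ssh reaches the primary node):

    minikube -p ha-328109 ssh -- grep -e pause_image -e cgroup_manager -e conmon_cgroup /etc/crio/crio.conf.d/02-crio.conf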
	I0318 12:43:00.553792 1125718 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 12:43:00.565862 1125718 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 12:43:00.576407 1125718 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 12:43:00.576451 1125718 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 12:43:00.590759 1125718 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
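Since /proc/sys/net/bridge/bridge-nf-call-iptables did not exist yet, the br_netfilter module is loaded and IPv4 forwarding is enabled before the runtime restart. A quick sanity check inside the guest would look like the sketch below (both sysctls are expected to read 1 after these steps):

    lsmod | grep br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward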
	I0318 12:43:00.601582 1125718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:43:00.718655 1125718 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 12:43:00.870021 1125718 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 12:43:00.870091 1125718 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 12:43:00.877167 1125718 start.go:562] Will wait 60s for crictl version
	I0318 12:43:00.877236 1125718 ssh_runner.go:195] Run: which crictl
	I0318 12:43:00.881823 1125718 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 12:43:00.923854 1125718 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 12:43:00.923930 1125718 ssh_runner.go:195] Run: crio --version
	I0318 12:43:00.955517 1125718 ssh_runner.go:195] Run: crio --version
	I0318 12:43:00.988604 1125718 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 12:43:00.990186 1125718 main.go:141] libmachine: (ha-328109) Calling .GetIP
	I0318 12:43:00.992525 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:43:00.992824 1125718 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:43:00.992853 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:43:00.993022 1125718 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0318 12:43:00.997695 1125718 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 12:43:01.011699 1125718 kubeadm.go:877] updating cluster {Name:ha-328109 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Cl
usterName:ha-328109 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.253 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 12:43:01.011827 1125718 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 12:43:01.011892 1125718 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 12:43:01.047347 1125718 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0318 12:43:01.047437 1125718 ssh_runner.go:195] Run: which lz4
	I0318 12:43:01.051747 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0318 12:43:01.051842 1125718 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 12:43:01.056408 1125718 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 12:43:01.056446 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0318 12:43:02.876595 1125718 crio.go:444] duration metric: took 1.82478261s to copy over tarball
	I0318 12:43:02.876680 1125718 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 12:43:05.445107 1125718 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.568390906s)
	I0318 12:43:05.445141 1125718 crio.go:451] duration metric: took 2.568510194s to extract the tarball
	I0318 12:43:05.445151 1125718 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 12:43:05.488343 1125718 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 12:43:05.538446 1125718 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 12:43:05.538475 1125718 cache_images.go:84] Images are preloaded, skipping loading
	I0318 12:43:05.538484 1125718 kubeadm.go:928] updating node { 192.168.39.253 8443 v1.28.4 crio true true} ...
	I0318 12:43:05.538616 1125718 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-328109 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.253
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-328109 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 12:43:05.538696 1125718 ssh_runner.go:195] Run: crio config
	I0318 12:43:05.588974 1125718 cni.go:84] Creating CNI manager for ""
	I0318 12:43:05.589000 1125718 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0318 12:43:05.589012 1125718 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 12:43:05.589038 1125718 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.253 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-328109 NodeName:ha-328109 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.253"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.253 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 12:43:05.589267 1125718 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.253
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-328109"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.253
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.253"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
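The generated kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml (the scp and cp appear further down) and drives the kubeadm init run. A non-destructive way to exercise such a config on the node would be a dry run; this is only a sketch and not part of the test flow:

    sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run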
	I0318 12:43:05.589299 1125718 kube-vip.go:111] generating kube-vip config ...
	I0318 12:43:05.589345 1125718 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0318 12:43:05.607828 1125718 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0318 12:43:05.607991 1125718 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
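kube-vip runs as a static pod out of /etc/kubernetes/manifests and, with cp_enable and lb_enable set, advertises the control-plane VIP 192.168.39.254 on eth0 and load-balances port 8443 across control-plane members. Once the cluster is up, the VIP can be checked from the host; a sketch, with the interface and address taken from the manifest above:

    minikube -p ha-328109 ssh -- ip -4 addr show eth0 | grep 192.168.39.254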
	I0318 12:43:05.608051 1125718 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 12:43:05.619777 1125718 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 12:43:05.619841 1125718 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0318 12:43:05.630602 1125718 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0318 12:43:05.648883 1125718 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 12:43:05.666806 1125718 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0318 12:43:05.684911 1125718 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0318 12:43:05.702918 1125718 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0318 12:43:05.707333 1125718 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 12:43:05.721730 1125718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:43:05.844199 1125718 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 12:43:05.865494 1125718 certs.go:68] Setting up /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109 for IP: 192.168.39.253
	I0318 12:43:05.865521 1125718 certs.go:194] generating shared ca certs ...
	I0318 12:43:05.865541 1125718 certs.go:226] acquiring lock for ca certs: {Name:mka24dbce78b8549c3bcfe7842863cce2dcda207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:43:05.865749 1125718 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key
	I0318 12:43:05.865833 1125718 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key
	I0318 12:43:05.865854 1125718 certs.go:256] generating profile certs ...
	I0318 12:43:05.865939 1125718 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/client.key
	I0318 12:43:05.865958 1125718 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/client.crt with IP's: []
	I0318 12:43:06.059925 1125718 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/client.crt ...
	I0318 12:43:06.059957 1125718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/client.crt: {Name:mk98d1028bb046ec14cfc2db8eaed8adeb0938fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:43:06.060157 1125718 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/client.key ...
	I0318 12:43:06.060172 1125718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/client.key: {Name:mkad7c16b97c067b718bfe3b7a476b91257e5668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:43:06.060295 1125718 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key.06b165d1
	I0318 12:43:06.060322 1125718 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt.06b165d1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.253 192.168.39.254]
	I0318 12:43:06.137070 1125718 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt.06b165d1 ...
	I0318 12:43:06.137102 1125718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt.06b165d1: {Name:mk3e37e6b5fb439da6c5ece9a6decbb4962ddeae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:43:06.137279 1125718 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key.06b165d1 ...
	I0318 12:43:06.137301 1125718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key.06b165d1: {Name:mk97eab05a308922396449b4f891c0c3075c0118 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:43:06.137396 1125718 certs.go:381] copying /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt.06b165d1 -> /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt
	I0318 12:43:06.137521 1125718 certs.go:385] copying /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key.06b165d1 -> /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key
	I0318 12:43:06.137607 1125718 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/proxy-client.key
	I0318 12:43:06.137626 1125718 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/proxy-client.crt with IP's: []
	I0318 12:43:06.201657 1125718 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/proxy-client.crt ...
	I0318 12:43:06.201692 1125718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/proxy-client.crt: {Name:mkf1fee34716d4ec97d785b76997dc5ca77c33e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:43:06.201908 1125718 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/proxy-client.key ...
	I0318 12:43:06.201926 1125718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/proxy-client.key: {Name:mkbe491bf5b0ea170f6d25c9f206dd2996a733e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:43:06.202029 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 12:43:06.202055 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0318 12:43:06.202077 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 12:43:06.202100 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 12:43:06.202117 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0318 12:43:06.202130 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0318 12:43:06.202146 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0318 12:43:06.202165 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0318 12:43:06.202232 1125718 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem (1338 bytes)
	W0318 12:43:06.202290 1125718 certs.go:480] ignoring /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136_empty.pem, impossibly tiny 0 bytes
	I0318 12:43:06.202304 1125718 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 12:43:06.202345 1125718 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem (1078 bytes)
	I0318 12:43:06.202374 1125718 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem (1123 bytes)
	I0318 12:43:06.202403 1125718 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem (1679 bytes)
	I0318 12:43:06.202459 1125718 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 12:43:06.202498 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> /usr/share/ca-certificates/11141362.pem
	I0318 12:43:06.202518 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:43:06.202536 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem -> /usr/share/ca-certificates/1114136.pem
	I0318 12:43:06.203187 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 12:43:06.231563 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 12:43:06.259372 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 12:43:06.286691 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 12:43:06.313297 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0318 12:43:06.342092 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 12:43:06.368547 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 12:43:06.395181 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 12:43:06.422955 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /usr/share/ca-certificates/11141362.pem (1708 bytes)
	I0318 12:43:06.449465 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 12:43:06.476590 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem --> /usr/share/ca-certificates/1114136.pem (1338 bytes)
	I0318 12:43:06.503893 1125718 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 12:43:06.524061 1125718 ssh_runner.go:195] Run: openssl version
	I0318 12:43:06.530679 1125718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11141362.pem && ln -fs /usr/share/ca-certificates/11141362.pem /etc/ssl/certs/11141362.pem"
	I0318 12:43:06.544560 1125718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11141362.pem
	I0318 12:43:06.550018 1125718 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:26 /usr/share/ca-certificates/11141362.pem
	I0318 12:43:06.550065 1125718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11141362.pem
	I0318 12:43:06.556606 1125718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11141362.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 12:43:06.570092 1125718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 12:43:06.582834 1125718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:43:06.588037 1125718 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:43:06.588086 1125718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:43:06.594336 1125718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 12:43:06.607303 1125718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1114136.pem && ln -fs /usr/share/ca-certificates/1114136.pem /etc/ssl/certs/1114136.pem"
	I0318 12:43:06.620246 1125718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1114136.pem
	I0318 12:43:06.625361 1125718 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:26 /usr/share/ca-certificates/1114136.pem
	I0318 12:43:06.625413 1125718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1114136.pem
	I0318 12:43:06.631604 1125718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1114136.pem /etc/ssl/certs/51391683.0"
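The openssl x509 -hash calls above compute each CA's subject-name hash, which becomes the symlink name under /etc/ssl/certs (b5213941.0 for minikubeCA.pem, 3ec20f2e.0 and 51391683.0 for the two test certificates). The mechanism, as a sketch:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # prints b5213941; OpenSSL then resolves the CA through /etc/ssl/certs/b5213941.0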
	I0318 12:43:06.643973 1125718 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 12:43:06.648604 1125718 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 12:43:06.648662 1125718 kubeadm.go:391] StartCluster: {Name:ha-328109 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Clust
erName:ha-328109 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.253 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 12:43:06.648748 1125718 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 12:43:06.648828 1125718 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 12:43:06.700166 1125718 cri.go:89] found id: ""
	I0318 12:43:06.700240 1125718 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0318 12:43:06.719130 1125718 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 12:43:06.735317 1125718 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 12:43:06.751003 1125718 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 12:43:06.751024 1125718 kubeadm.go:156] found existing configuration files:
	
	I0318 12:43:06.751065 1125718 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 12:43:06.761703 1125718 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 12:43:06.761748 1125718 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 12:43:06.772582 1125718 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 12:43:06.783256 1125718 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 12:43:06.783310 1125718 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 12:43:06.794478 1125718 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 12:43:06.805372 1125718 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 12:43:06.805430 1125718 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 12:43:06.817218 1125718 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 12:43:06.826995 1125718 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 12:43:06.827050 1125718 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 12:43:06.837914 1125718 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 12:43:06.948251 1125718 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 12:43:06.948313 1125718 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 12:43:07.085088 1125718 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 12:43:07.085240 1125718 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 12:43:07.085364 1125718 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 12:43:07.307399 1125718 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 12:43:07.415769 1125718 out.go:204]   - Generating certificates and keys ...
	I0318 12:43:07.415882 1125718 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 12:43:07.415963 1125718 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 12:43:07.548702 1125718 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0318 12:43:07.595062 1125718 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0318 12:43:07.842592 1125718 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0318 12:43:07.910806 1125718 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0318 12:43:08.058724 1125718 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0318 12:43:08.058857 1125718 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-328109 localhost] and IPs [192.168.39.253 127.0.0.1 ::1]
	I0318 12:43:08.280941 1125718 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0318 12:43:08.281223 1125718 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-328109 localhost] and IPs [192.168.39.253 127.0.0.1 ::1]
	I0318 12:43:08.675729 1125718 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0318 12:43:08.848717 1125718 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0318 12:43:08.915219 1125718 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0318 12:43:08.915399 1125718 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 12:43:09.279825 1125718 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 12:43:09.339098 1125718 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 12:43:09.494758 1125718 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 12:43:09.734925 1125718 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 12:43:09.736742 1125718 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 12:43:09.742603 1125718 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 12:43:09.744602 1125718 out.go:204]   - Booting up control plane ...
	I0318 12:43:09.744708 1125718 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 12:43:09.744800 1125718 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 12:43:09.744857 1125718 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 12:43:09.763160 1125718 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 12:43:09.763939 1125718 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 12:43:09.763985 1125718 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 12:43:09.915668 1125718 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 12:43:19.535392 1125718 kubeadm.go:309] [apiclient] All control plane components are healthy after 9.620407 seconds
	I0318 12:43:19.535517 1125718 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 12:43:19.562002 1125718 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 12:43:20.114490 1125718 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 12:43:20.114731 1125718 kubeadm.go:309] [mark-control-plane] Marking the node ha-328109 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 12:43:20.630454 1125718 kubeadm.go:309] [bootstrap-token] Using token: fi8sec.f0o3w4sfps43kmi2
	I0318 12:43:20.632029 1125718 out.go:204]   - Configuring RBAC rules ...
	I0318 12:43:20.632153 1125718 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 12:43:20.638344 1125718 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 12:43:20.648575 1125718 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 12:43:20.652191 1125718 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 12:43:20.655760 1125718 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 12:43:20.660143 1125718 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 12:43:20.716031 1125718 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 12:43:20.970658 1125718 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 12:43:21.081598 1125718 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 12:43:21.083185 1125718 kubeadm.go:309] 
	I0318 12:43:21.083260 1125718 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 12:43:21.083271 1125718 kubeadm.go:309] 
	I0318 12:43:21.083374 1125718 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 12:43:21.083401 1125718 kubeadm.go:309] 
	I0318 12:43:21.083441 1125718 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 12:43:21.083516 1125718 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 12:43:21.083598 1125718 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 12:43:21.083613 1125718 kubeadm.go:309] 
	I0318 12:43:21.083715 1125718 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 12:43:21.083734 1125718 kubeadm.go:309] 
	I0318 12:43:21.083825 1125718 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 12:43:21.083845 1125718 kubeadm.go:309] 
	I0318 12:43:21.083934 1125718 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 12:43:21.084053 1125718 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 12:43:21.084167 1125718 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 12:43:21.084185 1125718 kubeadm.go:309] 
	I0318 12:43:21.084319 1125718 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 12:43:21.084453 1125718 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 12:43:21.084463 1125718 kubeadm.go:309] 
	I0318 12:43:21.084553 1125718 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token fi8sec.f0o3w4sfps43kmi2 \
	I0318 12:43:21.084688 1125718 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a1344f0f3711f58fc1cd7b9626e9cee7b8515c38b8838e5f1a0d7979c7202ddf \
	I0318 12:43:21.084722 1125718 kubeadm.go:309] 	--control-plane 
	I0318 12:43:21.084731 1125718 kubeadm.go:309] 
	I0318 12:43:21.084852 1125718 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 12:43:21.084862 1125718 kubeadm.go:309] 
	I0318 12:43:21.084960 1125718 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token fi8sec.f0o3w4sfps43kmi2 \
	I0318 12:43:21.085105 1125718 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a1344f0f3711f58fc1cd7b9626e9cee7b8515c38b8838e5f1a0d7979c7202ddf 
	I0318 12:43:21.086261 1125718 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
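The join commands printed above use a bootstrap token with a 24h TTL (see ttl: 24h0m0s in the kubeadm config earlier), so they expire. If a fresh worker join command were needed later, it could be regenerated on this control-plane node; a sketch, assuming the minikube-shipped kubeadm binary:

    sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" \
      kubeadm token create --print-join-command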
	I0318 12:43:21.086293 1125718 cni.go:84] Creating CNI manager for ""
	I0318 12:43:21.086307 1125718 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0318 12:43:21.088108 1125718 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0318 12:43:21.089501 1125718 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0318 12:43:21.112180 1125718 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0318 12:43:21.112203 1125718 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0318 12:43:21.199282 1125718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0318 12:43:22.196147 1125718 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 12:43:22.196247 1125718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:43:22.196247 1125718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-328109 minikube.k8s.io/updated_at=2024_03_18T12_43_22_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a minikube.k8s.io/name=ha-328109 minikube.k8s.io/primary=true
	I0318 12:43:22.210588 1125718 ops.go:34] apiserver oom_adj: -16
	I0318 12:43:22.379332 1125718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:43:22.879356 1125718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:43:23.379974 1125718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:43:23.880167 1125718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:43:24.379327 1125718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:43:24.879818 1125718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:43:25.380309 1125718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:43:25.880218 1125718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:43:26.379974 1125718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:43:26.879374 1125718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:43:27.380212 1125718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:43:27.879608 1125718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:43:28.379586 1125718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:43:28.879340 1125718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:43:29.379342 1125718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:43:29.880361 1125718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:43:30.379853 1125718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:43:30.879547 1125718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:43:31.379737 1125718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:43:31.507187 1125718 kubeadm.go:1107] duration metric: took 9.3110211s to wait for elevateKubeSystemPrivileges
	W0318 12:43:31.507230 1125718 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 12:43:31.507240 1125718 kubeadm.go:393] duration metric: took 24.85858693s to StartCluster
	I0318 12:43:31.507264 1125718 settings.go:142] acquiring lock: {Name:mk2d6b94ee5fa5f1dbbb15ba1d5560c3c0f78110 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:43:31.507355 1125718 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 12:43:31.508126 1125718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/kubeconfig: {Name:mk9c139f2702214315ee08dd7c5d02f739047458 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:43:31.508398 1125718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0318 12:43:31.508417 1125718 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 12:43:31.508486 1125718 addons.go:69] Setting storage-provisioner=true in profile "ha-328109"
	I0318 12:43:31.508513 1125718 addons.go:234] Setting addon storage-provisioner=true in "ha-328109"
	I0318 12:43:31.508532 1125718 addons.go:69] Setting default-storageclass=true in profile "ha-328109"
	I0318 12:43:31.508574 1125718 host.go:66] Checking if "ha-328109" exists ...
	I0318 12:43:31.508603 1125718 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-328109"
	I0318 12:43:31.508671 1125718 config.go:182] Loaded profile config "ha-328109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:43:31.508389 1125718 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.253 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 12:43:31.508727 1125718 start.go:240] waiting for startup goroutines ...
	I0318 12:43:31.509020 1125718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:43:31.509040 1125718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:43:31.509070 1125718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:43:31.509224 1125718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:43:31.524299 1125718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46755
	I0318 12:43:31.524409 1125718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46617
	I0318 12:43:31.524812 1125718 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:43:31.524868 1125718 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:43:31.525347 1125718 main.go:141] libmachine: Using API Version  1
	I0318 12:43:31.525369 1125718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:43:31.525507 1125718 main.go:141] libmachine: Using API Version  1
	I0318 12:43:31.525533 1125718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:43:31.525828 1125718 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:43:31.525853 1125718 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:43:31.526065 1125718 main.go:141] libmachine: (ha-328109) Calling .GetState
	I0318 12:43:31.526360 1125718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:43:31.526400 1125718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:43:31.528502 1125718 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 12:43:31.528879 1125718 kapi.go:59] client config for ha-328109: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/client.crt", KeyFile:"/home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/client.key", CAFile:"/home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]strin
g(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c57de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 12:43:31.529504 1125718 cert_rotation.go:137] Starting client certificate rotation controller
	I0318 12:43:31.529773 1125718 addons.go:234] Setting addon default-storageclass=true in "ha-328109"
	I0318 12:43:31.529821 1125718 host.go:66] Checking if "ha-328109" exists ...
	I0318 12:43:31.530209 1125718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:43:31.530256 1125718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:43:31.542348 1125718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34067
	I0318 12:43:31.542880 1125718 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:43:31.543576 1125718 main.go:141] libmachine: Using API Version  1
	I0318 12:43:31.543600 1125718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:43:31.544037 1125718 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:43:31.544253 1125718 main.go:141] libmachine: (ha-328109) Calling .GetState
	I0318 12:43:31.545543 1125718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37975
	I0318 12:43:31.545963 1125718 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:43:31.546406 1125718 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:43:31.546501 1125718 main.go:141] libmachine: Using API Version  1
	I0318 12:43:31.546520 1125718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:43:31.548743 1125718 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 12:43:31.546859 1125718 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:43:31.550221 1125718 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 12:43:31.550245 1125718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 12:43:31.550264 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:43:31.550507 1125718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:43:31.550543 1125718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:43:31.553144 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:43:31.553615 1125718 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:43:31.553646 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:43:31.553759 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:43:31.553931 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:43:31.554100 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:43:31.554212 1125718 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109/id_rsa Username:docker}
	I0318 12:43:31.565892 1125718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45463
	I0318 12:43:31.566251 1125718 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:43:31.566676 1125718 main.go:141] libmachine: Using API Version  1
	I0318 12:43:31.566696 1125718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:43:31.566999 1125718 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:43:31.567188 1125718 main.go:141] libmachine: (ha-328109) Calling .GetState
	I0318 12:43:31.568775 1125718 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:43:31.569037 1125718 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 12:43:31.569053 1125718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 12:43:31.569067 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:43:31.571988 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:43:31.572402 1125718 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:43:31.572441 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:43:31.572579 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:43:31.572763 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:43:31.572919 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:43:31.573063 1125718 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109/id_rsa Username:docker}
	I0318 12:43:31.702231 1125718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 12:43:31.727244 1125718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 12:43:31.734958 1125718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0318 12:43:32.686170 1125718 main.go:141] libmachine: Making call to close driver server
	I0318 12:43:32.686194 1125718 main.go:141] libmachine: (ha-328109) Calling .Close
	I0318 12:43:32.686225 1125718 start.go:948] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0318 12:43:32.686286 1125718 main.go:141] libmachine: Making call to close driver server
	I0318 12:43:32.686308 1125718 main.go:141] libmachine: (ha-328109) Calling .Close
	I0318 12:43:32.686534 1125718 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:43:32.686551 1125718 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:43:32.686560 1125718 main.go:141] libmachine: Making call to close driver server
	I0318 12:43:32.686568 1125718 main.go:141] libmachine: (ha-328109) Calling .Close
	I0318 12:43:32.686667 1125718 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:43:32.686685 1125718 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:43:32.686694 1125718 main.go:141] libmachine: Making call to close driver server
	I0318 12:43:32.686708 1125718 main.go:141] libmachine: (ha-328109) Calling .Close
	I0318 12:43:32.686872 1125718 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:43:32.686886 1125718 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:43:32.686998 1125718 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0318 12:43:32.687015 1125718 round_trippers.go:469] Request Headers:
	I0318 12:43:32.687025 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:32.687030 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:43:32.687040 1125718 main.go:141] libmachine: (ha-328109) DBG | Closing plugin on server side
	I0318 12:43:32.687109 1125718 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:43:32.687149 1125718 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:43:32.699529 1125718 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0318 12:43:32.700139 1125718 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0318 12:43:32.700157 1125718 round_trippers.go:469] Request Headers:
	I0318 12:43:32.700164 1125718 round_trippers.go:473]     Content-Type: application/json
	I0318 12:43:32.700167 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:43:32.700169 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:32.702859 1125718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:43:32.703005 1125718 main.go:141] libmachine: Making call to close driver server
	I0318 12:43:32.703017 1125718 main.go:141] libmachine: (ha-328109) Calling .Close
	I0318 12:43:32.703293 1125718 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:43:32.703326 1125718 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:43:32.705165 1125718 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0318 12:43:32.706458 1125718 addons.go:505] duration metric: took 1.198043636s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0318 12:43:32.706490 1125718 start.go:245] waiting for cluster config update ...
	I0318 12:43:32.706501 1125718 start.go:254] writing updated cluster config ...
	I0318 12:43:32.708205 1125718 out.go:177] 
	I0318 12:43:32.709707 1125718 config.go:182] Loaded profile config "ha-328109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:43:32.709776 1125718 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/config.json ...
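The block above closes out setup of the first control-plane node: the storage-provisioner and default-storageclass addons were applied, a host.minikube.internal record was injected into the CoreDNS ConfigMap, and the updated cluster config was written back to config.json. A minimal spot check, assuming the profile name doubles as the kubeconfig context and that the addon objects keep their usual names (a storage-provisioner pod in kube-system, plus the "standard" StorageClass seen in the PUT above), would be:

	kubectl --context ha-328109 -n kube-system get pod storage-provisioner
	kubectl --context ha-328109 get storageclass standard
	kubectl --context ha-328109 -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'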
	I0318 12:43:32.711333 1125718 out.go:177] * Starting "ha-328109-m02" control-plane node in "ha-328109" cluster
	I0318 12:43:32.712448 1125718 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 12:43:32.712474 1125718 cache.go:56] Caching tarball of preloaded images
	I0318 12:43:32.712584 1125718 preload.go:173] Found /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 12:43:32.712600 1125718 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
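The preload step above is a cache hit: the v1.28.4 cri-o preload tarball already exists under the test host's .minikube cache, so nothing is downloaded before the second node is built. To see what is cached (path taken verbatim from the log), one could run on the test host:

	ls -lh /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/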
	I0318 12:43:32.712676 1125718 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/config.json ...
	I0318 12:43:32.712846 1125718 start.go:360] acquireMachinesLock for ha-328109-m02: {Name:mk0b1a2e71faf079d0c16c4e1393bdff17be3dfd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 12:43:32.712887 1125718 start.go:364] duration metric: took 23.508µs to acquireMachinesLock for "ha-328109-m02"
	I0318 12:43:32.712907 1125718 start.go:93] Provisioning new machine with config: &{Name:ha-328109 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-328109 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.253 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 12:43:32.712972 1125718 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0318 12:43:32.714457 1125718 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 12:43:32.714536 1125718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:43:32.714572 1125718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:43:32.729074 1125718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36541
	I0318 12:43:32.729506 1125718 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:43:32.729973 1125718 main.go:141] libmachine: Using API Version  1
	I0318 12:43:32.729995 1125718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:43:32.730340 1125718 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:43:32.730540 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetMachineName
	I0318 12:43:32.730708 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .DriverName
	I0318 12:43:32.730861 1125718 start.go:159] libmachine.API.Create for "ha-328109" (driver="kvm2")
	I0318 12:43:32.730892 1125718 client.go:168] LocalClient.Create starting
	I0318 12:43:32.730921 1125718 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem
	I0318 12:43:32.730960 1125718 main.go:141] libmachine: Decoding PEM data...
	I0318 12:43:32.730979 1125718 main.go:141] libmachine: Parsing certificate...
	I0318 12:43:32.731046 1125718 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem
	I0318 12:43:32.731070 1125718 main.go:141] libmachine: Decoding PEM data...
	I0318 12:43:32.731088 1125718 main.go:141] libmachine: Parsing certificate...
	I0318 12:43:32.731116 1125718 main.go:141] libmachine: Running pre-create checks...
	I0318 12:43:32.731128 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .PreCreateCheck
	I0318 12:43:32.731316 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetConfigRaw
	I0318 12:43:32.731704 1125718 main.go:141] libmachine: Creating machine...
	I0318 12:43:32.731720 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .Create
	I0318 12:43:32.731875 1125718 main.go:141] libmachine: (ha-328109-m02) Creating KVM machine...
	I0318 12:43:32.733153 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | found existing default KVM network
	I0318 12:43:32.733356 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | found existing private KVM network mk-ha-328109
	I0318 12:43:32.733514 1125718 main.go:141] libmachine: (ha-328109-m02) Setting up store path in /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m02 ...
	I0318 12:43:32.733543 1125718 main.go:141] libmachine: (ha-328109-m02) Building disk image from file:///home/jenkins/minikube-integration/18429-1106816/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso
	I0318 12:43:32.733589 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | I0318 12:43:32.733486 1126085 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18429-1106816/.minikube
	I0318 12:43:32.733788 1125718 main.go:141] libmachine: (ha-328109-m02) Downloading /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18429-1106816/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0318 12:43:32.986625 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | I0318 12:43:32.986490 1126085 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m02/id_rsa...
	I0318 12:43:33.068219 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | I0318 12:43:33.068080 1126085 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m02/ha-328109-m02.rawdisk...
	I0318 12:43:33.068258 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | Writing magic tar header
	I0318 12:43:33.068272 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | Writing SSH key tar header
	I0318 12:43:33.068284 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | I0318 12:43:33.068215 1126085 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m02 ...
	I0318 12:43:33.068390 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m02
	I0318 12:43:33.068435 1125718 main.go:141] libmachine: (ha-328109-m02) Setting executable bit set on /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m02 (perms=drwx------)
	I0318 12:43:33.068449 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines
	I0318 12:43:33.068471 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18429-1106816/.minikube
	I0318 12:43:33.068480 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18429-1106816
	I0318 12:43:33.068490 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0318 12:43:33.068507 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | Checking permissions on dir: /home/jenkins
	I0318 12:43:33.068518 1125718 main.go:141] libmachine: (ha-328109-m02) Setting executable bit set on /home/jenkins/minikube-integration/18429-1106816/.minikube/machines (perms=drwxr-xr-x)
	I0318 12:43:33.068531 1125718 main.go:141] libmachine: (ha-328109-m02) Setting executable bit set on /home/jenkins/minikube-integration/18429-1106816/.minikube (perms=drwxr-xr-x)
	I0318 12:43:33.068545 1125718 main.go:141] libmachine: (ha-328109-m02) Setting executable bit set on /home/jenkins/minikube-integration/18429-1106816 (perms=drwxrwxr-x)
	I0318 12:43:33.068557 1125718 main.go:141] libmachine: (ha-328109-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0318 12:43:33.068568 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | Checking permissions on dir: /home
	I0318 12:43:33.068592 1125718 main.go:141] libmachine: (ha-328109-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0318 12:43:33.068608 1125718 main.go:141] libmachine: (ha-328109-m02) Creating domain...
	I0318 12:43:33.068621 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | Skipping /home - not owner
	I0318 12:43:33.069682 1125718 main.go:141] libmachine: (ha-328109-m02) define libvirt domain using xml: 
	I0318 12:43:33.069706 1125718 main.go:141] libmachine: (ha-328109-m02) <domain type='kvm'>
	I0318 12:43:33.069717 1125718 main.go:141] libmachine: (ha-328109-m02)   <name>ha-328109-m02</name>
	I0318 12:43:33.069730 1125718 main.go:141] libmachine: (ha-328109-m02)   <memory unit='MiB'>2200</memory>
	I0318 12:43:33.069742 1125718 main.go:141] libmachine: (ha-328109-m02)   <vcpu>2</vcpu>
	I0318 12:43:33.069753 1125718 main.go:141] libmachine: (ha-328109-m02)   <features>
	I0318 12:43:33.069761 1125718 main.go:141] libmachine: (ha-328109-m02)     <acpi/>
	I0318 12:43:33.069770 1125718 main.go:141] libmachine: (ha-328109-m02)     <apic/>
	I0318 12:43:33.069778 1125718 main.go:141] libmachine: (ha-328109-m02)     <pae/>
	I0318 12:43:33.069788 1125718 main.go:141] libmachine: (ha-328109-m02)     
	I0318 12:43:33.069796 1125718 main.go:141] libmachine: (ha-328109-m02)   </features>
	I0318 12:43:33.069810 1125718 main.go:141] libmachine: (ha-328109-m02)   <cpu mode='host-passthrough'>
	I0318 12:43:33.069836 1125718 main.go:141] libmachine: (ha-328109-m02)   
	I0318 12:43:33.069855 1125718 main.go:141] libmachine: (ha-328109-m02)   </cpu>
	I0318 12:43:33.069905 1125718 main.go:141] libmachine: (ha-328109-m02)   <os>
	I0318 12:43:33.069932 1125718 main.go:141] libmachine: (ha-328109-m02)     <type>hvm</type>
	I0318 12:43:33.069943 1125718 main.go:141] libmachine: (ha-328109-m02)     <boot dev='cdrom'/>
	I0318 12:43:33.069953 1125718 main.go:141] libmachine: (ha-328109-m02)     <boot dev='hd'/>
	I0318 12:43:33.069967 1125718 main.go:141] libmachine: (ha-328109-m02)     <bootmenu enable='no'/>
	I0318 12:43:33.069977 1125718 main.go:141] libmachine: (ha-328109-m02)   </os>
	I0318 12:43:33.069987 1125718 main.go:141] libmachine: (ha-328109-m02)   <devices>
	I0318 12:43:33.070017 1125718 main.go:141] libmachine: (ha-328109-m02)     <disk type='file' device='cdrom'>
	I0318 12:43:33.070033 1125718 main.go:141] libmachine: (ha-328109-m02)       <source file='/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m02/boot2docker.iso'/>
	I0318 12:43:33.070047 1125718 main.go:141] libmachine: (ha-328109-m02)       <target dev='hdc' bus='scsi'/>
	I0318 12:43:33.070058 1125718 main.go:141] libmachine: (ha-328109-m02)       <readonly/>
	I0318 12:43:33.070069 1125718 main.go:141] libmachine: (ha-328109-m02)     </disk>
	I0318 12:43:33.070080 1125718 main.go:141] libmachine: (ha-328109-m02)     <disk type='file' device='disk'>
	I0318 12:43:33.070093 1125718 main.go:141] libmachine: (ha-328109-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0318 12:43:33.070108 1125718 main.go:141] libmachine: (ha-328109-m02)       <source file='/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m02/ha-328109-m02.rawdisk'/>
	I0318 12:43:33.070119 1125718 main.go:141] libmachine: (ha-328109-m02)       <target dev='hda' bus='virtio'/>
	I0318 12:43:33.070139 1125718 main.go:141] libmachine: (ha-328109-m02)     </disk>
	I0318 12:43:33.070161 1125718 main.go:141] libmachine: (ha-328109-m02)     <interface type='network'>
	I0318 12:43:33.070171 1125718 main.go:141] libmachine: (ha-328109-m02)       <source network='mk-ha-328109'/>
	I0318 12:43:33.070182 1125718 main.go:141] libmachine: (ha-328109-m02)       <model type='virtio'/>
	I0318 12:43:33.070194 1125718 main.go:141] libmachine: (ha-328109-m02)     </interface>
	I0318 12:43:33.070205 1125718 main.go:141] libmachine: (ha-328109-m02)     <interface type='network'>
	I0318 12:43:33.070219 1125718 main.go:141] libmachine: (ha-328109-m02)       <source network='default'/>
	I0318 12:43:33.070226 1125718 main.go:141] libmachine: (ha-328109-m02)       <model type='virtio'/>
	I0318 12:43:33.070251 1125718 main.go:141] libmachine: (ha-328109-m02)     </interface>
	I0318 12:43:33.070274 1125718 main.go:141] libmachine: (ha-328109-m02)     <serial type='pty'>
	I0318 12:43:33.070287 1125718 main.go:141] libmachine: (ha-328109-m02)       <target port='0'/>
	I0318 12:43:33.070297 1125718 main.go:141] libmachine: (ha-328109-m02)     </serial>
	I0318 12:43:33.070306 1125718 main.go:141] libmachine: (ha-328109-m02)     <console type='pty'>
	I0318 12:43:33.070318 1125718 main.go:141] libmachine: (ha-328109-m02)       <target type='serial' port='0'/>
	I0318 12:43:33.070328 1125718 main.go:141] libmachine: (ha-328109-m02)     </console>
	I0318 12:43:33.070335 1125718 main.go:141] libmachine: (ha-328109-m02)     <rng model='virtio'>
	I0318 12:43:33.070348 1125718 main.go:141] libmachine: (ha-328109-m02)       <backend model='random'>/dev/random</backend>
	I0318 12:43:33.070362 1125718 main.go:141] libmachine: (ha-328109-m02)     </rng>
	I0318 12:43:33.070370 1125718 main.go:141] libmachine: (ha-328109-m02)     
	I0318 12:43:33.070379 1125718 main.go:141] libmachine: (ha-328109-m02)     
	I0318 12:43:33.070388 1125718 main.go:141] libmachine: (ha-328109-m02)   </devices>
	I0318 12:43:33.070397 1125718 main.go:141] libmachine: (ha-328109-m02) </domain>
	I0318 12:43:33.070409 1125718 main.go:141] libmachine: (ha-328109-m02) 
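The XML logged line by line above is the libvirt domain the kvm2 driver defines for ha-328109-m02: 2 vCPUs and 2200 MiB of RAM, the boot2docker ISO attached as a cdrom boot device, the raw disk image as a virtio disk, two virtio NICs (one on the private mk-ha-328109 network, one on the default network), a pty serial console, and a virtio RNG. Once the domain has been created it can be inspected directly against the same qemu:///system URI named in the config, for example:

	virsh -c qemu:///system dumpxml ha-328109-m02      # the definition as libvirt stored it
	virsh -c qemu:///system domiflist ha-328109-m02    # both NICs with their MAC addresses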
	I0318 12:43:33.077496 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:cb:19:9f in network default
	I0318 12:43:33.078120 1125718 main.go:141] libmachine: (ha-328109-m02) Ensuring networks are active...
	I0318 12:43:33.078147 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:33.078924 1125718 main.go:141] libmachine: (ha-328109-m02) Ensuring network default is active
	I0318 12:43:33.079244 1125718 main.go:141] libmachine: (ha-328109-m02) Ensuring network mk-ha-328109 is active
	I0318 12:43:33.079670 1125718 main.go:141] libmachine: (ha-328109-m02) Getting domain xml...
	I0318 12:43:33.080417 1125718 main.go:141] libmachine: (ha-328109-m02) Creating domain...
	I0318 12:43:34.270519 1125718 main.go:141] libmachine: (ha-328109-m02) Waiting to get IP...
	I0318 12:43:34.271515 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:34.271960 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | unable to find current IP address of domain ha-328109-m02 in network mk-ha-328109
	I0318 12:43:34.272042 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | I0318 12:43:34.271943 1126085 retry.go:31] will retry after 217.561939ms: waiting for machine to come up
	I0318 12:43:34.491422 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:34.491931 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | unable to find current IP address of domain ha-328109-m02 in network mk-ha-328109
	I0318 12:43:34.491961 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | I0318 12:43:34.491888 1126085 retry.go:31] will retry after 331.528679ms: waiting for machine to come up
	I0318 12:43:34.825355 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:34.825869 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | unable to find current IP address of domain ha-328109-m02 in network mk-ha-328109
	I0318 12:43:34.825902 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | I0318 12:43:34.825819 1126085 retry.go:31] will retry after 333.550695ms: waiting for machine to come up
	I0318 12:43:35.161311 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:35.161753 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | unable to find current IP address of domain ha-328109-m02 in network mk-ha-328109
	I0318 12:43:35.161780 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | I0318 12:43:35.161669 1126085 retry.go:31] will retry after 412.760783ms: waiting for machine to come up
	I0318 12:43:35.576353 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:35.576818 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | unable to find current IP address of domain ha-328109-m02 in network mk-ha-328109
	I0318 12:43:35.576860 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | I0318 12:43:35.576756 1126085 retry.go:31] will retry after 592.586387ms: waiting for machine to come up
	I0318 12:43:36.170720 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:36.171261 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | unable to find current IP address of domain ha-328109-m02 in network mk-ha-328109
	I0318 12:43:36.171288 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | I0318 12:43:36.171227 1126085 retry.go:31] will retry after 796.14891ms: waiting for machine to come up
	I0318 12:43:36.969073 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:36.969526 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | unable to find current IP address of domain ha-328109-m02 in network mk-ha-328109
	I0318 12:43:36.969558 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | I0318 12:43:36.969475 1126085 retry.go:31] will retry after 1.038014819s: waiting for machine to come up
	I0318 12:43:38.008945 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:38.009370 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | unable to find current IP address of domain ha-328109-m02 in network mk-ha-328109
	I0318 12:43:38.009403 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | I0318 12:43:38.009329 1126085 retry.go:31] will retry after 1.268175144s: waiting for machine to come up
	I0318 12:43:39.279858 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:39.280351 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | unable to find current IP address of domain ha-328109-m02 in network mk-ha-328109
	I0318 12:43:39.280385 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | I0318 12:43:39.280304 1126085 retry.go:31] will retry after 1.56218765s: waiting for machine to come up
	I0318 12:43:40.845119 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:40.845518 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | unable to find current IP address of domain ha-328109-m02 in network mk-ha-328109
	I0318 12:43:40.845543 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | I0318 12:43:40.845472 1126085 retry.go:31] will retry after 2.041106676s: waiting for machine to come up
	I0318 12:43:42.888092 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:42.888602 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | unable to find current IP address of domain ha-328109-m02 in network mk-ha-328109
	I0318 12:43:42.888637 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | I0318 12:43:42.888540 1126085 retry.go:31] will retry after 1.790770419s: waiting for machine to come up
	I0318 12:43:44.681508 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:44.682058 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | unable to find current IP address of domain ha-328109-m02 in network mk-ha-328109
	I0318 12:43:44.682090 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | I0318 12:43:44.682013 1126085 retry.go:31] will retry after 2.583742639s: waiting for machine to come up
	I0318 12:43:47.268831 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:47.269314 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | unable to find current IP address of domain ha-328109-m02 in network mk-ha-328109
	I0318 12:43:47.269346 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | I0318 12:43:47.269239 1126085 retry.go:31] will retry after 3.343018853s: waiting for machine to come up
	I0318 12:43:50.615998 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:50.616403 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | unable to find current IP address of domain ha-328109-m02 in network mk-ha-328109
	I0318 12:43:50.616428 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | I0318 12:43:50.616358 1126085 retry.go:31] will retry after 4.746728365s: waiting for machine to come up
	I0318 12:43:55.366283 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:55.366789 1125718 main.go:141] libmachine: (ha-328109-m02) Found IP for machine: 192.168.39.246
	I0318 12:43:55.366830 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has current primary IP address 192.168.39.246 and MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:55.366836 1125718 main.go:141] libmachine: (ha-328109-m02) Reserving static IP address...
	I0318 12:43:55.367161 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | unable to find host DHCP lease matching {name: "ha-328109-m02", mac: "52:54:00:8c:b0:42", ip: "192.168.39.246"} in network mk-ha-328109
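The retry loop above is the driver polling libvirt for a DHCP lease on the mk-ha-328109 network, backing off from roughly 200ms to about 4.7s between attempts, until the m02 MAC address 52:54:00:8c:b0:42 shows up with an address (192.168.39.246 here, which is then reserved as a static lease). The same lease table can be read by hand while the loop runs, assuming access to the system libvirt socket:

	virsh -c qemu:///system net-dhcp-leases mk-ha-328109    # MAC, IP, hostname and expiry per active lease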
	I0318 12:43:55.441786 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | Getting to WaitForSSH function...
	I0318 12:43:55.441829 1125718 main.go:141] libmachine: (ha-328109-m02) Reserved static IP address: 192.168.39.246
	I0318 12:43:55.441863 1125718 main.go:141] libmachine: (ha-328109-m02) Waiting for SSH to be available...
	I0318 12:43:55.444551 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:55.445016 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:8c:b0:42", ip: ""} in network mk-ha-328109
	I0318 12:43:55.445047 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | unable to find defined IP address of network mk-ha-328109 interface with MAC address 52:54:00:8c:b0:42
	I0318 12:43:55.445157 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | Using SSH client type: external
	I0318 12:43:55.445200 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m02/id_rsa (-rw-------)
	I0318 12:43:55.445235 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 12:43:55.445250 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | About to run SSH command:
	I0318 12:43:55.445277 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | exit 0
	I0318 12:43:55.448798 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | SSH cmd err, output: exit status 255: 
	I0318 12:43:55.448821 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0318 12:43:55.448828 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | command : exit 0
	I0318 12:43:55.448833 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | err     : exit status 255
	I0318 12:43:55.448845 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | output  : 
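The exit status 255 above is expected on this first pass: the probe ran before the guest's address was known (note the empty host in "docker@" in the command line logged above), so ssh cannot connect yet. The probe libmachine keeps retrying is an ordinary ssh invocation and can be reproduced by hand once the IP is known; a minimal sketch with the key path and IP taken from the log:

	ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	    -i /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m02/id_rsa \
	    docker@192.168.39.246 'exit 0'; echo "exit=$?"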
	I0318 12:43:58.449369 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | Getting to WaitForSSH function...
	I0318 12:43:58.452205 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:58.452685 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:b0:42", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:43:48 +0000 UTC Type:0 Mac:52:54:00:8c:b0:42 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-328109-m02 Clientid:01:52:54:00:8c:b0:42}
	I0318 12:43:58.452724 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:58.452851 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | Using SSH client type: external
	I0318 12:43:58.452880 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m02/id_rsa (-rw-------)
	I0318 12:43:58.452918 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.246 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 12:43:58.452930 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | About to run SSH command:
	I0318 12:43:58.452945 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | exit 0
	I0318 12:43:58.580414 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | SSH cmd err, output: <nil>: 
	I0318 12:43:58.580656 1125718 main.go:141] libmachine: (ha-328109-m02) KVM machine creation complete!
	I0318 12:43:58.581324 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetConfigRaw
	I0318 12:43:58.581918 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .DriverName
	I0318 12:43:58.582151 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .DriverName
	I0318 12:43:58.582359 1125718 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0318 12:43:58.582374 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetState
	I0318 12:43:58.583601 1125718 main.go:141] libmachine: Detecting operating system of created instance...
	I0318 12:43:58.583615 1125718 main.go:141] libmachine: Waiting for SSH to be available...
	I0318 12:43:58.583621 1125718 main.go:141] libmachine: Getting to WaitForSSH function...
	I0318 12:43:58.583626 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHHostname
	I0318 12:43:58.585891 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:58.586214 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:b0:42", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:43:48 +0000 UTC Type:0 Mac:52:54:00:8c:b0:42 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-328109-m02 Clientid:01:52:54:00:8c:b0:42}
	I0318 12:43:58.586237 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:58.586367 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHPort
	I0318 12:43:58.586545 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHKeyPath
	I0318 12:43:58.586714 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHKeyPath
	I0318 12:43:58.586866 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHUsername
	I0318 12:43:58.587053 1125718 main.go:141] libmachine: Using SSH client type: native
	I0318 12:43:58.587334 1125718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0318 12:43:58.587350 1125718 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0318 12:43:58.699683 1125718 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 12:43:58.699711 1125718 main.go:141] libmachine: Detecting the provisioner...
	I0318 12:43:58.699719 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHHostname
	I0318 12:43:58.702567 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:58.702974 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:b0:42", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:43:48 +0000 UTC Type:0 Mac:52:54:00:8c:b0:42 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-328109-m02 Clientid:01:52:54:00:8c:b0:42}
	I0318 12:43:58.703003 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:58.703170 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHPort
	I0318 12:43:58.703387 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHKeyPath
	I0318 12:43:58.703565 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHKeyPath
	I0318 12:43:58.703681 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHUsername
	I0318 12:43:58.703904 1125718 main.go:141] libmachine: Using SSH client type: native
	I0318 12:43:58.704084 1125718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0318 12:43:58.704096 1125718 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0318 12:43:58.818089 1125718 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0318 12:43:58.818186 1125718 main.go:141] libmachine: found compatible host: buildroot
	I0318 12:43:58.818196 1125718 main.go:141] libmachine: Provisioning with buildroot...
	I0318 12:43:58.818204 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetMachineName
	I0318 12:43:58.818569 1125718 buildroot.go:166] provisioning hostname "ha-328109-m02"
	I0318 12:43:58.818600 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetMachineName
	I0318 12:43:58.818843 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHHostname
	I0318 12:43:58.822672 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:58.823042 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:b0:42", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:43:48 +0000 UTC Type:0 Mac:52:54:00:8c:b0:42 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-328109-m02 Clientid:01:52:54:00:8c:b0:42}
	I0318 12:43:58.823073 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:58.823212 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHPort
	I0318 12:43:58.823436 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHKeyPath
	I0318 12:43:58.823604 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHKeyPath
	I0318 12:43:58.823736 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHUsername
	I0318 12:43:58.823940 1125718 main.go:141] libmachine: Using SSH client type: native
	I0318 12:43:58.824142 1125718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0318 12:43:58.824170 1125718 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-328109-m02 && echo "ha-328109-m02" | sudo tee /etc/hostname
	I0318 12:43:58.951811 1125718 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-328109-m02
	
	I0318 12:43:58.951850 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHHostname
	I0318 12:43:58.954600 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:58.955009 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:b0:42", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:43:48 +0000 UTC Type:0 Mac:52:54:00:8c:b0:42 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-328109-m02 Clientid:01:52:54:00:8c:b0:42}
	I0318 12:43:58.955043 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:58.955214 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHPort
	I0318 12:43:58.955446 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHKeyPath
	I0318 12:43:58.955594 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHKeyPath
	I0318 12:43:58.955701 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHUsername
	I0318 12:43:58.955835 1125718 main.go:141] libmachine: Using SSH client type: native
	I0318 12:43:58.956041 1125718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0318 12:43:58.956067 1125718 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-328109-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-328109-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-328109-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 12:43:59.078710 1125718 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 12:43:59.078758 1125718 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18429-1106816/.minikube CaCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18429-1106816/.minikube}
	I0318 12:43:59.078784 1125718 buildroot.go:174] setting up certificates
	I0318 12:43:59.078799 1125718 provision.go:84] configureAuth start
	I0318 12:43:59.078817 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetMachineName
	I0318 12:43:59.079120 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetIP
	I0318 12:43:59.082111 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:59.082579 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:b0:42", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:43:48 +0000 UTC Type:0 Mac:52:54:00:8c:b0:42 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-328109-m02 Clientid:01:52:54:00:8c:b0:42}
	I0318 12:43:59.082610 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:59.082758 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHHostname
	I0318 12:43:59.085173 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:59.085539 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:b0:42", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:43:48 +0000 UTC Type:0 Mac:52:54:00:8c:b0:42 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-328109-m02 Clientid:01:52:54:00:8c:b0:42}
	I0318 12:43:59.085568 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:59.085721 1125718 provision.go:143] copyHostCerts
	I0318 12:43:59.085758 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 12:43:59.085808 1125718 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem, removing ...
	I0318 12:43:59.085822 1125718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 12:43:59.085923 1125718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem (1078 bytes)
	I0318 12:43:59.086039 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 12:43:59.086066 1125718 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem, removing ...
	I0318 12:43:59.086075 1125718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 12:43:59.086115 1125718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem (1123 bytes)
	I0318 12:43:59.086194 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 12:43:59.086221 1125718 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem, removing ...
	I0318 12:43:59.086227 1125718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 12:43:59.086264 1125718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem (1679 bytes)
	I0318 12:43:59.086350 1125718 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem org=jenkins.ha-328109-m02 san=[127.0.0.1 192.168.39.246 ha-328109-m02 localhost minikube]
	I0318 12:43:59.164641 1125718 provision.go:177] copyRemoteCerts
	I0318 12:43:59.164719 1125718 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 12:43:59.164752 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHHostname
	I0318 12:43:59.167335 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:59.167761 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:b0:42", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:43:48 +0000 UTC Type:0 Mac:52:54:00:8c:b0:42 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-328109-m02 Clientid:01:52:54:00:8c:b0:42}
	I0318 12:43:59.167800 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:59.167941 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHPort
	I0318 12:43:59.168138 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHKeyPath
	I0318 12:43:59.168266 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHUsername
	I0318 12:43:59.168392 1125718 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m02/id_rsa Username:docker}
	I0318 12:43:59.256598 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0318 12:43:59.256686 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 12:43:59.284362 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0318 12:43:59.284460 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0318 12:43:59.310409 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0318 12:43:59.310498 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 12:43:59.336202 1125718 provision.go:87] duration metric: took 257.380191ms to configureAuth
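configureAuth has now generated a server certificate for the new node (SANs 127.0.0.1, 192.168.39.246, ha-328109-m02, localhost and minikube, per the provision.go line above) and copied ca.pem, server.pem and server-key.pem into /etc/docker on the guest. One way to confirm the SAN list on the node, assuming openssl is available inside the guest image, would be:

	sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'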
	I0318 12:43:59.336242 1125718 buildroot.go:189] setting minikube options for container-runtime
	I0318 12:43:59.336462 1125718 config.go:182] Loaded profile config "ha-328109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:43:59.336584 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHHostname
	I0318 12:43:59.339229 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:59.339572 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:b0:42", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:43:48 +0000 UTC Type:0 Mac:52:54:00:8c:b0:42 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-328109-m02 Clientid:01:52:54:00:8c:b0:42}
	I0318 12:43:59.339607 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:59.339859 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHPort
	I0318 12:43:59.340064 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHKeyPath
	I0318 12:43:59.340234 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHKeyPath
	I0318 12:43:59.340378 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHUsername
	I0318 12:43:59.340538 1125718 main.go:141] libmachine: Using SSH client type: native
	I0318 12:43:59.340707 1125718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0318 12:43:59.340727 1125718 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 12:43:59.627029 1125718 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 12:43:59.627063 1125718 main.go:141] libmachine: Checking connection to Docker...
	I0318 12:43:59.627071 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetURL
	I0318 12:43:59.628413 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | Using libvirt version 6000000
	I0318 12:43:59.630858 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:59.631225 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:b0:42", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:43:48 +0000 UTC Type:0 Mac:52:54:00:8c:b0:42 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-328109-m02 Clientid:01:52:54:00:8c:b0:42}
	I0318 12:43:59.631268 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:59.631491 1125718 main.go:141] libmachine: Docker is up and running!
	I0318 12:43:59.631511 1125718 main.go:141] libmachine: Reticulating splines...
	I0318 12:43:59.631519 1125718 client.go:171] duration metric: took 26.900616699s to LocalClient.Create
	I0318 12:43:59.631542 1125718 start.go:167] duration metric: took 26.900683726s to libmachine.API.Create "ha-328109"
	I0318 12:43:59.631553 1125718 start.go:293] postStartSetup for "ha-328109-m02" (driver="kvm2")
	I0318 12:43:59.631563 1125718 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 12:43:59.631591 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .DriverName
	I0318 12:43:59.631837 1125718 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 12:43:59.631866 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHHostname
	I0318 12:43:59.634073 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:59.634465 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:b0:42", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:43:48 +0000 UTC Type:0 Mac:52:54:00:8c:b0:42 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-328109-m02 Clientid:01:52:54:00:8c:b0:42}
	I0318 12:43:59.634493 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:59.634672 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHPort
	I0318 12:43:59.634838 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHKeyPath
	I0318 12:43:59.635006 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHUsername
	I0318 12:43:59.635141 1125718 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m02/id_rsa Username:docker}
	I0318 12:43:59.719880 1125718 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 12:43:59.724734 1125718 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 12:43:59.724765 1125718 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/addons for local assets ...
	I0318 12:43:59.724836 1125718 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/files for local assets ...
	I0318 12:43:59.724941 1125718 filesync.go:149] local asset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> 11141362.pem in /etc/ssl/certs
	I0318 12:43:59.724955 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> /etc/ssl/certs/11141362.pem
	I0318 12:43:59.725063 1125718 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 12:43:59.735849 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 12:43:59.763701 1125718 start.go:296] duration metric: took 132.132457ms for postStartSetup
	I0318 12:43:59.763785 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetConfigRaw
	I0318 12:43:59.764500 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetIP
	I0318 12:43:59.766957 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:59.767368 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:b0:42", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:43:48 +0000 UTC Type:0 Mac:52:54:00:8c:b0:42 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-328109-m02 Clientid:01:52:54:00:8c:b0:42}
	I0318 12:43:59.767398 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:59.767661 1125718 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/config.json ...
	I0318 12:43:59.767873 1125718 start.go:128] duration metric: took 27.054886871s to createHost
	I0318 12:43:59.767902 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHHostname
	I0318 12:43:59.770002 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:59.770239 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:b0:42", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:43:48 +0000 UTC Type:0 Mac:52:54:00:8c:b0:42 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-328109-m02 Clientid:01:52:54:00:8c:b0:42}
	I0318 12:43:59.770265 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:59.770374 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHPort
	I0318 12:43:59.770568 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHKeyPath
	I0318 12:43:59.770733 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHKeyPath
	I0318 12:43:59.770854 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHUsername
	I0318 12:43:59.771011 1125718 main.go:141] libmachine: Using SSH client type: native
	I0318 12:43:59.771179 1125718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0318 12:43:59.771190 1125718 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 12:43:59.881610 1125718 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710765839.855499588
	
	I0318 12:43:59.881635 1125718 fix.go:216] guest clock: 1710765839.855499588
	I0318 12:43:59.881643 1125718 fix.go:229] Guest: 2024-03-18 12:43:59.855499588 +0000 UTC Remote: 2024-03-18 12:43:59.767886325 +0000 UTC m=+86.924566388 (delta=87.613263ms)
	I0318 12:43:59.881660 1125718 fix.go:200] guest clock delta is within tolerance: 87.613263ms
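(Editor's note: the fix.go lines above compare the guest VM's wall clock against the host's and accept the machine when the skew is within a tolerance. The Go sketch below illustrates that kind of check; the withinTolerance helper and the 2-second tolerance are assumptions for illustration, not minikube's actual code or threshold.)

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the skew between the guest and host clocks
// is acceptable. Helper name and the 2s tolerance are illustrative assumptions.
func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(87 * time.Millisecond) // roughly the delta recorded in the log above
	if delta, ok := withinTolerance(guest, host, 2*time.Second); ok {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	}
}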
	I0318 12:43:59.881665 1125718 start.go:83] releasing machines lock for "ha-328109-m02", held for 27.168768398s
	I0318 12:43:59.881687 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .DriverName
	I0318 12:43:59.881991 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetIP
	I0318 12:43:59.884387 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:59.884709 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:b0:42", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:43:48 +0000 UTC Type:0 Mac:52:54:00:8c:b0:42 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-328109-m02 Clientid:01:52:54:00:8c:b0:42}
	I0318 12:43:59.884738 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:59.887185 1125718 out.go:177] * Found network options:
	I0318 12:43:59.888664 1125718 out.go:177]   - NO_PROXY=192.168.39.253
	W0318 12:43:59.890067 1125718 proxy.go:119] fail to check proxy env: Error ip not in block
	I0318 12:43:59.890093 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .DriverName
	I0318 12:43:59.890590 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .DriverName
	I0318 12:43:59.890776 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .DriverName
	I0318 12:43:59.890894 1125718 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 12:43:59.890937 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHHostname
	W0318 12:43:59.891024 1125718 proxy.go:119] fail to check proxy env: Error ip not in block
	I0318 12:43:59.891121 1125718 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 12:43:59.891150 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHHostname
	I0318 12:43:59.893802 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:59.894029 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:59.894197 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:b0:42", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:43:48 +0000 UTC Type:0 Mac:52:54:00:8c:b0:42 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-328109-m02 Clientid:01:52:54:00:8c:b0:42}
	I0318 12:43:59.894227 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:59.894402 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:b0:42", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:43:48 +0000 UTC Type:0 Mac:52:54:00:8c:b0:42 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-328109-m02 Clientid:01:52:54:00:8c:b0:42}
	I0318 12:43:59.894417 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHPort
	I0318 12:43:59.894424 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:59.894565 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHPort
	I0318 12:43:59.894641 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHKeyPath
	I0318 12:43:59.894716 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHKeyPath
	I0318 12:43:59.894837 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHUsername
	I0318 12:43:59.894882 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHUsername
	I0318 12:43:59.894982 1125718 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m02/id_rsa Username:docker}
	I0318 12:43:59.895018 1125718 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m02/id_rsa Username:docker}
	I0318 12:44:00.137444 1125718 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 12:44:00.144976 1125718 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 12:44:00.145065 1125718 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 12:44:00.164076 1125718 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 12:44:00.164108 1125718 start.go:494] detecting cgroup driver to use...
	I0318 12:44:00.164200 1125718 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 12:44:00.182516 1125718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 12:44:00.197623 1125718 docker.go:217] disabling cri-docker service (if available) ...
	I0318 12:44:00.197696 1125718 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 12:44:00.211897 1125718 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 12:44:00.227180 1125718 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 12:44:00.345865 1125718 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 12:44:00.502719 1125718 docker.go:233] disabling docker service ...
	I0318 12:44:00.502809 1125718 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 12:44:00.519062 1125718 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 12:44:00.533347 1125718 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 12:44:00.696212 1125718 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 12:44:00.847684 1125718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 12:44:00.863668 1125718 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 12:44:00.884184 1125718 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 12:44:00.884265 1125718 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 12:44:00.896228 1125718 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 12:44:00.896307 1125718 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 12:44:00.908261 1125718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 12:44:00.920135 1125718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 12:44:00.931813 1125718 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 12:44:00.943845 1125718 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 12:44:00.954328 1125718 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 12:44:00.954392 1125718 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 12:44:00.968796 1125718 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 12:44:00.980362 1125718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:44:01.108291 1125718 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 12:44:01.258438 1125718 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 12:44:01.258528 1125718 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 12:44:01.264184 1125718 start.go:562] Will wait 60s for crictl version
	I0318 12:44:01.264242 1125718 ssh_runner.go:195] Run: which crictl
	I0318 12:44:01.268679 1125718 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 12:44:01.309083 1125718 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 12:44:01.309175 1125718 ssh_runner.go:195] Run: crio --version
	I0318 12:44:01.341688 1125718 ssh_runner.go:195] Run: crio --version
	I0318 12:44:01.380685 1125718 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 12:44:01.382079 1125718 out.go:177]   - env NO_PROXY=192.168.39.253
	I0318 12:44:01.383399 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetIP
	I0318 12:44:01.386301 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:44:01.386676 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:b0:42", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:43:48 +0000 UTC Type:0 Mac:52:54:00:8c:b0:42 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-328109-m02 Clientid:01:52:54:00:8c:b0:42}
	I0318 12:44:01.386723 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:44:01.386967 1125718 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0318 12:44:01.391996 1125718 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 12:44:01.409426 1125718 mustload.go:65] Loading cluster: ha-328109
	I0318 12:44:01.409694 1125718 config.go:182] Loaded profile config "ha-328109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:44:01.410161 1125718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:44:01.410228 1125718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:44:01.425513 1125718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45969
	I0318 12:44:01.425961 1125718 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:44:01.426459 1125718 main.go:141] libmachine: Using API Version  1
	I0318 12:44:01.426481 1125718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:44:01.426843 1125718 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:44:01.427092 1125718 main.go:141] libmachine: (ha-328109) Calling .GetState
	I0318 12:44:01.428595 1125718 host.go:66] Checking if "ha-328109" exists ...
	I0318 12:44:01.428929 1125718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:44:01.428971 1125718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:44:01.443790 1125718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38301
	I0318 12:44:01.444217 1125718 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:44:01.444747 1125718 main.go:141] libmachine: Using API Version  1
	I0318 12:44:01.444767 1125718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:44:01.445079 1125718 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:44:01.445301 1125718 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:44:01.445442 1125718 certs.go:68] Setting up /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109 for IP: 192.168.39.246
	I0318 12:44:01.445455 1125718 certs.go:194] generating shared ca certs ...
	I0318 12:44:01.445471 1125718 certs.go:226] acquiring lock for ca certs: {Name:mka24dbce78b8549c3bcfe7842863cce2dcda207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:44:01.445601 1125718 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key
	I0318 12:44:01.445640 1125718 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key
	I0318 12:44:01.445657 1125718 certs.go:256] generating profile certs ...
	I0318 12:44:01.445745 1125718 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/client.key
	I0318 12:44:01.445770 1125718 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key.57b3cb16
	I0318 12:44:01.445785 1125718 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt.57b3cb16 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.253 192.168.39.246 192.168.39.254]
	I0318 12:44:01.606268 1125718 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt.57b3cb16 ...
	I0318 12:44:01.606317 1125718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt.57b3cb16: {Name:mk2a28886f0cf302e67691064ed3f588dbab180f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:44:01.606591 1125718 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key.57b3cb16 ...
	I0318 12:44:01.606622 1125718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key.57b3cb16: {Name:mkd5a53db774063ba21335a4cd03a90a402d3183 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:44:01.606756 1125718 certs.go:381] copying /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt.57b3cb16 -> /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt
	I0318 12:44:01.606948 1125718 certs.go:385] copying /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key.57b3cb16 -> /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key
	I0318 12:44:01.607103 1125718 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/proxy-client.key
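(Editor's note: the apiserver serving certificate generated above carries IP SANs for the in-cluster service addresses, localhost, both control-plane node IPs, and the kube-vip address 192.168.39.254. Below is a minimal, self-contained Go sketch of issuing a certificate with those IP SANs; it self-signs for brevity, whereas minikube signs with its profile CA, so treat it as an illustration only.)

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Illustrative only: a self-signed server cert carrying the SANs seen in the log above.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.253"), net.ParseIP("192.168.39.246"), net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}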
	I0318 12:44:01.607121 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 12:44:01.607134 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0318 12:44:01.607149 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 12:44:01.607162 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 12:44:01.607175 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0318 12:44:01.607189 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0318 12:44:01.607207 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0318 12:44:01.607219 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0318 12:44:01.607268 1125718 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem (1338 bytes)
	W0318 12:44:01.607296 1125718 certs.go:480] ignoring /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136_empty.pem, impossibly tiny 0 bytes
	I0318 12:44:01.607307 1125718 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 12:44:01.607327 1125718 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem (1078 bytes)
	I0318 12:44:01.607350 1125718 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem (1123 bytes)
	I0318 12:44:01.607374 1125718 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem (1679 bytes)
	I0318 12:44:01.607416 1125718 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 12:44:01.607442 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:44:01.607456 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem -> /usr/share/ca-certificates/1114136.pem
	I0318 12:44:01.607470 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> /usr/share/ca-certificates/11141362.pem
	I0318 12:44:01.607503 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:44:01.610632 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:44:01.611078 1125718 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:44:01.611115 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:44:01.611239 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:44:01.611436 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:44:01.611610 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:44:01.611782 1125718 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109/id_rsa Username:docker}
	I0318 12:44:01.684688 1125718 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0318 12:44:01.690075 1125718 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0318 12:44:01.703204 1125718 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0318 12:44:01.707974 1125718 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0318 12:44:01.720905 1125718 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0318 12:44:01.726186 1125718 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0318 12:44:01.740107 1125718 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0318 12:44:01.744851 1125718 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0318 12:44:01.758148 1125718 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0318 12:44:01.762774 1125718 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0318 12:44:01.775671 1125718 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0318 12:44:01.786778 1125718 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0318 12:44:01.799266 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 12:44:01.827937 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 12:44:01.856117 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 12:44:01.883951 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 12:44:01.911446 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0318 12:44:01.939348 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 12:44:01.967444 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 12:44:01.994558 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 12:44:02.021422 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 12:44:02.048126 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem --> /usr/share/ca-certificates/1114136.pem (1338 bytes)
	I0318 12:44:02.076337 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /usr/share/ca-certificates/11141362.pem (1708 bytes)
	I0318 12:44:02.105324 1125718 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0318 12:44:02.124464 1125718 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0318 12:44:02.144320 1125718 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0318 12:44:02.165003 1125718 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0318 12:44:02.185268 1125718 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0318 12:44:02.204456 1125718 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0318 12:44:02.223428 1125718 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0318 12:44:02.244452 1125718 ssh_runner.go:195] Run: openssl version
	I0318 12:44:02.252358 1125718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 12:44:02.265498 1125718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:44:02.270971 1125718 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:44:02.271039 1125718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:44:02.277641 1125718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 12:44:02.289405 1125718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1114136.pem && ln -fs /usr/share/ca-certificates/1114136.pem /etc/ssl/certs/1114136.pem"
	I0318 12:44:02.301135 1125718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1114136.pem
	I0318 12:44:02.306218 1125718 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:26 /usr/share/ca-certificates/1114136.pem
	I0318 12:44:02.306278 1125718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1114136.pem
	I0318 12:44:02.312875 1125718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1114136.pem /etc/ssl/certs/51391683.0"
	I0318 12:44:02.325791 1125718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11141362.pem && ln -fs /usr/share/ca-certificates/11141362.pem /etc/ssl/certs/11141362.pem"
	I0318 12:44:02.337824 1125718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11141362.pem
	I0318 12:44:02.343152 1125718 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:26 /usr/share/ca-certificates/11141362.pem
	I0318 12:44:02.343221 1125718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11141362.pem
	I0318 12:44:02.349879 1125718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11141362.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 12:44:02.362348 1125718 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 12:44:02.367336 1125718 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 12:44:02.367487 1125718 kubeadm.go:928] updating node {m02 192.168.39.246 8443 v1.28.4 crio true true} ...
	I0318 12:44:02.367627 1125718 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-328109-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-328109 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 12:44:02.367700 1125718 kube-vip.go:111] generating kube-vip config ...
	I0318 12:44:02.367755 1125718 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0318 12:44:02.388716 1125718 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0318 12:44:02.388806 1125718 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
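(Editor's note: the YAML above is the static pod manifest minikube generates for kube-vip; later in this log it is copied to /etc/kubernetes/manifests/kube-vip.yaml so the kubelet runs it on the joining control-plane node. The Go sketch below just decodes such a manifest into a corev1.Pod as a sanity check; the local kube-vip.yaml path is an assumption for the example, not part of minikube's flow.)

package main

import (
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Assumes the manifest shown in the log above was saved locally as kube-vip.yaml.
	data, err := os.ReadFile("kube-vip.yaml")
	if err != nil {
		panic(err)
	}
	var pod corev1.Pod
	if err := yaml.Unmarshal(data, &pod); err != nil {
		panic(err)
	}
	fmt.Printf("static pod %s/%s: image=%s hostNetwork=%v\n",
		pod.Namespace, pod.Name, pod.Spec.Containers[0].Image, pod.Spec.HostNetwork)
}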
	I0318 12:44:02.388861 1125718 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 12:44:02.399783 1125718 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0318 12:44:02.399848 1125718 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0318 12:44:02.410764 1125718 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/linux/amd64/v1.28.4/kubeadm
	I0318 12:44:02.410796 1125718 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0318 12:44:02.410825 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0318 12:44:02.410823 1125718 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/linux/amd64/v1.28.4/kubelet
	I0318 12:44:02.410912 1125718 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0318 12:44:02.416963 1125718 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0318 12:44:02.416999 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0318 12:44:03.510507 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0318 12:44:03.510618 1125718 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0318 12:44:03.517672 1125718 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0318 12:44:03.517711 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0318 12:44:04.164904 1125718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 12:44:04.181748 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0318 12:44:04.181880 1125718 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0318 12:44:04.187045 1125718 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0318 12:44:04.187095 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
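(Editor's note: the kubeadm, kubectl and kubelet binaries above are fetched from dl.k8s.io with a checksum URL and copied to the node once a stat check shows they are missing. The Go sketch below shows the general download-and-verify pattern; the destination path and the zeroed digest are placeholders, and it omits the caching hinted at in the log.)

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadAndVerify fetches url into dest and checks its SHA-256 against wantHex.
// A simplified sketch of the pattern, not minikube's download package.
func downloadAndVerify(url, dest, wantHex string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
	}
	return nil
}

func main() {
	// Placeholder digest; the real value comes from the .sha256 file next to the binary.
	_ = downloadAndVerify("https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm",
		"kubeadm", "0000000000000000000000000000000000000000000000000000000000000000")
}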
	I0318 12:44:04.702348 1125718 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0318 12:44:04.713314 1125718 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0318 12:44:04.732471 1125718 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 12:44:04.751338 1125718 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0318 12:44:04.769620 1125718 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0318 12:44:04.774028 1125718 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 12:44:04.787446 1125718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:44:04.926444 1125718 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 12:44:04.946200 1125718 host.go:66] Checking if "ha-328109" exists ...
	I0318 12:44:04.946580 1125718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:44:04.946658 1125718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:44:04.962384 1125718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33939
	I0318 12:44:04.962863 1125718 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:44:04.963381 1125718 main.go:141] libmachine: Using API Version  1
	I0318 12:44:04.963405 1125718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:44:04.963710 1125718 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:44:04.963898 1125718 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:44:04.964061 1125718 start.go:316] joinCluster: &{Name:ha-328109 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-328109 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.253 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 12:44:04.964158 1125718 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0318 12:44:04.964186 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:44:04.967207 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:44:04.967685 1125718 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:44:04.967718 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:44:04.967824 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:44:04.968018 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:44:04.968200 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:44:04.968357 1125718 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109/id_rsa Username:docker}
	I0318 12:44:05.151602 1125718 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 12:44:05.151654 1125718 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zv8kcu.ksllqv02tca6xo0j --discovery-token-ca-cert-hash sha256:a1344f0f3711f58fc1cd7b9626e9cee7b8515c38b8838e5f1a0d7979c7202ddf --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-328109-m02 --control-plane --apiserver-advertise-address=192.168.39.246 --apiserver-bind-port=8443"
	I0318 12:44:39.138740 1125718 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zv8kcu.ksllqv02tca6xo0j --discovery-token-ca-cert-hash sha256:a1344f0f3711f58fc1cd7b9626e9cee7b8515c38b8838e5f1a0d7979c7202ddf --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-328109-m02 --control-plane --apiserver-advertise-address=192.168.39.246 --apiserver-bind-port=8443": (33.987054913s)
	I0318 12:44:39.138786 1125718 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0318 12:44:39.701427 1125718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-328109-m02 minikube.k8s.io/updated_at=2024_03_18T12_44_39_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a minikube.k8s.io/name=ha-328109 minikube.k8s.io/primary=false
	I0318 12:44:39.834529 1125718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-328109-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0318 12:44:39.989005 1125718 start.go:318] duration metric: took 35.024938427s to joinCluster
	I0318 12:44:39.989090 1125718 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 12:44:39.990643 1125718 out.go:177] * Verifying Kubernetes components...
	I0318 12:44:39.989430 1125718 config.go:182] Loaded profile config "ha-328109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:44:39.992082 1125718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:44:40.172689 1125718 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 12:44:40.191427 1125718 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 12:44:40.191790 1125718 kapi.go:59] client config for ha-328109: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/client.crt", KeyFile:"/home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/client.key", CAFile:"/home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c57de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0318 12:44:40.191918 1125718 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.253:8443
	I0318 12:44:40.192125 1125718 node_ready.go:35] waiting up to 6m0s for node "ha-328109-m02" to be "Ready" ...
	I0318 12:44:40.192223 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:40.192232 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:40.192240 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:40.192243 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:40.206292 1125718 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0318 12:44:40.692492 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:40.692529 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:40.692541 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:40.692545 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:40.698779 1125718 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 12:44:41.193389 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:41.193414 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:41.193422 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:41.193427 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:41.197225 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:41.692661 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:41.692692 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:41.692704 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:41.692710 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:41.696648 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:42.192751 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:42.192776 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:42.192784 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:42.192789 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:42.198091 1125718 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:44:42.198596 1125718 node_ready.go:53] node "ha-328109-m02" has status "Ready":"False"
	I0318 12:44:42.693055 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:42.693079 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:42.693087 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:42.693091 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:42.698215 1125718 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:44:43.192659 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:43.192683 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:43.192691 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:43.192696 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:43.197222 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:43.693379 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:43.693412 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:43.693424 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:43.693433 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:43.697459 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:44.192535 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:44.192568 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:44.192579 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:44.192583 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:44.196711 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:44.693164 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:44.693194 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:44.693208 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:44.693214 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:44.698004 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:44.698909 1125718 node_ready.go:53] node "ha-328109-m02" has status "Ready":"False"
	I0318 12:44:45.192741 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:45.192773 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:45.192785 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:45.192791 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:45.196128 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:45.692690 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:45.692720 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:45.692735 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:45.692741 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:45.699442 1125718 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 12:44:46.192344 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:46.192374 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:46.192395 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:46.192403 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:46.196573 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:46.692666 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:46.692690 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:46.692698 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:46.692702 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:46.696496 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:47.192551 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:47.192582 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:47.192593 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:47.192600 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:47.197378 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:47.198011 1125718 node_ready.go:53] node "ha-328109-m02" has status "Ready":"False"
	I0318 12:44:47.693331 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:47.693355 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:47.693363 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:47.693367 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:47.696989 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:47.697723 1125718 node_ready.go:49] node "ha-328109-m02" has status "Ready":"True"
	I0318 12:44:47.697751 1125718 node_ready.go:38] duration metric: took 7.505598296s for node "ha-328109-m02" to be "Ready" ...
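The repeated GETs above against /api/v1/nodes/ha-328109-m02 are the node_ready wait: poll the Node object until its NodeReady condition reports True. A minimal client-go sketch of that pattern follows (helper and profile names are illustrative only, not minikube's actual implementation):

package main

import (
    "context"
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the named Node until its NodeReady condition is True,
// mirroring the GET loop in the log above.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err == nil {
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                    return nil
                }
            }
        }
        time.Sleep(500 * time.Millisecond) // roughly the request cadence seen in the log
    }
    return fmt.Errorf("node %q did not become Ready within %s", name, timeout)
}

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    cs := kubernetes.NewForConfigOrDie(cfg)
    if err := waitNodeReady(context.Background(), cs, "ha-328109-m02", 6*time.Minute); err != nil {
        panic(err)
    }
    fmt.Println("node is Ready")
}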
	I0318 12:44:47.697763 1125718 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 12:44:47.697888 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods
	I0318 12:44:47.697902 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:47.697913 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:47.697920 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:47.702944 1125718 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:44:47.710789 1125718 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-c78nc" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:47.710880 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-c78nc
	I0318 12:44:47.710891 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:47.710898 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:47.710903 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:47.713878 1125718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:44:47.714652 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109
	I0318 12:44:47.714669 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:47.714676 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:47.714680 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:47.717722 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:47.718289 1125718 pod_ready.go:92] pod "coredns-5dd5756b68-c78nc" in "kube-system" namespace has status "Ready":"True"
	I0318 12:44:47.718309 1125718 pod_ready.go:81] duration metric: took 7.495849ms for pod "coredns-5dd5756b68-c78nc" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:47.718317 1125718 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-p5xgj" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:47.718372 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-p5xgj
	I0318 12:44:47.718383 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:47.718392 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:47.718397 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:47.721374 1125718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:44:47.722079 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109
	I0318 12:44:47.722096 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:47.722103 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:47.722106 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:47.725001 1125718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:44:47.725754 1125718 pod_ready.go:92] pod "coredns-5dd5756b68-p5xgj" in "kube-system" namespace has status "Ready":"True"
	I0318 12:44:47.725775 1125718 pod_ready.go:81] duration metric: took 7.449872ms for pod "coredns-5dd5756b68-p5xgj" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:47.725786 1125718 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-328109" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:47.725849 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/etcd-ha-328109
	I0318 12:44:47.725860 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:47.725869 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:47.725873 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:47.728740 1125718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:44:47.729338 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109
	I0318 12:44:47.729362 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:47.729372 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:47.729377 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:47.731600 1125718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:44:47.732193 1125718 pod_ready.go:92] pod "etcd-ha-328109" in "kube-system" namespace has status "Ready":"True"
	I0318 12:44:47.732216 1125718 pod_ready.go:81] duration metric: took 6.421921ms for pod "etcd-ha-328109" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:47.732226 1125718 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-328109-m02" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:47.732284 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/etcd-ha-328109-m02
	I0318 12:44:47.732294 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:47.732304 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:47.732310 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:47.735172 1125718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:44:47.736170 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:47.736184 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:47.736191 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:47.736194 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:47.738645 1125718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:44:48.232575 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/etcd-ha-328109-m02
	I0318 12:44:48.232600 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:48.232608 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:48.232612 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:48.236249 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:48.237186 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:48.237201 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:48.237208 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:48.237212 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:48.240192 1125718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:44:48.732775 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/etcd-ha-328109-m02
	I0318 12:44:48.732806 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:48.732817 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:48.732821 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:48.737420 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:48.738187 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:48.738204 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:48.738211 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:48.738215 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:48.742669 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:48.743335 1125718 pod_ready.go:92] pod "etcd-ha-328109-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 12:44:48.743361 1125718 pod_ready.go:81] duration metric: took 1.011124464s for pod "etcd-ha-328109-m02" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:48.743375 1125718 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-328109" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:48.743438 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-328109
	I0318 12:44:48.743449 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:48.743457 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:48.743460 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:48.746224 1125718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:44:48.894148 1125718 request.go:629] Waited for 147.344101ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/nodes/ha-328109
	I0318 12:44:48.894248 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109
	I0318 12:44:48.894255 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:48.894266 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:48.894277 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:48.897962 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:48.898694 1125718 pod_ready.go:92] pod "kube-apiserver-ha-328109" in "kube-system" namespace has status "Ready":"True"
	I0318 12:44:48.898715 1125718 pod_ready.go:81] duration metric: took 155.333585ms for pod "kube-apiserver-ha-328109" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:48.898724 1125718 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-328109-m02" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:49.094198 1125718 request.go:629] Waited for 195.388871ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-328109-m02
	I0318 12:44:49.094263 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-328109-m02
	I0318 12:44:49.094268 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:49.094275 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:49.094279 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:49.097625 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:49.293669 1125718 request.go:629] Waited for 195.236067ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:49.293765 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:49.293777 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:49.293786 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:49.293793 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:49.298217 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:49.299392 1125718 pod_ready.go:92] pod "kube-apiserver-ha-328109-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 12:44:49.299414 1125718 pod_ready.go:81] duration metric: took 400.680904ms for pod "kube-apiserver-ha-328109-m02" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:49.299426 1125718 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-328109" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:49.493454 1125718 request.go:629] Waited for 193.947293ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-328109
	I0318 12:44:49.493574 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-328109
	I0318 12:44:49.493590 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:49.493602 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:49.493608 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:49.497308 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:49.693548 1125718 request.go:629] Waited for 195.307312ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/nodes/ha-328109
	I0318 12:44:49.693644 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109
	I0318 12:44:49.693651 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:49.693661 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:49.693669 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:49.697135 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:49.697952 1125718 pod_ready.go:92] pod "kube-controller-manager-ha-328109" in "kube-system" namespace has status "Ready":"True"
	I0318 12:44:49.697973 1125718 pod_ready.go:81] duration metric: took 398.539609ms for pod "kube-controller-manager-ha-328109" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:49.697982 1125718 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-328109-m02" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:49.894099 1125718 request.go:629] Waited for 196.011338ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-328109-m02
	I0318 12:44:49.894162 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-328109-m02
	I0318 12:44:49.894168 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:49.894175 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:49.894180 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:49.898008 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:50.094307 1125718 request.go:629] Waited for 195.496656ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:50.094392 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:50.094402 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:50.094410 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:50.094417 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:50.097499 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:50.098204 1125718 pod_ready.go:92] pod "kube-controller-manager-ha-328109-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 12:44:50.098227 1125718 pod_ready.go:81] duration metric: took 400.237571ms for pod "kube-controller-manager-ha-328109-m02" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:50.098241 1125718 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7zgrx" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:50.294349 1125718 request.go:629] Waited for 196.013287ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7zgrx
	I0318 12:44:50.294441 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7zgrx
	I0318 12:44:50.294449 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:50.294463 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:50.294477 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:50.297723 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:50.493914 1125718 request.go:629] Waited for 195.4196ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:50.493994 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:50.494005 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:50.494021 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:50.494031 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:50.497664 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:50.498503 1125718 pod_ready.go:92] pod "kube-proxy-7zgrx" in "kube-system" namespace has status "Ready":"True"
	I0318 12:44:50.498521 1125718 pod_ready.go:81] duration metric: took 400.273288ms for pod "kube-proxy-7zgrx" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:50.498531 1125718 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dhz88" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:50.693698 1125718 request.go:629] Waited for 195.074788ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dhz88
	I0318 12:44:50.693758 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dhz88
	I0318 12:44:50.693764 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:50.693771 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:50.693777 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:50.698606 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:50.893836 1125718 request.go:629] Waited for 193.401828ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/nodes/ha-328109
	I0318 12:44:50.893896 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109
	I0318 12:44:50.893900 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:50.893908 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:50.893912 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:50.897629 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:50.898245 1125718 pod_ready.go:92] pod "kube-proxy-dhz88" in "kube-system" namespace has status "Ready":"True"
	I0318 12:44:50.898264 1125718 pod_ready.go:81] duration metric: took 399.727875ms for pod "kube-proxy-dhz88" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:50.898274 1125718 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-328109" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:51.093420 1125718 request.go:629] Waited for 195.052227ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-328109
	I0318 12:44:51.093485 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-328109
	I0318 12:44:51.093493 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:51.093505 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:51.093512 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:51.096646 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:51.293509 1125718 request.go:629] Waited for 196.299831ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/nodes/ha-328109
	I0318 12:44:51.293598 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109
	I0318 12:44:51.293607 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:51.293618 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:51.293630 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:51.297967 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:51.298750 1125718 pod_ready.go:92] pod "kube-scheduler-ha-328109" in "kube-system" namespace has status "Ready":"True"
	I0318 12:44:51.298772 1125718 pod_ready.go:81] duration metric: took 400.491192ms for pod "kube-scheduler-ha-328109" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:51.298781 1125718 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-328109-m02" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:51.493645 1125718 request.go:629] Waited for 194.786135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-328109-m02
	I0318 12:44:51.493711 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-328109-m02
	I0318 12:44:51.493718 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:51.493726 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:51.493731 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:51.497700 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:51.693421 1125718 request.go:629] Waited for 195.087932ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:51.693487 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:51.693492 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:51.693500 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:51.693504 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:51.697469 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:51.698151 1125718 pod_ready.go:92] pod "kube-scheduler-ha-328109-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 12:44:51.698187 1125718 pod_ready.go:81] duration metric: took 399.397805ms for pod "kube-scheduler-ha-328109-m02" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:51.698202 1125718 pod_ready.go:38] duration metric: took 4.000391721s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
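The pod_ready phase above applies the same idea per pod: fetch each system pod (and its node) and treat it as Ready once its PodReady condition is True. A hedged client-go sketch of that check (package and function names are illustrative, not minikube's code):

package readiness

import (
    "context"
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(p *corev1.Pod) bool {
    for _, c := range p.Status.Conditions {
        if c.Type == corev1.PodReady {
            return c.Status == corev1.ConditionTrue
        }
    }
    return false
}

// WaitSystemPods polls kube-system pods matching selector (e.g. "component=etcd")
// until all of them are Ready or the timeout expires.
func WaitSystemPods(ctx context.Context, cs *kubernetes.Clientset, selector string, timeout time.Duration) error {
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
        if err == nil {
            allReady := true
            for i := range pods.Items {
                if !isPodReady(&pods.Items[i]) {
                    allReady = false
                    break
                }
            }
            if allReady {
                return nil
            }
        }
        time.Sleep(500 * time.Millisecond)
    }
    return fmt.Errorf("pods %q not Ready within %s", selector, timeout)
}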
	I0318 12:44:51.698254 1125718 api_server.go:52] waiting for apiserver process to appear ...
	I0318 12:44:51.698314 1125718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 12:44:51.715057 1125718 api_server.go:72] duration metric: took 11.725914512s to wait for apiserver process to appear ...
	I0318 12:44:51.715080 1125718 api_server.go:88] waiting for apiserver healthz status ...
	I0318 12:44:51.715099 1125718 api_server.go:253] Checking apiserver healthz at https://192.168.39.253:8443/healthz ...
	I0318 12:44:51.722073 1125718 api_server.go:279] https://192.168.39.253:8443/healthz returned 200:
	ok
	I0318 12:44:51.722146 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/version
	I0318 12:44:51.722151 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:51.722159 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:51.722165 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:51.723736 1125718 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0318 12:44:51.723860 1125718 api_server.go:141] control plane version: v1.28.4
	I0318 12:44:51.723884 1125718 api_server.go:131] duration metric: took 8.796153ms to wait for apiserver health ...
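The healthz and version probes above go to the apiserver over plain HTTPS rather than through the typed client. A small sketch of such a probe; the TLS handling here (skipping verification) is an assumption made only to keep the example short, whereas the real check uses the cluster's CA material:

package health

import (
    "crypto/tls"
    "fmt"
    "io"
    "net/http"
    "time"
)

// APIServerHealthy issues GET <endpoint>/healthz and expects "200 ok".
func APIServerHealthy(endpoint string) error {
    client := &http.Client{
        Timeout:   5 * time.Second,
        Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // demo only
    }
    resp, err := client.Get(endpoint + "/healthz")
    if err != nil {
        return err
    }
    defer resp.Body.Close()
    body, _ := io.ReadAll(resp.Body)
    if resp.StatusCode != http.StatusOK || string(body) != "ok" {
        return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
    }
    return nil
}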
	I0318 12:44:51.723895 1125718 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 12:44:51.894339 1125718 request.go:629] Waited for 170.357624ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods
	I0318 12:44:51.894406 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods
	I0318 12:44:51.894411 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:51.894419 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:51.894424 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:51.900782 1125718 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 12:44:51.905365 1125718 system_pods.go:59] 17 kube-system pods found
	I0318 12:44:51.905395 1125718 system_pods.go:61] "coredns-5dd5756b68-c78nc" [7c1159dc-6545-41a6-bb4a-75fdab519c9e] Running
	I0318 12:44:51.905400 1125718 system_pods.go:61] "coredns-5dd5756b68-p5xgj" [9a865f86-96cf-4687-9283-d2ebe5616d1a] Running
	I0318 12:44:51.905404 1125718 system_pods.go:61] "etcd-ha-328109" [46530523-a048-4fff-897d-1a59630b5533] Running
	I0318 12:44:51.905407 1125718 system_pods.go:61] "etcd-ha-328109-m02" [0ed8ba4d-7da4-4c6c-b545-5e8642214659] Running
	I0318 12:44:51.905410 1125718 system_pods.go:61] "kindnet-lc74t" [5fe4e41e-4ddd-4e39-b1e2-746a32489418] Running
	I0318 12:44:51.905413 1125718 system_pods.go:61] "kindnet-vnv5b" [fc2583b6-a5b3-4f53-bf54-6cc7611fc2a6] Running
	I0318 12:44:51.905417 1125718 system_pods.go:61] "kube-apiserver-ha-328109" [47b1b8fb-21f6-43d7-a607-4406dfec10b7] Running
	I0318 12:44:51.905420 1125718 system_pods.go:61] "kube-apiserver-ha-328109-m02" [fcd48f5d-2278-49f3-b4f0-0cad9ae74dc7] Running
	I0318 12:44:51.905423 1125718 system_pods.go:61] "kube-controller-manager-ha-328109" [ffef70fe-841f-41c7-a61b-bb205ce2c071] Running
	I0318 12:44:51.905426 1125718 system_pods.go:61] "kube-controller-manager-ha-328109-m02" [a5ecf731-7599-44e9-b20d-924bde2de123] Running
	I0318 12:44:51.905429 1125718 system_pods.go:61] "kube-proxy-7zgrx" [6244fa40-af4d-480b-9256-db89d78b1d74] Running
	I0318 12:44:51.905432 1125718 system_pods.go:61] "kube-proxy-dhz88" [afb0afad-2b88-4abb-9039-aaf9c64ad920] Running
	I0318 12:44:51.905434 1125718 system_pods.go:61] "kube-scheduler-ha-328109" [a32fb0b4-2621-47dd-bb05-abb2e4cf928e] Running
	I0318 12:44:51.905437 1125718 system_pods.go:61] "kube-scheduler-ha-328109-m02" [14246dc3-5f5f-4d43-954c-5959db738742] Running
	I0318 12:44:51.905439 1125718 system_pods.go:61] "kube-vip-ha-328109" [40c45da5-33e0-454b-8f4c-eca1d1ec3362] Running
	I0318 12:44:51.905441 1125718 system_pods.go:61] "kube-vip-ha-328109-m02" [0c0dc71f-79d7-48f0-8a4a-4480521e5705] Running
	I0318 12:44:51.905444 1125718 system_pods.go:61] "storage-provisioner" [90ce7ae6-4ac4-4c14-b2df-1a182f4d8086] Running
	I0318 12:44:51.905450 1125718 system_pods.go:74] duration metric: took 181.546965ms to wait for pod list to return data ...
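The "Waited ... due to client-side throttling, not priority and fairness" lines sprinkled through this phase come from client-go's client-side rate limiter, whose defaults are roughly QPS 5 / Burst 10, so bursts of GETs queue briefly. A sketch of how a caller could raise those limits on its rest.Config (the values are illustrative, not what minikube configures):

package throttle

import (
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// NewFastClient builds a clientset with a higher client-side rate limit than
// client-go's defaults, which are what produce the throttling messages above.
func NewFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
    cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    if err != nil {
        return nil, err
    }
    cfg.QPS = 50    // illustrative values only
    cfg.Burst = 100
    return kubernetes.NewForConfig(cfg)
}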
	I0318 12:44:51.905457 1125718 default_sa.go:34] waiting for default service account to be created ...
	I0318 12:44:52.093964 1125718 request.go:629] Waited for 188.409787ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/namespaces/default/serviceaccounts
	I0318 12:44:52.094046 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/default/serviceaccounts
	I0318 12:44:52.094054 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:52.094065 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:52.094082 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:52.099899 1125718 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:44:52.100194 1125718 default_sa.go:45] found service account: "default"
	I0318 12:44:52.100223 1125718 default_sa.go:55] duration metric: took 194.758383ms for default service account to be created ...
	I0318 12:44:52.100236 1125718 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 12:44:52.293770 1125718 request.go:629] Waited for 193.416795ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods
	I0318 12:44:52.293850 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods
	I0318 12:44:52.293858 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:52.293869 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:52.293880 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:52.301716 1125718 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 12:44:52.307592 1125718 system_pods.go:86] 17 kube-system pods found
	I0318 12:44:52.307621 1125718 system_pods.go:89] "coredns-5dd5756b68-c78nc" [7c1159dc-6545-41a6-bb4a-75fdab519c9e] Running
	I0318 12:44:52.307626 1125718 system_pods.go:89] "coredns-5dd5756b68-p5xgj" [9a865f86-96cf-4687-9283-d2ebe5616d1a] Running
	I0318 12:44:52.307630 1125718 system_pods.go:89] "etcd-ha-328109" [46530523-a048-4fff-897d-1a59630b5533] Running
	I0318 12:44:52.307634 1125718 system_pods.go:89] "etcd-ha-328109-m02" [0ed8ba4d-7da4-4c6c-b545-5e8642214659] Running
	I0318 12:44:52.307638 1125718 system_pods.go:89] "kindnet-lc74t" [5fe4e41e-4ddd-4e39-b1e2-746a32489418] Running
	I0318 12:44:52.307642 1125718 system_pods.go:89] "kindnet-vnv5b" [fc2583b6-a5b3-4f53-bf54-6cc7611fc2a6] Running
	I0318 12:44:52.307646 1125718 system_pods.go:89] "kube-apiserver-ha-328109" [47b1b8fb-21f6-43d7-a607-4406dfec10b7] Running
	I0318 12:44:52.307650 1125718 system_pods.go:89] "kube-apiserver-ha-328109-m02" [fcd48f5d-2278-49f3-b4f0-0cad9ae74dc7] Running
	I0318 12:44:52.307655 1125718 system_pods.go:89] "kube-controller-manager-ha-328109" [ffef70fe-841f-41c7-a61b-bb205ce2c071] Running
	I0318 12:44:52.307659 1125718 system_pods.go:89] "kube-controller-manager-ha-328109-m02" [a5ecf731-7599-44e9-b20d-924bde2de123] Running
	I0318 12:44:52.307662 1125718 system_pods.go:89] "kube-proxy-7zgrx" [6244fa40-af4d-480b-9256-db89d78b1d74] Running
	I0318 12:44:52.307666 1125718 system_pods.go:89] "kube-proxy-dhz88" [afb0afad-2b88-4abb-9039-aaf9c64ad920] Running
	I0318 12:44:52.307673 1125718 system_pods.go:89] "kube-scheduler-ha-328109" [a32fb0b4-2621-47dd-bb05-abb2e4cf928e] Running
	I0318 12:44:52.307676 1125718 system_pods.go:89] "kube-scheduler-ha-328109-m02" [14246dc3-5f5f-4d43-954c-5959db738742] Running
	I0318 12:44:52.307682 1125718 system_pods.go:89] "kube-vip-ha-328109" [40c45da5-33e0-454b-8f4c-eca1d1ec3362] Running
	I0318 12:44:52.307685 1125718 system_pods.go:89] "kube-vip-ha-328109-m02" [0c0dc71f-79d7-48f0-8a4a-4480521e5705] Running
	I0318 12:44:52.307689 1125718 system_pods.go:89] "storage-provisioner" [90ce7ae6-4ac4-4c14-b2df-1a182f4d8086] Running
	I0318 12:44:52.307696 1125718 system_pods.go:126] duration metric: took 207.453689ms to wait for k8s-apps to be running ...
	I0318 12:44:52.307706 1125718 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 12:44:52.307754 1125718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 12:44:52.324436 1125718 system_svc.go:56] duration metric: took 16.716482ms WaitForService to wait for kubelet
	I0318 12:44:52.324477 1125718 kubeadm.go:576] duration metric: took 12.335337661s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 12:44:52.324505 1125718 node_conditions.go:102] verifying NodePressure condition ...
	I0318 12:44:52.493932 1125718 request.go:629] Waited for 169.333092ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/nodes
	I0318 12:44:52.494026 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes
	I0318 12:44:52.494034 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:52.494043 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:52.494053 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:52.498708 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:52.499905 1125718 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 12:44:52.499939 1125718 node_conditions.go:123] node cpu capacity is 2
	I0318 12:44:52.499957 1125718 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 12:44:52.499963 1125718 node_conditions.go:123] node cpu capacity is 2
	I0318 12:44:52.499969 1125718 node_conditions.go:105] duration metric: took 175.457735ms to run NodePressure ...
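The NodePressure step reads each node's capacity fields (ephemeral storage and CPU, as printed above). A minimal sketch of reading those fields with client-go (the helper name is hypothetical):

package capacity

import (
    "context"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// PrintCapacities lists all nodes and prints the CPU and ephemeral-storage
// capacity values that the node_conditions check above inspects.
func PrintCapacities(ctx context.Context, cs *kubernetes.Clientset) error {
    nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    if err != nil {
        return err
    }
    for _, n := range nodes.Items {
        cpu := n.Status.Capacity[corev1.ResourceCPU]
        storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
        fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
    }
    return nil
}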
	I0318 12:44:52.499989 1125718 start.go:240] waiting for startup goroutines ...
	I0318 12:44:52.500025 1125718 start.go:254] writing updated cluster config ...
	I0318 12:44:52.502267 1125718 out.go:177] 
	I0318 12:44:52.503771 1125718 config.go:182] Loaded profile config "ha-328109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:44:52.503869 1125718 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/config.json ...
	I0318 12:44:52.505439 1125718 out.go:177] * Starting "ha-328109-m03" control-plane node in "ha-328109" cluster
	I0318 12:44:52.506742 1125718 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 12:44:52.506765 1125718 cache.go:56] Caching tarball of preloaded images
	I0318 12:44:52.506870 1125718 preload.go:173] Found /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 12:44:52.506882 1125718 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0318 12:44:52.506968 1125718 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/config.json ...
	I0318 12:44:52.507127 1125718 start.go:360] acquireMachinesLock for ha-328109-m03: {Name:mk0b1a2e71faf079d0c16c4e1393bdff17be3dfd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 12:44:52.507168 1125718 start.go:364] duration metric: took 21.296µs to acquireMachinesLock for "ha-328109-m03"
	I0318 12:44:52.507184 1125718 start.go:93] Provisioning new machine with config: &{Name:ha-328109 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-328109 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.253 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 12:44:52.507271 1125718 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0318 12:44:52.508878 1125718 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 12:44:52.508973 1125718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:44:52.509008 1125718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:44:52.525328 1125718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40131
	I0318 12:44:52.525842 1125718 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:44:52.526480 1125718 main.go:141] libmachine: Using API Version  1
	I0318 12:44:52.526510 1125718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:44:52.526929 1125718 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:44:52.527151 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetMachineName
	I0318 12:44:52.527339 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .DriverName
	I0318 12:44:52.527518 1125718 start.go:159] libmachine.API.Create for "ha-328109" (driver="kvm2")
	I0318 12:44:52.527552 1125718 client.go:168] LocalClient.Create starting
	I0318 12:44:52.527592 1125718 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem
	I0318 12:44:52.527631 1125718 main.go:141] libmachine: Decoding PEM data...
	I0318 12:44:52.527653 1125718 main.go:141] libmachine: Parsing certificate...
	I0318 12:44:52.527725 1125718 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem
	I0318 12:44:52.527753 1125718 main.go:141] libmachine: Decoding PEM data...
	I0318 12:44:52.527772 1125718 main.go:141] libmachine: Parsing certificate...
	I0318 12:44:52.527802 1125718 main.go:141] libmachine: Running pre-create checks...
	I0318 12:44:52.527817 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .PreCreateCheck
	I0318 12:44:52.528040 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetConfigRaw
	I0318 12:44:52.528442 1125718 main.go:141] libmachine: Creating machine...
	I0318 12:44:52.528459 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .Create
	I0318 12:44:52.528614 1125718 main.go:141] libmachine: (ha-328109-m03) Creating KVM machine...
	I0318 12:44:52.529834 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | found existing default KVM network
	I0318 12:44:52.529984 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | found existing private KVM network mk-ha-328109
	I0318 12:44:52.530979 1125718 main.go:141] libmachine: (ha-328109-m03) Setting up store path in /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m03 ...
	I0318 12:44:52.531009 1125718 main.go:141] libmachine: (ha-328109-m03) Building disk image from file:///home/jenkins/minikube-integration/18429-1106816/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso
	I0318 12:44:52.531077 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | I0318 12:44:52.530943 1126406 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18429-1106816/.minikube
	I0318 12:44:52.531179 1125718 main.go:141] libmachine: (ha-328109-m03) Downloading /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18429-1106816/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0318 12:44:52.803112 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | I0318 12:44:52.802962 1126406 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m03/id_rsa...
	I0318 12:44:52.948668 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | I0318 12:44:52.948527 1126406 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m03/ha-328109-m03.rawdisk...
	I0318 12:44:52.948713 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | Writing magic tar header
	I0318 12:44:52.948731 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | Writing SSH key tar header
	I0318 12:44:52.948750 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | I0318 12:44:52.948711 1126406 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m03 ...
	I0318 12:44:52.948911 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m03
	I0318 12:44:52.948940 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines
	I0318 12:44:52.948951 1125718 main.go:141] libmachine: (ha-328109-m03) Setting executable bit set on /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m03 (perms=drwx------)
	I0318 12:44:52.948961 1125718 main.go:141] libmachine: (ha-328109-m03) Setting executable bit set on /home/jenkins/minikube-integration/18429-1106816/.minikube/machines (perms=drwxr-xr-x)
	I0318 12:44:52.948971 1125718 main.go:141] libmachine: (ha-328109-m03) Setting executable bit set on /home/jenkins/minikube-integration/18429-1106816/.minikube (perms=drwxr-xr-x)
	I0318 12:44:52.948999 1125718 main.go:141] libmachine: (ha-328109-m03) Setting executable bit set on /home/jenkins/minikube-integration/18429-1106816 (perms=drwxrwxr-x)
	I0318 12:44:52.949008 1125718 main.go:141] libmachine: (ha-328109-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0318 12:44:52.949020 1125718 main.go:141] libmachine: (ha-328109-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0318 12:44:52.949040 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18429-1106816/.minikube
	I0318 12:44:52.949052 1125718 main.go:141] libmachine: (ha-328109-m03) Creating domain...
	I0318 12:44:52.949071 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18429-1106816
	I0318 12:44:52.949083 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0318 12:44:52.949091 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | Checking permissions on dir: /home/jenkins
	I0318 12:44:52.949098 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | Checking permissions on dir: /home
	I0318 12:44:52.949106 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | Skipping /home - not owner
	I0318 12:44:52.950104 1125718 main.go:141] libmachine: (ha-328109-m03) define libvirt domain using xml: 
	I0318 12:44:52.950124 1125718 main.go:141] libmachine: (ha-328109-m03) <domain type='kvm'>
	I0318 12:44:52.950135 1125718 main.go:141] libmachine: (ha-328109-m03)   <name>ha-328109-m03</name>
	I0318 12:44:52.950143 1125718 main.go:141] libmachine: (ha-328109-m03)   <memory unit='MiB'>2200</memory>
	I0318 12:44:52.950164 1125718 main.go:141] libmachine: (ha-328109-m03)   <vcpu>2</vcpu>
	I0318 12:44:52.950179 1125718 main.go:141] libmachine: (ha-328109-m03)   <features>
	I0318 12:44:52.950185 1125718 main.go:141] libmachine: (ha-328109-m03)     <acpi/>
	I0318 12:44:52.950190 1125718 main.go:141] libmachine: (ha-328109-m03)     <apic/>
	I0318 12:44:52.950198 1125718 main.go:141] libmachine: (ha-328109-m03)     <pae/>
	I0318 12:44:52.950202 1125718 main.go:141] libmachine: (ha-328109-m03)     
	I0318 12:44:52.950210 1125718 main.go:141] libmachine: (ha-328109-m03)   </features>
	I0318 12:44:52.950216 1125718 main.go:141] libmachine: (ha-328109-m03)   <cpu mode='host-passthrough'>
	I0318 12:44:52.950223 1125718 main.go:141] libmachine: (ha-328109-m03)   
	I0318 12:44:52.950228 1125718 main.go:141] libmachine: (ha-328109-m03)   </cpu>
	I0318 12:44:52.950240 1125718 main.go:141] libmachine: (ha-328109-m03)   <os>
	I0318 12:44:52.950254 1125718 main.go:141] libmachine: (ha-328109-m03)     <type>hvm</type>
	I0318 12:44:52.950267 1125718 main.go:141] libmachine: (ha-328109-m03)     <boot dev='cdrom'/>
	I0318 12:44:52.950277 1125718 main.go:141] libmachine: (ha-328109-m03)     <boot dev='hd'/>
	I0318 12:44:52.950287 1125718 main.go:141] libmachine: (ha-328109-m03)     <bootmenu enable='no'/>
	I0318 12:44:52.950302 1125718 main.go:141] libmachine: (ha-328109-m03)   </os>
	I0318 12:44:52.950320 1125718 main.go:141] libmachine: (ha-328109-m03)   <devices>
	I0318 12:44:52.950335 1125718 main.go:141] libmachine: (ha-328109-m03)     <disk type='file' device='cdrom'>
	I0318 12:44:52.950361 1125718 main.go:141] libmachine: (ha-328109-m03)       <source file='/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m03/boot2docker.iso'/>
	I0318 12:44:52.950374 1125718 main.go:141] libmachine: (ha-328109-m03)       <target dev='hdc' bus='scsi'/>
	I0318 12:44:52.950385 1125718 main.go:141] libmachine: (ha-328109-m03)       <readonly/>
	I0318 12:44:52.950394 1125718 main.go:141] libmachine: (ha-328109-m03)     </disk>
	I0318 12:44:52.950404 1125718 main.go:141] libmachine: (ha-328109-m03)     <disk type='file' device='disk'>
	I0318 12:44:52.950421 1125718 main.go:141] libmachine: (ha-328109-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0318 12:44:52.950438 1125718 main.go:141] libmachine: (ha-328109-m03)       <source file='/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m03/ha-328109-m03.rawdisk'/>
	I0318 12:44:52.950450 1125718 main.go:141] libmachine: (ha-328109-m03)       <target dev='hda' bus='virtio'/>
	I0318 12:44:52.950459 1125718 main.go:141] libmachine: (ha-328109-m03)     </disk>
	I0318 12:44:52.950470 1125718 main.go:141] libmachine: (ha-328109-m03)     <interface type='network'>
	I0318 12:44:52.950480 1125718 main.go:141] libmachine: (ha-328109-m03)       <source network='mk-ha-328109'/>
	I0318 12:44:52.950490 1125718 main.go:141] libmachine: (ha-328109-m03)       <model type='virtio'/>
	I0318 12:44:52.950499 1125718 main.go:141] libmachine: (ha-328109-m03)     </interface>
	I0318 12:44:52.950511 1125718 main.go:141] libmachine: (ha-328109-m03)     <interface type='network'>
	I0318 12:44:52.950522 1125718 main.go:141] libmachine: (ha-328109-m03)       <source network='default'/>
	I0318 12:44:52.950537 1125718 main.go:141] libmachine: (ha-328109-m03)       <model type='virtio'/>
	I0318 12:44:52.950549 1125718 main.go:141] libmachine: (ha-328109-m03)     </interface>
	I0318 12:44:52.950560 1125718 main.go:141] libmachine: (ha-328109-m03)     <serial type='pty'>
	I0318 12:44:52.950571 1125718 main.go:141] libmachine: (ha-328109-m03)       <target port='0'/>
	I0318 12:44:52.950581 1125718 main.go:141] libmachine: (ha-328109-m03)     </serial>
	I0318 12:44:52.950593 1125718 main.go:141] libmachine: (ha-328109-m03)     <console type='pty'>
	I0318 12:44:52.950604 1125718 main.go:141] libmachine: (ha-328109-m03)       <target type='serial' port='0'/>
	I0318 12:44:52.950614 1125718 main.go:141] libmachine: (ha-328109-m03)     </console>
	I0318 12:44:52.950627 1125718 main.go:141] libmachine: (ha-328109-m03)     <rng model='virtio'>
	I0318 12:44:52.950640 1125718 main.go:141] libmachine: (ha-328109-m03)       <backend model='random'>/dev/random</backend>
	I0318 12:44:52.950649 1125718 main.go:141] libmachine: (ha-328109-m03)     </rng>
	I0318 12:44:52.950659 1125718 main.go:141] libmachine: (ha-328109-m03)     
	I0318 12:44:52.950667 1125718 main.go:141] libmachine: (ha-328109-m03)     
	I0318 12:44:52.950677 1125718 main.go:141] libmachine: (ha-328109-m03)   </devices>
	I0318 12:44:52.950688 1125718 main.go:141] libmachine: (ha-328109-m03) </domain>
	I0318 12:44:52.950700 1125718 main.go:141] libmachine: (ha-328109-m03) 
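The XML printed above is the libvirt domain definition for the new m03 VM. A hedged sketch of defining and booting such a domain with the Go libvirt bindings (assuming the libvirt.org/go/libvirt package; this is not the kvm2 driver's actual code):

package vm

import (
    "libvirt.org/go/libvirt"
)

// DefineAndStart registers the domain XML with libvirtd and boots it, the same
// two steps logged above as "define libvirt domain using xml" and "Creating domain...".
func DefineAndStart(domainXML string) error {
    conn, err := libvirt.NewConnect("qemu:///system")
    if err != nil {
        return err
    }
    defer conn.Close()

    dom, err := conn.DomainDefineXML(domainXML)
    if err != nil {
        return err
    }
    defer dom.Free()

    return dom.Create() // starts (boots) the defined domain
}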
	I0318 12:44:52.957471 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:93:6f:6c in network default
	I0318 12:44:52.958076 1125718 main.go:141] libmachine: (ha-328109-m03) Ensuring networks are active...
	I0318 12:44:52.958099 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:44:52.958820 1125718 main.go:141] libmachine: (ha-328109-m03) Ensuring network default is active
	I0318 12:44:52.959163 1125718 main.go:141] libmachine: (ha-328109-m03) Ensuring network mk-ha-328109 is active
	I0318 12:44:52.959518 1125718 main.go:141] libmachine: (ha-328109-m03) Getting domain xml...
	I0318 12:44:52.960231 1125718 main.go:141] libmachine: (ha-328109-m03) Creating domain...
	I0318 12:44:54.207064 1125718 main.go:141] libmachine: (ha-328109-m03) Waiting to get IP...
	I0318 12:44:54.208551 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:44:54.209498 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | unable to find current IP address of domain ha-328109-m03 in network mk-ha-328109
	I0318 12:44:54.209537 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | I0318 12:44:54.209471 1126406 retry.go:31] will retry after 246.112418ms: waiting for machine to come up
	I0318 12:44:54.457148 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:44:54.457868 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | unable to find current IP address of domain ha-328109-m03 in network mk-ha-328109
	I0318 12:44:54.457935 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | I0318 12:44:54.457750 1126406 retry.go:31] will retry after 279.428831ms: waiting for machine to come up
	I0318 12:44:54.739458 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:44:54.739925 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | unable to find current IP address of domain ha-328109-m03 in network mk-ha-328109
	I0318 12:44:54.739957 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | I0318 12:44:54.739895 1126406 retry.go:31] will retry after 436.062724ms: waiting for machine to come up
	I0318 12:44:55.177575 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:44:55.178132 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | unable to find current IP address of domain ha-328109-m03 in network mk-ha-328109
	I0318 12:44:55.178163 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | I0318 12:44:55.178078 1126406 retry.go:31] will retry after 490.275413ms: waiting for machine to come up
	I0318 12:44:55.669861 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:44:55.670424 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | unable to find current IP address of domain ha-328109-m03 in network mk-ha-328109
	I0318 12:44:55.670460 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | I0318 12:44:55.670369 1126406 retry.go:31] will retry after 633.010114ms: waiting for machine to come up
	I0318 12:44:56.304966 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:44:56.305467 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | unable to find current IP address of domain ha-328109-m03 in network mk-ha-328109
	I0318 12:44:56.305492 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | I0318 12:44:56.305431 1126406 retry.go:31] will retry after 889.156096ms: waiting for machine to come up
	I0318 12:44:57.196816 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:44:57.197381 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | unable to find current IP address of domain ha-328109-m03 in network mk-ha-328109
	I0318 12:44:57.197415 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | I0318 12:44:57.197350 1126406 retry.go:31] will retry after 1.013553214s: waiting for machine to come up
	I0318 12:44:58.212914 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:44:58.213383 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | unable to find current IP address of domain ha-328109-m03 in network mk-ha-328109
	I0318 12:44:58.213413 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | I0318 12:44:58.213336 1126406 retry.go:31] will retry after 1.302275369s: waiting for machine to come up
	I0318 12:44:59.517671 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:44:59.518056 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | unable to find current IP address of domain ha-328109-m03 in network mk-ha-328109
	I0318 12:44:59.518089 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | I0318 12:44:59.518002 1126406 retry.go:31] will retry after 1.691239088s: waiting for machine to come up
	I0318 12:45:01.211342 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:01.211830 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | unable to find current IP address of domain ha-328109-m03 in network mk-ha-328109
	I0318 12:45:01.211855 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | I0318 12:45:01.211795 1126406 retry.go:31] will retry after 1.472197751s: waiting for machine to come up
	I0318 12:45:02.686158 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:02.686681 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | unable to find current IP address of domain ha-328109-m03 in network mk-ha-328109
	I0318 12:45:02.686712 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | I0318 12:45:02.686653 1126406 retry.go:31] will retry after 2.792712555s: waiting for machine to come up
	I0318 12:45:05.481952 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:05.482411 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | unable to find current IP address of domain ha-328109-m03 in network mk-ha-328109
	I0318 12:45:05.482466 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | I0318 12:45:05.482381 1126406 retry.go:31] will retry after 3.275189677s: waiting for machine to come up
	I0318 12:45:08.758986 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:08.759372 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | unable to find current IP address of domain ha-328109-m03 in network mk-ha-328109
	I0318 12:45:08.759404 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | I0318 12:45:08.759316 1126406 retry.go:31] will retry after 4.535450098s: waiting for machine to come up
	I0318 12:45:13.296855 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:13.297384 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | unable to find current IP address of domain ha-328109-m03 in network mk-ha-328109
	I0318 12:45:13.297410 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | I0318 12:45:13.297328 1126406 retry.go:31] will retry after 3.801826868s: waiting for machine to come up
	I0318 12:45:17.101660 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:17.102181 1125718 main.go:141] libmachine: (ha-328109-m03) Found IP for machine: 192.168.39.241
	I0318 12:45:17.102212 1125718 main.go:141] libmachine: (ha-328109-m03) Reserving static IP address...
	I0318 12:45:17.102227 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has current primary IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:17.102652 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | unable to find host DHCP lease matching {name: "ha-328109-m03", mac: "52:54:00:13:6e:ac", ip: "192.168.39.241"} in network mk-ha-328109
	I0318 12:45:17.177177 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | Getting to WaitForSSH function...
	I0318 12:45:17.177210 1125718 main.go:141] libmachine: (ha-328109-m03) Reserved static IP address: 192.168.39.241
	I0318 12:45:17.177225 1125718 main.go:141] libmachine: (ha-328109-m03) Waiting for SSH to be available...
	I0318 12:45:17.180030 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:17.180526 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:minikube Clientid:01:52:54:00:13:6e:ac}
	I0318 12:45:17.180567 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:17.180681 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | Using SSH client type: external
	I0318 12:45:17.180719 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m03/id_rsa (-rw-------)
	I0318 12:45:17.180767 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.241 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 12:45:17.180791 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | About to run SSH command:
	I0318 12:45:17.180840 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | exit 0
	I0318 12:45:17.308873 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | SSH cmd err, output: <nil>: 
	I0318 12:45:17.309166 1125718 main.go:141] libmachine: (ha-328109-m03) KVM machine creation complete!
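	The "will retry after ..." lines above come from minikube's retry helper polling the libvirt network until the new domain's MAC shows up in a DHCP lease. As a rough illustration only (not minikube's actual retry.go code, and lookupLeaseIP is a placeholder rather than the driver API), a minimal Go sketch of that poll-with-growing-backoff pattern:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupLeaseIP stands in for querying the libvirt network for a DHCP lease
	// matching the domain's MAC address; it is a placeholder, not the driver API.
	func lookupLeaseIP(mac string) (string, error) {
		return "", errors.New("no lease yet") // pretend the lease has not appeared
	}

	// waitForIP polls until an IP shows up, sleeping a little longer each round,
	// mirroring the "will retry after ..." messages in the log above.
	func waitForIP(mac string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		backoff := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupLeaseIP(mac); err == nil {
				return ip, nil
			}
			// Add jitter and grow the delay, capped so polling stays reasonably frequent.
			sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			if backoff < 5*time.Second {
				backoff = backoff * 3 / 2
			}
		}
		return "", fmt.Errorf("timed out waiting for IP on MAC %s", mac)
	}

	func main() {
		if ip, err := waitForIP("52:54:00:13:6e:ac", 2*time.Second); err != nil {
			fmt.Println(err)
		} else {
			fmt.Println("found IP:", ip)
		}
	}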
	I0318 12:45:17.309498 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetConfigRaw
	I0318 12:45:17.310106 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .DriverName
	I0318 12:45:17.310336 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .DriverName
	I0318 12:45:17.310540 1125718 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0318 12:45:17.310554 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetState
	I0318 12:45:17.311931 1125718 main.go:141] libmachine: Detecting operating system of created instance...
	I0318 12:45:17.311946 1125718 main.go:141] libmachine: Waiting for SSH to be available...
	I0318 12:45:17.311951 1125718 main.go:141] libmachine: Getting to WaitForSSH function...
	I0318 12:45:17.311957 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHHostname
	I0318 12:45:17.314381 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:17.314805 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-328109-m03 Clientid:01:52:54:00:13:6e:ac}
	I0318 12:45:17.314845 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:17.315020 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHPort
	I0318 12:45:17.315191 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHKeyPath
	I0318 12:45:17.315352 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHKeyPath
	I0318 12:45:17.315515 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHUsername
	I0318 12:45:17.315726 1125718 main.go:141] libmachine: Using SSH client type: native
	I0318 12:45:17.315998 1125718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0318 12:45:17.316011 1125718 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0318 12:45:17.423835 1125718 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 12:45:17.423881 1125718 main.go:141] libmachine: Detecting the provisioner...
	I0318 12:45:17.423892 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHHostname
	I0318 12:45:17.427406 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:17.427915 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-328109-m03 Clientid:01:52:54:00:13:6e:ac}
	I0318 12:45:17.427949 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:17.428139 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHPort
	I0318 12:45:17.428410 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHKeyPath
	I0318 12:45:17.428605 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHKeyPath
	I0318 12:45:17.428778 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHUsername
	I0318 12:45:17.429011 1125718 main.go:141] libmachine: Using SSH client type: native
	I0318 12:45:17.429256 1125718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0318 12:45:17.429276 1125718 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0318 12:45:17.541758 1125718 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0318 12:45:17.541850 1125718 main.go:141] libmachine: found compatible host: buildroot
	I0318 12:45:17.541864 1125718 main.go:141] libmachine: Provisioning with buildroot...
	I0318 12:45:17.541875 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetMachineName
	I0318 12:45:17.542163 1125718 buildroot.go:166] provisioning hostname "ha-328109-m03"
	I0318 12:45:17.542193 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetMachineName
	I0318 12:45:17.542411 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHHostname
	I0318 12:45:17.545194 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:17.545676 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-328109-m03 Clientid:01:52:54:00:13:6e:ac}
	I0318 12:45:17.545702 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:17.545843 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHPort
	I0318 12:45:17.546009 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHKeyPath
	I0318 12:45:17.546212 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHKeyPath
	I0318 12:45:17.546398 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHUsername
	I0318 12:45:17.546645 1125718 main.go:141] libmachine: Using SSH client type: native
	I0318 12:45:17.546862 1125718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0318 12:45:17.546880 1125718 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-328109-m03 && echo "ha-328109-m03" | sudo tee /etc/hostname
	I0318 12:45:17.672890 1125718 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-328109-m03
	
	I0318 12:45:17.672925 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHHostname
	I0318 12:45:17.675623 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:17.676056 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-328109-m03 Clientid:01:52:54:00:13:6e:ac}
	I0318 12:45:17.676081 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:17.676336 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHPort
	I0318 12:45:17.676540 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHKeyPath
	I0318 12:45:17.676738 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHKeyPath
	I0318 12:45:17.676879 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHUsername
	I0318 12:45:17.677040 1125718 main.go:141] libmachine: Using SSH client type: native
	I0318 12:45:17.677242 1125718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0318 12:45:17.677260 1125718 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-328109-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-328109-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-328109-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 12:45:17.801256 1125718 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 12:45:17.801294 1125718 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18429-1106816/.minikube CaCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18429-1106816/.minikube}
	I0318 12:45:17.801317 1125718 buildroot.go:174] setting up certificates
	I0318 12:45:17.801332 1125718 provision.go:84] configureAuth start
	I0318 12:45:17.801344 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetMachineName
	I0318 12:45:17.801667 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetIP
	I0318 12:45:17.804353 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:17.804704 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-328109-m03 Clientid:01:52:54:00:13:6e:ac}
	I0318 12:45:17.804738 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:17.804921 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHHostname
	I0318 12:45:17.807223 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:17.807552 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-328109-m03 Clientid:01:52:54:00:13:6e:ac}
	I0318 12:45:17.807582 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:17.807692 1125718 provision.go:143] copyHostCerts
	I0318 12:45:17.807730 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 12:45:17.807775 1125718 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem, removing ...
	I0318 12:45:17.807799 1125718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 12:45:17.807894 1125718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem (1078 bytes)
	I0318 12:45:17.808000 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 12:45:17.808026 1125718 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem, removing ...
	I0318 12:45:17.808033 1125718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 12:45:17.808077 1125718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem (1123 bytes)
	I0318 12:45:17.808158 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 12:45:17.808182 1125718 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem, removing ...
	I0318 12:45:17.808188 1125718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 12:45:17.808225 1125718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem (1679 bytes)
	I0318 12:45:17.808313 1125718 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem org=jenkins.ha-328109-m03 san=[127.0.0.1 192.168.39.241 ha-328109-m03 localhost minikube]
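	The line above records minikube generating a server certificate whose SANs cover the node's IPs and host names. A minimal self-signed sketch of building that SAN list with crypto/x509 (the real flow signs with the profile's CA key rather than self-signing; the IPs and names below are simply the ones from the log line, and the validity period is an arbitrary assumption):

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-328109-m03"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour), // arbitrary validity for the sketch
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs matching the san=[...] list in the log: loopback, node IP, host names.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.241")},
			DNSNames:    []string{"ha-328109-m03", "localhost", "minikube"},
		}
		// Self-signed for brevity; minikube signs the server cert with its CA key instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
	}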
	I0318 12:45:17.968101 1125718 provision.go:177] copyRemoteCerts
	I0318 12:45:17.968179 1125718 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 12:45:17.968215 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHHostname
	I0318 12:45:17.970992 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:17.971328 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-328109-m03 Clientid:01:52:54:00:13:6e:ac}
	I0318 12:45:17.971365 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:17.971544 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHPort
	I0318 12:45:17.971748 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHKeyPath
	I0318 12:45:17.971875 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHUsername
	I0318 12:45:17.972027 1125718 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m03/id_rsa Username:docker}
	I0318 12:45:18.059601 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0318 12:45:18.059684 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 12:45:18.090751 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0318 12:45:18.090826 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0318 12:45:18.118403 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0318 12:45:18.118481 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 12:45:18.147188 1125718 provision.go:87] duration metric: took 345.837123ms to configureAuth
	I0318 12:45:18.147232 1125718 buildroot.go:189] setting minikube options for container-runtime
	I0318 12:45:18.147476 1125718 config.go:182] Loaded profile config "ha-328109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:45:18.147562 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHHostname
	I0318 12:45:18.150390 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:18.150771 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-328109-m03 Clientid:01:52:54:00:13:6e:ac}
	I0318 12:45:18.150810 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:18.150989 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHPort
	I0318 12:45:18.151216 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHKeyPath
	I0318 12:45:18.151402 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHKeyPath
	I0318 12:45:18.151589 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHUsername
	I0318 12:45:18.151753 1125718 main.go:141] libmachine: Using SSH client type: native
	I0318 12:45:18.151946 1125718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0318 12:45:18.151961 1125718 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 12:45:18.457910 1125718 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
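	The literal %!s(MISSING) fragments in the command above (and in the later date +%!s(MISSING).%!N(MISSING) line) are not shell syntax: they are Go's fmt package flagging a format verb that had no matching argument when the log message was assembled, so the shell command actually executed used plain %s/%N. A two-line illustration of how that artifact arises:

	package main

	import "fmt"

	func main() {
		// One argument for two verbs: the second %s is rendered as %!s(MISSING),
		// which is exactly the artifact visible in the log lines above.
		fmt.Printf("printf %s %s\n", "only-one-arg")
	}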
	I0318 12:45:18.457945 1125718 main.go:141] libmachine: Checking connection to Docker...
	I0318 12:45:18.457956 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetURL
	I0318 12:45:18.459537 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | Using libvirt version 6000000
	I0318 12:45:18.462170 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:18.462545 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-328109-m03 Clientid:01:52:54:00:13:6e:ac}
	I0318 12:45:18.462574 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:18.462861 1125718 main.go:141] libmachine: Docker is up and running!
	I0318 12:45:18.462877 1125718 main.go:141] libmachine: Reticulating splines...
	I0318 12:45:18.462884 1125718 client.go:171] duration metric: took 25.935321178s to LocalClient.Create
	I0318 12:45:18.462909 1125718 start.go:167] duration metric: took 25.935392452s to libmachine.API.Create "ha-328109"
	I0318 12:45:18.462919 1125718 start.go:293] postStartSetup for "ha-328109-m03" (driver="kvm2")
	I0318 12:45:18.462930 1125718 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 12:45:18.462947 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .DriverName
	I0318 12:45:18.463202 1125718 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 12:45:18.463233 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHHostname
	I0318 12:45:18.465465 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:18.465803 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-328109-m03 Clientid:01:52:54:00:13:6e:ac}
	I0318 12:45:18.465829 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:18.465977 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHPort
	I0318 12:45:18.466171 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHKeyPath
	I0318 12:45:18.466322 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHUsername
	I0318 12:45:18.466492 1125718 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m03/id_rsa Username:docker}
	I0318 12:45:18.552562 1125718 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 12:45:18.557953 1125718 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 12:45:18.557984 1125718 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/addons for local assets ...
	I0318 12:45:18.558062 1125718 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/files for local assets ...
	I0318 12:45:18.558151 1125718 filesync.go:149] local asset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> 11141362.pem in /etc/ssl/certs
	I0318 12:45:18.558163 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> /etc/ssl/certs/11141362.pem
	I0318 12:45:18.558279 1125718 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 12:45:18.568566 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 12:45:18.599564 1125718 start.go:296] duration metric: took 136.628629ms for postStartSetup
	I0318 12:45:18.599636 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetConfigRaw
	I0318 12:45:18.600236 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetIP
	I0318 12:45:18.603196 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:18.603548 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-328109-m03 Clientid:01:52:54:00:13:6e:ac}
	I0318 12:45:18.603593 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:18.603857 1125718 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/config.json ...
	I0318 12:45:18.604103 1125718 start.go:128] duration metric: took 26.096819646s to createHost
	I0318 12:45:18.604129 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHHostname
	I0318 12:45:18.606491 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:18.606891 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-328109-m03 Clientid:01:52:54:00:13:6e:ac}
	I0318 12:45:18.606919 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:18.607116 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHPort
	I0318 12:45:18.607296 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHKeyPath
	I0318 12:45:18.607508 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHKeyPath
	I0318 12:45:18.607684 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHUsername
	I0318 12:45:18.607898 1125718 main.go:141] libmachine: Using SSH client type: native
	I0318 12:45:18.608081 1125718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0318 12:45:18.608095 1125718 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 12:45:18.722293 1125718 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710765918.693436013
	
	I0318 12:45:18.722318 1125718 fix.go:216] guest clock: 1710765918.693436013
	I0318 12:45:18.722326 1125718 fix.go:229] Guest: 2024-03-18 12:45:18.693436013 +0000 UTC Remote: 2024-03-18 12:45:18.604118512 +0000 UTC m=+165.760798563 (delta=89.317501ms)
	I0318 12:45:18.722343 1125718 fix.go:200] guest clock delta is within tolerance: 89.317501ms
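	The fix.go lines above compare the guest clock (read over SSH) with the host's remote timestamp and accept the machine when the absolute delta is inside a tolerance. A small sketch of that check using the two timestamps from the log; the 2s tolerance is an assumption, since the log does not state the threshold minikube uses:

	package main

	import (
		"fmt"
		"time"
	)

	// clockDeltaOK reports whether guest and host differ by no more than tolerance,
	// regardless of which clock is ahead.
	func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		host := time.Unix(0, int64(1710765918604118512))       // "Remote" timestamp from the log
		guest := time.Unix(0, int64(1710765918693436013))      // "Guest" timestamp from the log
		delta, ok := clockDeltaOK(guest, host, 2*time.Second)  // tolerance value is an assumption
		fmt.Printf("delta=%v within tolerance: %v\n", delta, ok) // prints delta=89.317501ms, as in the log
	}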
	I0318 12:45:18.722348 1125718 start.go:83] releasing machines lock for "ha-328109-m03", held for 26.21517349s
	I0318 12:45:18.722373 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .DriverName
	I0318 12:45:18.722708 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetIP
	I0318 12:45:18.725969 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:18.726353 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-328109-m03 Clientid:01:52:54:00:13:6e:ac}
	I0318 12:45:18.726379 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:18.728674 1125718 out.go:177] * Found network options:
	I0318 12:45:18.730130 1125718 out.go:177]   - NO_PROXY=192.168.39.253,192.168.39.246
	W0318 12:45:18.731523 1125718 proxy.go:119] fail to check proxy env: Error ip not in block
	W0318 12:45:18.731550 1125718 proxy.go:119] fail to check proxy env: Error ip not in block
	I0318 12:45:18.731569 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .DriverName
	I0318 12:45:18.732113 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .DriverName
	I0318 12:45:18.732354 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .DriverName
	I0318 12:45:18.732468 1125718 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 12:45:18.732501 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHHostname
	W0318 12:45:18.732540 1125718 proxy.go:119] fail to check proxy env: Error ip not in block
	W0318 12:45:18.732564 1125718 proxy.go:119] fail to check proxy env: Error ip not in block
	I0318 12:45:18.732633 1125718 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 12:45:18.732655 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHHostname
	I0318 12:45:18.735374 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:18.735399 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:18.735796 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-328109-m03 Clientid:01:52:54:00:13:6e:ac}
	I0318 12:45:18.735826 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-328109-m03 Clientid:01:52:54:00:13:6e:ac}
	I0318 12:45:18.735847 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:18.735912 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:18.736013 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHPort
	I0318 12:45:18.736169 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHPort
	I0318 12:45:18.736245 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHKeyPath
	I0318 12:45:18.736374 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHKeyPath
	I0318 12:45:18.736394 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHUsername
	I0318 12:45:18.736548 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHUsername
	I0318 12:45:18.736564 1125718 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m03/id_rsa Username:docker}
	I0318 12:45:18.736656 1125718 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m03/id_rsa Username:docker}
	I0318 12:45:18.990045 1125718 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 12:45:18.997609 1125718 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 12:45:18.997696 1125718 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 12:45:19.016284 1125718 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 12:45:19.016317 1125718 start.go:494] detecting cgroup driver to use...
	I0318 12:45:19.016414 1125718 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 12:45:19.036959 1125718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 12:45:19.052702 1125718 docker.go:217] disabling cri-docker service (if available) ...
	I0318 12:45:19.052763 1125718 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 12:45:19.068812 1125718 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 12:45:19.083885 1125718 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 12:45:19.219762 1125718 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 12:45:19.375140 1125718 docker.go:233] disabling docker service ...
	I0318 12:45:19.375218 1125718 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 12:45:19.391700 1125718 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 12:45:19.408089 1125718 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 12:45:19.568781 1125718 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 12:45:19.698388 1125718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 12:45:19.715205 1125718 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 12:45:19.737848 1125718 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 12:45:19.737915 1125718 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 12:45:19.751205 1125718 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 12:45:19.751291 1125718 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 12:45:19.764038 1125718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 12:45:19.776823 1125718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 12:45:19.789620 1125718 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 12:45:19.802402 1125718 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 12:45:19.814327 1125718 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 12:45:19.814391 1125718 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 12:45:19.830755 1125718 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 12:45:19.842732 1125718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:45:19.990158 1125718 ssh_runner.go:195] Run: sudo systemctl restart crio
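	The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed to pin the pause image and switch the cgroup manager before restarting CRI-O. A rough Go equivalent of those two substitutions, operating on an in-memory string (patterns and replacement values are taken from the sed commands in the log; this is an illustration, not minikube's implementation):

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		conf := "pause_image = \"registry.k8s.io/pause:3.7\"\ncgroup_manager = \"systemd\"\n"
		// Same intent as the sed commands in the log: replace whatever values are
		// currently present with the ones minikube wants for this cluster.
		pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
		conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
		conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		fmt.Print(conf)
	}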
	I0318 12:45:20.152548 1125718 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 12:45:20.152643 1125718 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 12:45:20.158364 1125718 start.go:562] Will wait 60s for crictl version
	I0318 12:45:20.158447 1125718 ssh_runner.go:195] Run: which crictl
	I0318 12:45:20.163229 1125718 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 12:45:20.206997 1125718 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 12:45:20.207092 1125718 ssh_runner.go:195] Run: crio --version
	I0318 12:45:20.237899 1125718 ssh_runner.go:195] Run: crio --version
	I0318 12:45:20.272643 1125718 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 12:45:20.273996 1125718 out.go:177]   - env NO_PROXY=192.168.39.253
	I0318 12:45:20.275201 1125718 out.go:177]   - env NO_PROXY=192.168.39.253,192.168.39.246
	I0318 12:45:20.276497 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetIP
	I0318 12:45:20.279284 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:20.279647 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-328109-m03 Clientid:01:52:54:00:13:6e:ac}
	I0318 12:45:20.279682 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:20.279940 1125718 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0318 12:45:20.285192 1125718 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 12:45:20.300521 1125718 mustload.go:65] Loading cluster: ha-328109
	I0318 12:45:20.300759 1125718 config.go:182] Loaded profile config "ha-328109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:45:20.301216 1125718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:45:20.301264 1125718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:45:20.317712 1125718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39767
	I0318 12:45:20.318296 1125718 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:45:20.318799 1125718 main.go:141] libmachine: Using API Version  1
	I0318 12:45:20.318825 1125718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:45:20.319152 1125718 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:45:20.319346 1125718 main.go:141] libmachine: (ha-328109) Calling .GetState
	I0318 12:45:20.320937 1125718 host.go:66] Checking if "ha-328109" exists ...
	I0318 12:45:20.321271 1125718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:45:20.321318 1125718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:45:20.335906 1125718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34807
	I0318 12:45:20.336389 1125718 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:45:20.336872 1125718 main.go:141] libmachine: Using API Version  1
	I0318 12:45:20.336893 1125718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:45:20.337221 1125718 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:45:20.337425 1125718 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:45:20.337587 1125718 certs.go:68] Setting up /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109 for IP: 192.168.39.241
	I0318 12:45:20.337600 1125718 certs.go:194] generating shared ca certs ...
	I0318 12:45:20.337616 1125718 certs.go:226] acquiring lock for ca certs: {Name:mka24dbce78b8549c3bcfe7842863cce2dcda207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:45:20.337745 1125718 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key
	I0318 12:45:20.337792 1125718 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key
	I0318 12:45:20.337801 1125718 certs.go:256] generating profile certs ...
	I0318 12:45:20.337915 1125718 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/client.key
	I0318 12:45:20.337951 1125718 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key.1e9447cf
	I0318 12:45:20.337968 1125718 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt.1e9447cf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.253 192.168.39.246 192.168.39.241 192.168.39.254]
	I0318 12:45:20.529819 1125718 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt.1e9447cf ...
	I0318 12:45:20.529854 1125718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt.1e9447cf: {Name:mk0c3c37f6163a623e76fa06f4a7e365e62d341b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:45:20.530058 1125718 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key.1e9447cf ...
	I0318 12:45:20.530078 1125718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key.1e9447cf: {Name:mk6476b5a8deedc75938b726c0d94d4f542498da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:45:20.530178 1125718 certs.go:381] copying /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt.1e9447cf -> /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt
	I0318 12:45:20.530328 1125718 certs.go:385] copying /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key.1e9447cf -> /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key
	I0318 12:45:20.530512 1125718 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/proxy-client.key
	I0318 12:45:20.530533 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 12:45:20.530555 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0318 12:45:20.530573 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 12:45:20.530590 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 12:45:20.530607 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0318 12:45:20.530622 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0318 12:45:20.530639 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0318 12:45:20.530656 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0318 12:45:20.530720 1125718 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem (1338 bytes)
	W0318 12:45:20.530760 1125718 certs.go:480] ignoring /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136_empty.pem, impossibly tiny 0 bytes
	I0318 12:45:20.530774 1125718 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 12:45:20.530809 1125718 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem (1078 bytes)
	I0318 12:45:20.530838 1125718 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem (1123 bytes)
	I0318 12:45:20.530866 1125718 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem (1679 bytes)
	I0318 12:45:20.530919 1125718 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 12:45:20.530954 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem -> /usr/share/ca-certificates/1114136.pem
	I0318 12:45:20.530976 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> /usr/share/ca-certificates/11141362.pem
	I0318 12:45:20.530994 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:45:20.531037 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:45:20.534286 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:45:20.534750 1125718 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:45:20.534777 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:45:20.534944 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:45:20.535168 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:45:20.535341 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:45:20.535486 1125718 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109/id_rsa Username:docker}
	I0318 12:45:20.612719 1125718 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0318 12:45:20.619208 1125718 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0318 12:45:20.635267 1125718 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0318 12:45:20.640381 1125718 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0318 12:45:20.655052 1125718 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0318 12:45:20.659704 1125718 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0318 12:45:20.672182 1125718 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0318 12:45:20.676767 1125718 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0318 12:45:20.689124 1125718 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0318 12:45:20.693592 1125718 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0318 12:45:20.705537 1125718 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0318 12:45:20.710335 1125718 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0318 12:45:20.723667 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 12:45:20.752170 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 12:45:20.779440 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 12:45:20.807077 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 12:45:20.835454 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0318 12:45:20.865041 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 12:45:20.894868 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 12:45:20.921846 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 12:45:20.949792 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem --> /usr/share/ca-certificates/1114136.pem (1338 bytes)
	I0318 12:45:20.976855 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /usr/share/ca-certificates/11141362.pem (1708 bytes)
	I0318 12:45:21.004675 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 12:45:21.031367 1125718 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0318 12:45:21.050437 1125718 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0318 12:45:21.069848 1125718 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0318 12:45:21.089292 1125718 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0318 12:45:21.108785 1125718 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0318 12:45:21.129862 1125718 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0318 12:45:21.150726 1125718 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0318 12:45:21.171268 1125718 ssh_runner.go:195] Run: openssl version
	I0318 12:45:21.177884 1125718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11141362.pem && ln -fs /usr/share/ca-certificates/11141362.pem /etc/ssl/certs/11141362.pem"
	I0318 12:45:21.190013 1125718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11141362.pem
	I0318 12:45:21.195104 1125718 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:26 /usr/share/ca-certificates/11141362.pem
	I0318 12:45:21.195164 1125718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11141362.pem
	I0318 12:45:21.201374 1125718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11141362.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 12:45:21.214520 1125718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 12:45:21.227156 1125718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:45:21.232263 1125718 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:45:21.232344 1125718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:45:21.238733 1125718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 12:45:21.253325 1125718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1114136.pem && ln -fs /usr/share/ca-certificates/1114136.pem /etc/ssl/certs/1114136.pem"
	I0318 12:45:21.266067 1125718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1114136.pem
	I0318 12:45:21.270989 1125718 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:26 /usr/share/ca-certificates/1114136.pem
	I0318 12:45:21.271054 1125718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1114136.pem
	I0318 12:45:21.277455 1125718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1114136.pem /etc/ssl/certs/51391683.0"
	I0318 12:45:21.290385 1125718 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 12:45:21.295157 1125718 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 12:45:21.295210 1125718 kubeadm.go:928] updating node {m03 192.168.39.241 8443 v1.28.4 crio true true} ...
	I0318 12:45:21.295303 1125718 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-328109-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.241
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-328109 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 12:45:21.295347 1125718 kube-vip.go:111] generating kube-vip config ...
	I0318 12:45:21.295406 1125718 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0318 12:45:21.314331 1125718 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0318 12:45:21.314409 1125718 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0318 12:45:21.314468 1125718 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 12:45:21.326579 1125718 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0318 12:45:21.326640 1125718 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0318 12:45:21.338387 1125718 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0318 12:45:21.338419 1125718 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256
	I0318 12:45:21.338431 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0318 12:45:21.338443 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0318 12:45:21.338387 1125718 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256
	I0318 12:45:21.338515 1125718 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0318 12:45:21.338525 1125718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 12:45:21.338517 1125718 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0318 12:45:21.349806 1125718 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0318 12:45:21.349837 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0318 12:45:21.366555 1125718 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0318 12:45:21.366598 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0318 12:45:21.374524 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0318 12:45:21.374679 1125718 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0318 12:45:21.444178 1125718 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0318 12:45:21.444229 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
	I0318 12:45:22.371248 1125718 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0318 12:45:22.383173 1125718 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0318 12:45:22.402507 1125718 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 12:45:22.425078 1125718 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0318 12:45:22.445650 1125718 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0318 12:45:22.450703 1125718 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 12:45:22.467786 1125718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:45:22.614349 1125718 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 12:45:22.638106 1125718 host.go:66] Checking if "ha-328109" exists ...
	I0318 12:45:22.638499 1125718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:45:22.638546 1125718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:45:22.657862 1125718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35309
	I0318 12:45:22.658327 1125718 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:45:22.658989 1125718 main.go:141] libmachine: Using API Version  1
	I0318 12:45:22.659017 1125718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:45:22.659440 1125718 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:45:22.659667 1125718 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:45:22.659850 1125718 start.go:316] joinCluster: &{Name:ha-328109 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-328109 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.253 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 12:45:22.660004 1125718 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0318 12:45:22.660033 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:45:22.663173 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:45:22.663690 1125718 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:45:22.663720 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:45:22.663838 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:45:22.663984 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:45:22.664180 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:45:22.664390 1125718 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109/id_rsa Username:docker}
	I0318 12:45:22.838804 1125718 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 12:45:22.838876 1125718 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gxp8r6.mdkqjq2zkbxrcymg --discovery-token-ca-cert-hash sha256:a1344f0f3711f58fc1cd7b9626e9cee7b8515c38b8838e5f1a0d7979c7202ddf --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-328109-m03 --control-plane --apiserver-advertise-address=192.168.39.241 --apiserver-bind-port=8443"
	I0318 12:45:50.731898 1125718 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gxp8r6.mdkqjq2zkbxrcymg --discovery-token-ca-cert-hash sha256:a1344f0f3711f58fc1cd7b9626e9cee7b8515c38b8838e5f1a0d7979c7202ddf --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-328109-m03 --control-plane --apiserver-advertise-address=192.168.39.241 --apiserver-bind-port=8443": (27.892978911s)
	I0318 12:45:50.731948 1125718 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0318 12:45:51.211696 1125718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-328109-m03 minikube.k8s.io/updated_at=2024_03_18T12_45_51_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a minikube.k8s.io/name=ha-328109 minikube.k8s.io/primary=false
	I0318 12:45:51.347766 1125718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-328109-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0318 12:45:51.629097 1125718 start.go:318] duration metric: took 28.9692463s to joinCluster
	I0318 12:45:51.629188 1125718 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 12:45:51.630896 1125718 out.go:177] * Verifying Kubernetes components...
	I0318 12:45:51.629591 1125718 config.go:182] Loaded profile config "ha-328109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:45:51.632402 1125718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:45:51.867512 1125718 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 12:45:51.892575 1125718 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 12:45:51.892862 1125718 kapi.go:59] client config for ha-328109: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/client.crt", KeyFile:"/home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/client.key", CAFile:"/home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c57de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0318 12:45:51.892946 1125718 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.253:8443
	I0318 12:45:51.893360 1125718 node_ready.go:35] waiting up to 6m0s for node "ha-328109-m03" to be "Ready" ...
	I0318 12:45:51.893468 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:45:51.893480 1125718 round_trippers.go:469] Request Headers:
	I0318 12:45:51.893491 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:45:51.893501 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:45:51.898804 1125718 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:45:52.393567 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:45:52.393593 1125718 round_trippers.go:469] Request Headers:
	I0318 12:45:52.393603 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:45:52.393610 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:45:52.401730 1125718 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0318 12:45:52.894375 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:45:52.896002 1125718 round_trippers.go:469] Request Headers:
	I0318 12:45:52.896018 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:45:52.896025 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:45:52.900164 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:45:53.393753 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:45:53.393780 1125718 round_trippers.go:469] Request Headers:
	I0318 12:45:53.393792 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:45:53.393797 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:45:53.398510 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:45:53.893965 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:45:53.893987 1125718 round_trippers.go:469] Request Headers:
	I0318 12:45:53.893994 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:45:53.893998 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:45:53.898790 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:45:53.899477 1125718 node_ready.go:53] node "ha-328109-m03" has status "Ready":"False"
	I0318 12:45:54.393821 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:45:54.393847 1125718 round_trippers.go:469] Request Headers:
	I0318 12:45:54.393859 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:45:54.393863 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:45:54.398316 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:45:54.894017 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:45:54.894048 1125718 round_trippers.go:469] Request Headers:
	I0318 12:45:54.894060 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:45:54.894077 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:45:54.899430 1125718 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:45:55.394451 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:45:55.394483 1125718 round_trippers.go:469] Request Headers:
	I0318 12:45:55.394496 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:45:55.394503 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:45:55.398830 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:45:55.893821 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:45:55.893848 1125718 round_trippers.go:469] Request Headers:
	I0318 12:45:55.893857 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:45:55.893862 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:45:55.909745 1125718 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0318 12:45:55.911477 1125718 node_ready.go:53] node "ha-328109-m03" has status "Ready":"False"
	I0318 12:45:56.393669 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:45:56.393693 1125718 round_trippers.go:469] Request Headers:
	I0318 12:45:56.393704 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:45:56.393709 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:45:56.398173 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:45:56.894569 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:45:56.894591 1125718 round_trippers.go:469] Request Headers:
	I0318 12:45:56.894599 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:45:56.894602 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:45:56.900110 1125718 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:45:57.394317 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:45:57.394342 1125718 round_trippers.go:469] Request Headers:
	I0318 12:45:57.394351 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:45:57.394359 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:45:57.397886 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:45:57.894606 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:45:57.895060 1125718 round_trippers.go:469] Request Headers:
	I0318 12:45:57.895074 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:45:57.895079 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:45:57.898995 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:45:58.393654 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:45:58.393683 1125718 round_trippers.go:469] Request Headers:
	I0318 12:45:58.393696 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:45:58.393703 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:45:58.397159 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:45:58.397868 1125718 node_ready.go:53] node "ha-328109-m03" has status "Ready":"False"
	I0318 12:45:58.893883 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:45:58.893910 1125718 round_trippers.go:469] Request Headers:
	I0318 12:45:58.893922 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:45:58.893928 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:45:58.902293 1125718 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0318 12:45:59.394041 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:45:59.394062 1125718 round_trippers.go:469] Request Headers:
	I0318 12:45:59.394068 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:45:59.394071 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:45:59.398132 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:45:59.893979 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:45:59.894003 1125718 round_trippers.go:469] Request Headers:
	I0318 12:45:59.894014 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:45:59.894021 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:45:59.897460 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:00.394115 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:00.394138 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:00.394147 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:00.394151 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:00.398083 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:00.399038 1125718 node_ready.go:53] node "ha-328109-m03" has status "Ready":"False"
	I0318 12:46:00.894443 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:00.894467 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:00.894475 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:00.894479 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:00.899326 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:01.393631 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:01.393654 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:01.393663 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:01.393667 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:01.398456 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:01.893764 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:01.893787 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:01.893795 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:01.893799 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:01.897615 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:02.393625 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:02.393659 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:02.393671 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:02.393677 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:02.397586 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:02.893876 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:02.895638 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:02.895653 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:02.895658 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:02.899810 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:02.900842 1125718 node_ready.go:53] node "ha-328109-m03" has status "Ready":"False"
	I0318 12:46:03.393710 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:03.393732 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:03.393740 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:03.393746 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:03.397970 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:03.894001 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:03.894027 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:03.894035 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:03.894039 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:03.897793 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:04.393830 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:04.393853 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:04.393861 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:04.393865 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:04.398651 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:04.894253 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:04.894278 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:04.894289 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:04.894294 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:04.898997 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:05.393698 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:05.393720 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:05.393729 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:05.393733 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:05.397991 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:05.399145 1125718 node_ready.go:53] node "ha-328109-m03" has status "Ready":"False"
	I0318 12:46:05.894516 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:05.894602 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:05.894620 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:05.894628 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:05.899519 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:06.393596 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:06.393624 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:06.393632 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:06.393637 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:06.397271 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:06.894444 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:06.894481 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:06.894492 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:06.894498 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:06.897984 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:07.394041 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:07.394066 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:07.394078 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:07.394083 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:07.398311 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:07.399243 1125718 node_ready.go:53] node "ha-328109-m03" has status "Ready":"False"
	I0318 12:46:07.894009 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:07.895586 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:07.895603 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:07.895609 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:07.899914 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:08.394480 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:08.394513 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:08.394524 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:08.394530 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:08.398758 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:08.893705 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:08.893731 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:08.893739 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:08.893744 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:08.897546 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:09.393604 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:09.393628 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:09.393667 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:09.393675 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:09.397095 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:09.894001 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:09.894026 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:09.894034 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:09.894039 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:09.897285 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:09.897967 1125718 node_ready.go:53] node "ha-328109-m03" has status "Ready":"False"
	I0318 12:46:10.393902 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:10.393925 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:10.393933 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:10.393939 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:10.398063 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:10.894302 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:10.894330 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:10.894341 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:10.894348 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:10.899069 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:11.393653 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:11.393682 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:11.393692 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:11.393697 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:11.405400 1125718 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0318 12:46:11.894092 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:11.894120 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:11.894132 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:11.894139 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:11.898751 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:11.899585 1125718 node_ready.go:53] node "ha-328109-m03" has status "Ready":"False"
	I0318 12:46:12.394474 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:12.394505 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:12.394518 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:12.394522 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:12.398293 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:12.894092 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:12.895791 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:12.895807 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:12.895813 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:12.900385 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:13.394215 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:13.394242 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:13.394252 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:13.394257 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:13.398059 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:13.893948 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:13.893976 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:13.893988 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:13.893994 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:13.898230 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:14.394458 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:14.394486 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:14.394495 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:14.394499 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:14.398713 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:14.400120 1125718 node_ready.go:53] node "ha-328109-m03" has status "Ready":"False"
	I0318 12:46:14.894544 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:14.894573 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:14.894586 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:14.894596 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:14.898948 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:15.394456 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:15.394483 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:15.394491 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:15.394495 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:15.398152 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:15.894240 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:15.894265 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:15.894273 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:15.894279 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:15.898224 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:16.394267 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:16.394293 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:16.394305 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:16.394312 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:16.398395 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:16.893788 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:16.893811 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:16.893819 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:16.893823 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:16.897947 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:16.898594 1125718 node_ready.go:53] node "ha-328109-m03" has status "Ready":"False"
	I0318 12:46:17.393570 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:17.393596 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:17.393608 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:17.393614 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:17.398598 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:17.894271 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:17.895991 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:17.896007 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:17.896012 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:17.900447 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:18.394084 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:18.394116 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:18.394125 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:18.394131 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:18.398039 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:18.894059 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:18.894083 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:18.894091 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:18.894096 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:18.898700 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:18.899363 1125718 node_ready.go:53] node "ha-328109-m03" has status "Ready":"False"
	I0318 12:46:19.393728 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:19.393752 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:19.393761 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:19.393765 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:19.397222 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:19.894334 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:19.894357 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:19.894363 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:19.894368 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:19.898082 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:20.393856 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:20.393886 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:20.393897 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:20.393902 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:20.397943 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:20.894483 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:20.894507 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:20.894515 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:20.894520 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:20.898814 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:20.899704 1125718 node_ready.go:53] node "ha-328109-m03" has status "Ready":"False"
	I0318 12:46:21.394169 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:21.394202 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:21.394223 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:21.394230 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:21.397956 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:21.893651 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:21.893677 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:21.893694 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:21.893698 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:21.898640 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:22.394576 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:22.394601 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:22.394608 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:22.394613 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:22.398389 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:22.894094 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:22.896048 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:22.896066 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:22.896071 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:22.900319 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:22.901080 1125718 node_ready.go:53] node "ha-328109-m03" has status "Ready":"False"
	I0318 12:46:23.393859 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:23.393884 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:23.393891 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:23.393895 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:23.397675 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:23.893653 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:23.893677 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:23.893686 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:23.893691 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:23.897607 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:24.393577 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:24.393603 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:24.393613 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:24.393617 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:24.397739 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:24.894603 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:24.894630 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:24.894642 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:24.894648 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:24.902724 1125718 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0318 12:46:24.903590 1125718 node_ready.go:53] node "ha-328109-m03" has status "Ready":"False"
	I0318 12:46:25.393878 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:25.393900 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:25.393909 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:25.393915 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:25.397677 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:25.893587 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:25.893611 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:25.893620 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:25.893624 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:25.899888 1125718 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 12:46:26.394601 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:26.394631 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:26.394642 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:26.394646 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:26.398580 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:26.893547 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:26.893575 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:26.893588 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:26.893595 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:26.897036 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:27.394252 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:27.394276 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:27.394285 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:27.394290 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:27.397689 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:27.398510 1125718 node_ready.go:53] node "ha-328109-m03" has status "Ready":"False"
	I0318 12:46:27.894262 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:27.895985 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:27.896001 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:27.896006 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:27.900414 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:27.901027 1125718 node_ready.go:49] node "ha-328109-m03" has status "Ready":"True"
	I0318 12:46:27.901046 1125718 node_ready.go:38] duration metric: took 36.007666077s for node "ha-328109-m03" to be "Ready" ...
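The loop above issues GET /api/v1/nodes/ha-328109-m03 roughly every 500ms until the node reports Ready, which took about 36s in this run. Below is a minimal client-go sketch of that kind of wait, for illustration only; it is not minikube's node_ready implementation, and the kubeconfig path and node name are assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path, for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodeName := "ha-328109-m03"
	// Poll every 500ms, give up after 6 minutes, mirroring the cadence seen in the log.
	err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat errors as transient and keep polling
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Printf("node %q is Ready\n", nodeName)
}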
	I0318 12:46:27.901056 1125718 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 12:46:27.901124 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods
	I0318 12:46:27.901136 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:27.901143 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:27.901146 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:27.912020 1125718 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0318 12:46:27.919703 1125718 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-c78nc" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:27.919796 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-c78nc
	I0318 12:46:27.919808 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:27.919815 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:27.919820 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:27.924016 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:27.924699 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109
	I0318 12:46:27.924718 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:27.924729 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:27.924737 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:27.928045 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:27.928631 1125718 pod_ready.go:92] pod "coredns-5dd5756b68-c78nc" in "kube-system" namespace has status "Ready":"True"
	I0318 12:46:27.928651 1125718 pod_ready.go:81] duration metric: took 8.921172ms for pod "coredns-5dd5756b68-c78nc" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:27.928665 1125718 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-p5xgj" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:27.928725 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-p5xgj
	I0318 12:46:27.928736 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:27.928747 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:27.928757 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:27.932967 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:27.933505 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109
	I0318 12:46:27.933518 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:27.933524 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:27.933528 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:27.936811 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:27.937297 1125718 pod_ready.go:92] pod "coredns-5dd5756b68-p5xgj" in "kube-system" namespace has status "Ready":"True"
	I0318 12:46:27.937316 1125718 pod_ready.go:81] duration metric: took 8.643983ms for pod "coredns-5dd5756b68-p5xgj" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:27.937329 1125718 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-328109" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:27.937387 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/etcd-ha-328109
	I0318 12:46:27.937398 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:27.937408 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:27.937415 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:27.940164 1125718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:46:27.940975 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109
	I0318 12:46:27.940991 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:27.940998 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:27.941002 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:27.943543 1125718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:46:27.944096 1125718 pod_ready.go:92] pod "etcd-ha-328109" in "kube-system" namespace has status "Ready":"True"
	I0318 12:46:27.944112 1125718 pod_ready.go:81] duration metric: took 6.777315ms for pod "etcd-ha-328109" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:27.944120 1125718 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-328109-m02" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:27.944174 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/etcd-ha-328109-m02
	I0318 12:46:27.944184 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:27.944190 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:27.944194 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:27.946826 1125718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:46:27.947314 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:46:27.947331 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:27.947340 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:27.947346 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:27.951688 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:27.952268 1125718 pod_ready.go:92] pod "etcd-ha-328109-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 12:46:27.952283 1125718 pod_ready.go:81] duration metric: took 8.158107ms for pod "etcd-ha-328109-m02" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:27.952290 1125718 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-328109-m03" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:28.094895 1125718 request.go:629] Waited for 142.51346ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/etcd-ha-328109-m03
	I0318 12:46:28.094962 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/etcd-ha-328109-m03
	I0318 12:46:28.094967 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:28.094975 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:28.094979 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:28.098905 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:28.294764 1125718 request.go:629] Waited for 195.314043ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:28.294825 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:28.294831 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:28.294839 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:28.294843 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:28.299332 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:28.299969 1125718 pod_ready.go:92] pod "etcd-ha-328109-m03" in "kube-system" namespace has status "Ready":"True"
	I0318 12:46:28.299988 1125718 pod_ready.go:81] duration metric: took 347.692389ms for pod "etcd-ha-328109-m03" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:28.300005 1125718 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-328109" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:28.495202 1125718 request.go:629] Waited for 195.124331ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-328109
	I0318 12:46:28.495289 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-328109
	I0318 12:46:28.495301 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:28.495311 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:28.495321 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:28.499972 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:28.695142 1125718 request.go:629] Waited for 194.350739ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/nodes/ha-328109
	I0318 12:46:28.695234 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109
	I0318 12:46:28.695243 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:28.695251 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:28.695256 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:28.699376 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:28.700086 1125718 pod_ready.go:92] pod "kube-apiserver-ha-328109" in "kube-system" namespace has status "Ready":"True"
	I0318 12:46:28.700109 1125718 pod_ready.go:81] duration metric: took 400.092781ms for pod "kube-apiserver-ha-328109" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:28.700120 1125718 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-328109-m02" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:28.895212 1125718 request.go:629] Waited for 195.001042ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-328109-m02
	I0318 12:46:28.895298 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-328109-m02
	I0318 12:46:28.895315 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:28.895331 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:28.895337 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:28.899477 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:29.094775 1125718 request.go:629] Waited for 194.368478ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:46:29.094834 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:46:29.094839 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:29.094847 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:29.094851 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:29.098849 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:29.099342 1125718 pod_ready.go:92] pod "kube-apiserver-ha-328109-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 12:46:29.099363 1125718 pod_ready.go:81] duration metric: took 399.232111ms for pod "kube-apiserver-ha-328109-m02" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:29.099377 1125718 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-328109-m03" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:29.294434 1125718 request.go:629] Waited for 194.941557ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-328109-m03
	I0318 12:46:29.294512 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-328109-m03
	I0318 12:46:29.294520 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:29.294529 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:29.294534 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:29.298462 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:29.494811 1125718 request.go:629] Waited for 195.366462ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:29.494877 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:29.494884 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:29.494895 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:29.494901 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:29.498913 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:29.499834 1125718 pod_ready.go:92] pod "kube-apiserver-ha-328109-m03" in "kube-system" namespace has status "Ready":"True"
	I0318 12:46:29.499862 1125718 pod_ready.go:81] duration metric: took 400.476064ms for pod "kube-apiserver-ha-328109-m03" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:29.499875 1125718 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-328109" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:29.695031 1125718 request.go:629] Waited for 195.062315ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-328109
	I0318 12:46:29.695124 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-328109
	I0318 12:46:29.695135 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:29.695146 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:29.695154 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:29.699023 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:29.895315 1125718 request.go:629] Waited for 195.40424ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/nodes/ha-328109
	I0318 12:46:29.895382 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109
	I0318 12:46:29.895388 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:29.895396 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:29.895400 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:29.899461 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:29.900374 1125718 pod_ready.go:92] pod "kube-controller-manager-ha-328109" in "kube-system" namespace has status "Ready":"True"
	I0318 12:46:29.900399 1125718 pod_ready.go:81] duration metric: took 400.516458ms for pod "kube-controller-manager-ha-328109" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:29.900409 1125718 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-328109-m02" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:30.094771 1125718 request.go:629] Waited for 194.261987ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-328109-m02
	I0318 12:46:30.094857 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-328109-m02
	I0318 12:46:30.094868 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:30.094879 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:30.094888 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:30.099027 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:30.295235 1125718 request.go:629] Waited for 195.36728ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:46:30.295307 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:46:30.295316 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:30.295332 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:30.295341 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:30.299497 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:30.300286 1125718 pod_ready.go:92] pod "kube-controller-manager-ha-328109-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 12:46:30.300306 1125718 pod_ready.go:81] duration metric: took 399.891002ms for pod "kube-controller-manager-ha-328109-m02" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:30.300317 1125718 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-328109-m03" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:30.495111 1125718 request.go:629] Waited for 194.708476ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-328109-m03
	I0318 12:46:30.495179 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-328109-m03
	I0318 12:46:30.495184 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:30.495192 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:30.495196 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:30.499703 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:30.694677 1125718 request.go:629] Waited for 194.395787ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:30.694767 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:30.694777 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:30.694785 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:30.694792 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:30.698494 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:30.699143 1125718 pod_ready.go:92] pod "kube-controller-manager-ha-328109-m03" in "kube-system" namespace has status "Ready":"True"
	I0318 12:46:30.699161 1125718 pod_ready.go:81] duration metric: took 398.835754ms for pod "kube-controller-manager-ha-328109-m03" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:30.699172 1125718 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7zgrx" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:30.894301 1125718 request.go:629] Waited for 195.051197ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7zgrx
	I0318 12:46:30.894396 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7zgrx
	I0318 12:46:30.894404 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:30.894416 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:30.894429 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:30.898413 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:31.094361 1125718 request.go:629] Waited for 195.290418ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:46:31.094447 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:46:31.094458 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:31.094493 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:31.094506 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:31.098720 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:31.099202 1125718 pod_ready.go:92] pod "kube-proxy-7zgrx" in "kube-system" namespace has status "Ready":"True"
	I0318 12:46:31.099224 1125718 pod_ready.go:81] duration metric: took 400.046238ms for pod "kube-proxy-7zgrx" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:31.099234 1125718 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dhz88" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:31.295307 1125718 request.go:629] Waited for 195.990215ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dhz88
	I0318 12:46:31.295389 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dhz88
	I0318 12:46:31.295397 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:31.295405 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:31.295412 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:31.299881 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:31.495320 1125718 request.go:629] Waited for 194.713319ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/nodes/ha-328109
	I0318 12:46:31.495409 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109
	I0318 12:46:31.495420 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:31.495432 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:31.495441 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:31.499776 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:31.500391 1125718 pod_ready.go:92] pod "kube-proxy-dhz88" in "kube-system" namespace has status "Ready":"True"
	I0318 12:46:31.500416 1125718 pod_ready.go:81] duration metric: took 401.173007ms for pod "kube-proxy-dhz88" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:31.500430 1125718 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zn8dk" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:31.694545 1125718 request.go:629] Waited for 194.035364ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zn8dk
	I0318 12:46:31.694641 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zn8dk
	I0318 12:46:31.694653 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:31.694666 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:31.694684 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:31.698181 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:31.894615 1125718 request.go:629] Waited for 195.398032ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:31.894681 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:31.894686 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:31.894693 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:31.894699 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:31.898750 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:31.899327 1125718 pod_ready.go:92] pod "kube-proxy-zn8dk" in "kube-system" namespace has status "Ready":"True"
	I0318 12:46:31.899348 1125718 pod_ready.go:81] duration metric: took 398.910077ms for pod "kube-proxy-zn8dk" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:31.899357 1125718 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-328109" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:32.094508 1125718 request.go:629] Waited for 195.052309ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-328109
	I0318 12:46:32.094581 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-328109
	I0318 12:46:32.094587 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:32.094599 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:32.094609 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:32.099402 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:32.294478 1125718 request.go:629] Waited for 194.277594ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/nodes/ha-328109
	I0318 12:46:32.294569 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109
	I0318 12:46:32.294576 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:32.294584 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:32.294588 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:32.298282 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:32.298724 1125718 pod_ready.go:92] pod "kube-scheduler-ha-328109" in "kube-system" namespace has status "Ready":"True"
	I0318 12:46:32.298742 1125718 pod_ready.go:81] duration metric: took 399.374733ms for pod "kube-scheduler-ha-328109" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:32.298753 1125718 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-328109-m02" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:32.494784 1125718 request.go:629] Waited for 195.934465ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-328109-m02
	I0318 12:46:32.494886 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-328109-m02
	I0318 12:46:32.494897 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:32.494911 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:32.494923 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:32.498685 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:32.694556 1125718 request.go:629] Waited for 195.083041ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:46:32.694630 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:46:32.694638 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:32.694650 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:32.694666 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:32.698773 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:32.699314 1125718 pod_ready.go:92] pod "kube-scheduler-ha-328109-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 12:46:32.699335 1125718 pod_ready.go:81] duration metric: took 400.576206ms for pod "kube-scheduler-ha-328109-m02" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:32.699345 1125718 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-328109-m03" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:32.894300 1125718 request.go:629] Waited for 194.866034ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-328109-m03
	I0318 12:46:32.896441 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-328109-m03
	I0318 12:46:32.896457 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:32.896468 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:32.896477 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:32.900486 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:33.094997 1125718 request.go:629] Waited for 193.426779ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:33.095085 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:33.095104 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:33.095119 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:33.095140 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:33.099461 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:33.100162 1125718 pod_ready.go:92] pod "kube-scheduler-ha-328109-m03" in "kube-system" namespace has status "Ready":"True"
	I0318 12:46:33.100187 1125718 pod_ready.go:81] duration metric: took 400.831673ms for pod "kube-scheduler-ha-328109-m03" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:33.100204 1125718 pod_ready.go:38] duration metric: took 5.199137291s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
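Many requests in the block above are delayed with "Waited for ... due to client-side throttling, not priority and fairness": client-go's default rate limiter (5 QPS with a burst of 10) spaces out the burst of per-pod and per-node GETs. The sketch below shows how those limits can be raised on a rest.Config; the values and the kubeconfig path are illustrative assumptions, not what minikube uses.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path, for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	// Raise the client-side rate limit so a tight polling loop is not delayed
	// by the default limiter. Example values only.
	cfg.QPS = 50
	cfg.Burst = 100

	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("kube-system has %d pods\n", len(pods.Items))
}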
	I0318 12:46:33.100234 1125718 api_server.go:52] waiting for apiserver process to appear ...
	I0318 12:46:33.100304 1125718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 12:46:33.123259 1125718 api_server.go:72] duration metric: took 41.494021932s to wait for apiserver process to appear ...
	I0318 12:46:33.123287 1125718 api_server.go:88] waiting for apiserver healthz status ...
	I0318 12:46:33.123313 1125718 api_server.go:253] Checking apiserver healthz at https://192.168.39.253:8443/healthz ...
	I0318 12:46:33.129660 1125718 api_server.go:279] https://192.168.39.253:8443/healthz returned 200:
	ok
	I0318 12:46:33.129740 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/version
	I0318 12:46:33.129750 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:33.129761 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:33.129769 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:33.137451 1125718 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 12:46:33.137552 1125718 api_server.go:141] control plane version: v1.28.4
	I0318 12:46:33.137575 1125718 api_server.go:131] duration metric: took 14.279559ms to wait for apiserver health ...
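The apiserver check above first looks for the kube-apiserver process on the VM, then hits /healthz and /version over HTTPS (control plane v1.28.4 in this run). A minimal client-go sketch of the same two HTTP checks follows; the kubeconfig path is an assumption.

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// GET /healthz, the same endpoint the log polls above; the body should be "ok".
	body, err := client.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)

	// GET /version, reporting the control-plane version (v1.28.4 in this run).
	v, err := client.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Printf("control plane version: %s\n", v.GitVersion)
}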
	I0318 12:46:33.137586 1125718 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 12:46:33.294978 1125718 request.go:629] Waited for 157.313775ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods
	I0318 12:46:33.295062 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods
	I0318 12:46:33.295068 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:33.295083 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:33.295094 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:33.302686 1125718 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 12:46:33.309075 1125718 system_pods.go:59] 24 kube-system pods found
	I0318 12:46:33.309104 1125718 system_pods.go:61] "coredns-5dd5756b68-c78nc" [7c1159dc-6545-41a6-bb4a-75fdab519c9e] Running
	I0318 12:46:33.309109 1125718 system_pods.go:61] "coredns-5dd5756b68-p5xgj" [9a865f86-96cf-4687-9283-d2ebe5616d1a] Running
	I0318 12:46:33.309113 1125718 system_pods.go:61] "etcd-ha-328109" [46530523-a048-4fff-897d-1a59630b5533] Running
	I0318 12:46:33.309116 1125718 system_pods.go:61] "etcd-ha-328109-m02" [0ed8ba4d-7da4-4c6c-b545-5e8642214659] Running
	I0318 12:46:33.309120 1125718 system_pods.go:61] "etcd-ha-328109-m03" [56631b93-b509-45de-9ee0-d1b9676f52fe] Running
	I0318 12:46:33.309123 1125718 system_pods.go:61] "kindnet-lc74t" [5fe4e41e-4ddd-4e39-b1e2-746a32489418] Running
	I0318 12:46:33.309125 1125718 system_pods.go:61] "kindnet-t2pkv" [d848dd56-4ea1-472a-b378-21e36c834f81] Running
	I0318 12:46:33.309128 1125718 system_pods.go:61] "kindnet-vnv5b" [fc2583b6-a5b3-4f53-bf54-6cc7611fc2a6] Running
	I0318 12:46:33.309135 1125718 system_pods.go:61] "kube-apiserver-ha-328109" [47b1b8fb-21f6-43d7-a607-4406dfec10b7] Running
	I0318 12:46:33.309139 1125718 system_pods.go:61] "kube-apiserver-ha-328109-m02" [fcd48f5d-2278-49f3-b4f0-0cad9ae74dc7] Running
	I0318 12:46:33.309144 1125718 system_pods.go:61] "kube-apiserver-ha-328109-m03" [ad5b3068-7d65-4897-a31e-b0cb094d2678] Running
	I0318 12:46:33.309149 1125718 system_pods.go:61] "kube-controller-manager-ha-328109" [ffef70fe-841f-41c7-a61b-bb205ce2c071] Running
	I0318 12:46:33.309156 1125718 system_pods.go:61] "kube-controller-manager-ha-328109-m02" [a5ecf731-7599-44e9-b20d-924bde2de123] Running
	I0318 12:46:33.309169 1125718 system_pods.go:61] "kube-controller-manager-ha-328109-m03" [338747b6-dae1-4cfa-9e28-1892c2d39b86] Running
	I0318 12:46:33.309174 1125718 system_pods.go:61] "kube-proxy-7zgrx" [6244fa40-af4d-480b-9256-db89d78b1d74] Running
	I0318 12:46:33.309178 1125718 system_pods.go:61] "kube-proxy-dhz88" [afb0afad-2b88-4abb-9039-aaf9c64ad920] Running
	I0318 12:46:33.309183 1125718 system_pods.go:61] "kube-proxy-zn8dk" [16d8de0d-3270-4989-b77d-c15f6206b4d4] Running
	I0318 12:46:33.309192 1125718 system_pods.go:61] "kube-scheduler-ha-328109" [a32fb0b4-2621-47dd-bb05-abb2e4cf928e] Running
	I0318 12:46:33.309197 1125718 system_pods.go:61] "kube-scheduler-ha-328109-m02" [14246dc3-5f5f-4d43-954c-5959db738742] Running
	I0318 12:46:33.309203 1125718 system_pods.go:61] "kube-scheduler-ha-328109-m03" [de782d6a-c138-4f4e-b52b-e06ca1eb0735] Running
	I0318 12:46:33.309206 1125718 system_pods.go:61] "kube-vip-ha-328109" [40c45da5-33e0-454b-8f4c-eca1d1ec3362] Running
	I0318 12:46:33.309209 1125718 system_pods.go:61] "kube-vip-ha-328109-m02" [0c0dc71f-79d7-48f0-8a4a-4480521e5705] Running
	I0318 12:46:33.309212 1125718 system_pods.go:61] "kube-vip-ha-328109-m03" [98e75a0b-1e8b-481e-8eea-34b26ed1d38c] Running
	I0318 12:46:33.309216 1125718 system_pods.go:61] "storage-provisioner" [90ce7ae6-4ac4-4c14-b2df-1a182f4d8086] Running
	I0318 12:46:33.309224 1125718 system_pods.go:74] duration metric: took 171.628679ms to wait for pod list to return data ...
	I0318 12:46:33.309234 1125718 default_sa.go:34] waiting for default service account to be created ...
	I0318 12:46:33.494702 1125718 request.go:629] Waited for 185.332478ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/namespaces/default/serviceaccounts
	I0318 12:46:33.494788 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/default/serviceaccounts
	I0318 12:46:33.494796 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:33.494806 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:33.494817 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:33.498593 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:33.498723 1125718 default_sa.go:45] found service account: "default"
	I0318 12:46:33.498741 1125718 default_sa.go:55] duration metric: took 189.497941ms for default service account to be created ...
	I0318 12:46:33.498750 1125718 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 12:46:33.695198 1125718 request.go:629] Waited for 196.376373ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods
	I0318 12:46:33.695267 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods
	I0318 12:46:33.695272 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:33.695280 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:33.695286 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:33.703736 1125718 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0318 12:46:33.710666 1125718 system_pods.go:86] 24 kube-system pods found
	I0318 12:46:33.710697 1125718 system_pods.go:89] "coredns-5dd5756b68-c78nc" [7c1159dc-6545-41a6-bb4a-75fdab519c9e] Running
	I0318 12:46:33.710706 1125718 system_pods.go:89] "coredns-5dd5756b68-p5xgj" [9a865f86-96cf-4687-9283-d2ebe5616d1a] Running
	I0318 12:46:33.710710 1125718 system_pods.go:89] "etcd-ha-328109" [46530523-a048-4fff-897d-1a59630b5533] Running
	I0318 12:46:33.710714 1125718 system_pods.go:89] "etcd-ha-328109-m02" [0ed8ba4d-7da4-4c6c-b545-5e8642214659] Running
	I0318 12:46:33.710718 1125718 system_pods.go:89] "etcd-ha-328109-m03" [56631b93-b509-45de-9ee0-d1b9676f52fe] Running
	I0318 12:46:33.710722 1125718 system_pods.go:89] "kindnet-lc74t" [5fe4e41e-4ddd-4e39-b1e2-746a32489418] Running
	I0318 12:46:33.710726 1125718 system_pods.go:89] "kindnet-t2pkv" [d848dd56-4ea1-472a-b378-21e36c834f81] Running
	I0318 12:46:33.710730 1125718 system_pods.go:89] "kindnet-vnv5b" [fc2583b6-a5b3-4f53-bf54-6cc7611fc2a6] Running
	I0318 12:46:33.710734 1125718 system_pods.go:89] "kube-apiserver-ha-328109" [47b1b8fb-21f6-43d7-a607-4406dfec10b7] Running
	I0318 12:46:33.710739 1125718 system_pods.go:89] "kube-apiserver-ha-328109-m02" [fcd48f5d-2278-49f3-b4f0-0cad9ae74dc7] Running
	I0318 12:46:33.710745 1125718 system_pods.go:89] "kube-apiserver-ha-328109-m03" [ad5b3068-7d65-4897-a31e-b0cb094d2678] Running
	I0318 12:46:33.710751 1125718 system_pods.go:89] "kube-controller-manager-ha-328109" [ffef70fe-841f-41c7-a61b-bb205ce2c071] Running
	I0318 12:46:33.710758 1125718 system_pods.go:89] "kube-controller-manager-ha-328109-m02" [a5ecf731-7599-44e9-b20d-924bde2de123] Running
	I0318 12:46:33.710772 1125718 system_pods.go:89] "kube-controller-manager-ha-328109-m03" [338747b6-dae1-4cfa-9e28-1892c2d39b86] Running
	I0318 12:46:33.710778 1125718 system_pods.go:89] "kube-proxy-7zgrx" [6244fa40-af4d-480b-9256-db89d78b1d74] Running
	I0318 12:46:33.710787 1125718 system_pods.go:89] "kube-proxy-dhz88" [afb0afad-2b88-4abb-9039-aaf9c64ad920] Running
	I0318 12:46:33.710791 1125718 system_pods.go:89] "kube-proxy-zn8dk" [16d8de0d-3270-4989-b77d-c15f6206b4d4] Running
	I0318 12:46:33.710795 1125718 system_pods.go:89] "kube-scheduler-ha-328109" [a32fb0b4-2621-47dd-bb05-abb2e4cf928e] Running
	I0318 12:46:33.710799 1125718 system_pods.go:89] "kube-scheduler-ha-328109-m02" [14246dc3-5f5f-4d43-954c-5959db738742] Running
	I0318 12:46:33.710803 1125718 system_pods.go:89] "kube-scheduler-ha-328109-m03" [de782d6a-c138-4f4e-b52b-e06ca1eb0735] Running
	I0318 12:46:33.710808 1125718 system_pods.go:89] "kube-vip-ha-328109" [40c45da5-33e0-454b-8f4c-eca1d1ec3362] Running
	I0318 12:46:33.710814 1125718 system_pods.go:89] "kube-vip-ha-328109-m02" [0c0dc71f-79d7-48f0-8a4a-4480521e5705] Running
	I0318 12:46:33.710818 1125718 system_pods.go:89] "kube-vip-ha-328109-m03" [98e75a0b-1e8b-481e-8eea-34b26ed1d38c] Running
	I0318 12:46:33.710821 1125718 system_pods.go:89] "storage-provisioner" [90ce7ae6-4ac4-4c14-b2df-1a182f4d8086] Running
	I0318 12:46:33.710828 1125718 system_pods.go:126] duration metric: took 212.070029ms to wait for k8s-apps to be running ...
	I0318 12:46:33.710838 1125718 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 12:46:33.710895 1125718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 12:46:33.735959 1125718 system_svc.go:56] duration metric: took 25.107366ms WaitForService to wait for kubelet
	I0318 12:46:33.735996 1125718 kubeadm.go:576] duration metric: took 42.106764853s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
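The kubelet check above is a remote "sudo systemctl is-active --quiet service kubelet" executed through minikube's ssh_runner on the guest VM. A local-machine equivalent in Go, for illustration only; it runs plain systemctl rather than going over SSH.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet kubelet` exits 0 only when the unit is active.
	cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}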
	I0318 12:46:33.736026 1125718 node_conditions.go:102] verifying NodePressure condition ...
	I0318 12:46:33.894369 1125718 request.go:629] Waited for 158.246653ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/nodes
	I0318 12:46:33.894428 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes
	I0318 12:46:33.894433 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:33.894442 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:33.894446 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:33.898524 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:33.900095 1125718 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 12:46:33.900119 1125718 node_conditions.go:123] node cpu capacity is 2
	I0318 12:46:33.900134 1125718 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 12:46:33.900140 1125718 node_conditions.go:123] node cpu capacity is 2
	I0318 12:46:33.900146 1125718 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 12:46:33.900155 1125718 node_conditions.go:123] node cpu capacity is 2
	I0318 12:46:33.900161 1125718 node_conditions.go:105] duration metric: took 164.129313ms to run NodePressure ...
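The NodePressure step reads each node's reported capacity from GET /api/v1/nodes; all three nodes in this run report 2 CPUs and 17734596Ki of ephemeral storage. A small client-go sketch that prints the same figures; the kubeconfig path is an assumption.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity carries the cpu and ephemeral-storage quantities printed in the log above.
		cpu := n.Status.Capacity.Cpu()
		storage := n.Status.Capacity.StorageEphemeral()
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}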
	I0318 12:46:33.900183 1125718 start.go:240] waiting for startup goroutines ...
	I0318 12:46:33.900240 1125718 start.go:254] writing updated cluster config ...
	I0318 12:46:33.900608 1125718 ssh_runner.go:195] Run: rm -f paused
	I0318 12:46:33.955240 1125718 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 12:46:33.957847 1125718 out.go:177] * Done! kubectl is now configured to use "ha-328109" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Mar 18 12:50:07 ha-328109 crio[681]: time="2024-03-18 12:50:07.797368382Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710766207797343614,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b0870b2e-6172-4eba-a6df-b934d4efe227 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 12:50:07 ha-328109 crio[681]: time="2024-03-18 12:50:07.798503890Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=78300ae9-e9fe-4cc9-b3a8-3d79c15d06e0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:50:07 ha-328109 crio[681]: time="2024-03-18 12:50:07.798558750Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=78300ae9-e9fe-4cc9-b3a8-3d79c15d06e0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:50:07 ha-328109 crio[681]: time="2024-03-18 12:50:07.798838791Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c5b3318798546b55a1e7fe3618fe7848b8cb4108312aa2f5354c7dbdc9103e72,PodSandboxId:10b35c5d18ac59942090a6917bedf01b1f31744cd5f0a3d39949835bf6108d5a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710765998607402140,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-fz4kl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5a0215bb-df62-44b9-9d60-d45778880b8b,},Annotations:map[string]string{io.kubernetes.container.hash: 25c17d37,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b630b0fc05d4dd89718593f42880e41e071014b4d0f87791cba4fbf8cbe8785,PodSandboxId:2f84d6cd36a0e19e1f074479696d192558cade4f5b4267d45bfa78281643ee69,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710765878638455375,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9ffc89cd42ea8da4e6070b43e0ace35,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:742842736e1b52735c8a18b3d61ed7ee1d6157f2ca03ec317995f36597c45ac6,PodSandboxId:d0fc4bb142f1e67adc1acb0fd05ed7615c6e71bf4d9c199240d1b14c7e506c6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710765818180996776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90ce7ae6-4ac4-4c14-b2df-1a182f4d8086,},Annotations:map[string]string{io.kubernetes.container.hash: ed6ee57,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82a8d2ac6a60c0d04e48a38416de7feb33d590cfcd74d28da2317aa1a5781135,PodSandboxId:b487ae421169c8afbdd3c57cd6781dfee8b050a5ec9476b5eb7d8d46c81511c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710765818122582469,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-p5xgj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a865f86-96cf-4687-9283-d2ebe5616d1a,},Annotations:map[string]string{io.kubernetes.container.hash: b948acd7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\
"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c5cd4a724230c91f476a1bb5326701801eff1b70dc4db0510f092d89ea1562,PodSandboxId:16503713d19863c7d11d4a566e3591316bf9bb87017c1247be871b73cd241150,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710765818091864321,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c78nc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c1159dc-6545-41a6-bb4a-75fdab519c9e,},Annotations
:map[string]string{io.kubernetes.container.hash: 5111e8b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f41509d172d09cc4eecee3746dfb8ee1d320fc1c3797ddb1d709f61a48d8c377,PodSandboxId:de3686de3774df02e905a63c9a2f6c340478fd958e65a20db5acf3d838e7c03d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710765816486
274205,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnv5b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc2583b6-a5b3-4f53-bf54-6cc7611fc2a6,},Annotations:map[string]string{io.kubernetes.container.hash: 9aa5dbe1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8d915a384e6a3c259a15968303b0ddc686a9ced49722152813fc101b3c78cc6,PodSandboxId:35275a602be1c60babb8ca88eca935f3264c955bbf0347e589ea368f3036d635,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710765812830557764,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dhz88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afb0afad-2b88-4abb-9039-aaf9c64ad920,},Annotations:map[string]string{io.kubernetes.container.hash: 34178776,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d088b41ebc7b89cfc02aea70859e94e5a45b788a9c73a939733131ae29c4462,PodSandboxId:2f84d6cd36a0e19e1f074479696d192558cade4f5b4267d45bfa78281643ee69,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710765794897477578,Labels:map[string]string{io.kub
ernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9ffc89cd42ea8da4e6070b43e0ace35,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55e393cf77a1b472d984125ae3bd870d3fed9dca4eeefc346bda04ae88654205,PodSandboxId:8231d33571b5e6a87638a5647fcc9e70ced44830421377dda3555afca480b302,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710765791394253878,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.
pod.name: etcd-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e31f7b77f2cd8547e7aa12e86f29a80,},Annotations:map[string]string{io.kubernetes.container.hash: a6edf2fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de552ed42d49524bbca97633e73d6ac4e5301a813a012290635def375a78dcd6,PodSandboxId:8cfa0459c6e2ae66756a8424cb981cdb5680680fc5907eba1b8d83cfdd1a7280,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710765791364211319,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-328
109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90e740be10e7ccb198e1e310b9749e68,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a10929bb9737267586a458e8f8aac60622ae3a299b6b542776e59e2b12e4ffef,PodSandboxId:0d6a5a565490ec7bc679e6f77a039f680f53470f17b0cc60629e1ea627d8141e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710765791361646729,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-
ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f004a20401b95f693a90cc8d0b7e8acc,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e2150d8010e2a1399f1df83c9dba81c77d606e55e0c21b18da231e82e01413a,PodSandboxId:a0c6f1dda955fa31cf1b04ce5ce4401c9c2bfef118b3bbaea519a53ffc2f3257,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710765791299905420,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-328109,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 0919befc6ed870de46dfd820b38f0ac2,},Annotations:map[string]string{io.kubernetes.container.hash: 110d18ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=78300ae9-e9fe-4cc9-b3a8-3d79c15d06e0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:50:07 ha-328109 crio[681]: time="2024-03-18 12:50:07.844997661Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4966cf26-0f36-455d-8049-fbc57d38ebea name=/runtime.v1.RuntimeService/Version
	Mar 18 12:50:07 ha-328109 crio[681]: time="2024-03-18 12:50:07.845169396Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4966cf26-0f36-455d-8049-fbc57d38ebea name=/runtime.v1.RuntimeService/Version
	Mar 18 12:50:07 ha-328109 crio[681]: time="2024-03-18 12:50:07.848841346Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=459d14c4-19a5-4427-8ccd-ed0ce5cf65af name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 12:50:07 ha-328109 crio[681]: time="2024-03-18 12:50:07.849551296Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710766207849528198,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=459d14c4-19a5-4427-8ccd-ed0ce5cf65af name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 12:50:07 ha-328109 crio[681]: time="2024-03-18 12:50:07.850285386Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d2dea2ed-bafe-4b54-95cf-6d8644aa1cd3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:50:07 ha-328109 crio[681]: time="2024-03-18 12:50:07.850374895Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d2dea2ed-bafe-4b54-95cf-6d8644aa1cd3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:50:07 ha-328109 crio[681]: time="2024-03-18 12:50:07.850881305Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c5b3318798546b55a1e7fe3618fe7848b8cb4108312aa2f5354c7dbdc9103e72,PodSandboxId:10b35c5d18ac59942090a6917bedf01b1f31744cd5f0a3d39949835bf6108d5a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710765998607402140,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-fz4kl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5a0215bb-df62-44b9-9d60-d45778880b8b,},Annotations:map[string]string{io.kubernetes.container.hash: 25c17d37,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b630b0fc05d4dd89718593f42880e41e071014b4d0f87791cba4fbf8cbe8785,PodSandboxId:2f84d6cd36a0e19e1f074479696d192558cade4f5b4267d45bfa78281643ee69,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710765878638455375,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9ffc89cd42ea8da4e6070b43e0ace35,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:742842736e1b52735c8a18b3d61ed7ee1d6157f2ca03ec317995f36597c45ac6,PodSandboxId:d0fc4bb142f1e67adc1acb0fd05ed7615c6e71bf4d9c199240d1b14c7e506c6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710765818180996776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90ce7ae6-4ac4-4c14-b2df-1a182f4d8086,},Annotations:map[string]string{io.kubernetes.container.hash: ed6ee57,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82a8d2ac6a60c0d04e48a38416de7feb33d590cfcd74d28da2317aa1a5781135,PodSandboxId:b487ae421169c8afbdd3c57cd6781dfee8b050a5ec9476b5eb7d8d46c81511c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710765818122582469,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-p5xgj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a865f86-96cf-4687-9283-d2ebe5616d1a,},Annotations:map[string]string{io.kubernetes.container.hash: b948acd7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\
"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c5cd4a724230c91f476a1bb5326701801eff1b70dc4db0510f092d89ea1562,PodSandboxId:16503713d19863c7d11d4a566e3591316bf9bb87017c1247be871b73cd241150,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710765818091864321,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c78nc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c1159dc-6545-41a6-bb4a-75fdab519c9e,},Annotations
:map[string]string{io.kubernetes.container.hash: 5111e8b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f41509d172d09cc4eecee3746dfb8ee1d320fc1c3797ddb1d709f61a48d8c377,PodSandboxId:de3686de3774df02e905a63c9a2f6c340478fd958e65a20db5acf3d838e7c03d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710765816486
274205,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnv5b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc2583b6-a5b3-4f53-bf54-6cc7611fc2a6,},Annotations:map[string]string{io.kubernetes.container.hash: 9aa5dbe1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8d915a384e6a3c259a15968303b0ddc686a9ced49722152813fc101b3c78cc6,PodSandboxId:35275a602be1c60babb8ca88eca935f3264c955bbf0347e589ea368f3036d635,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710765812830557764,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dhz88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afb0afad-2b88-4abb-9039-aaf9c64ad920,},Annotations:map[string]string{io.kubernetes.container.hash: 34178776,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d088b41ebc7b89cfc02aea70859e94e5a45b788a9c73a939733131ae29c4462,PodSandboxId:2f84d6cd36a0e19e1f074479696d192558cade4f5b4267d45bfa78281643ee69,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710765794897477578,Labels:map[string]string{io.kub
ernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9ffc89cd42ea8da4e6070b43e0ace35,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55e393cf77a1b472d984125ae3bd870d3fed9dca4eeefc346bda04ae88654205,PodSandboxId:8231d33571b5e6a87638a5647fcc9e70ced44830421377dda3555afca480b302,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710765791394253878,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.
pod.name: etcd-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e31f7b77f2cd8547e7aa12e86f29a80,},Annotations:map[string]string{io.kubernetes.container.hash: a6edf2fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de552ed42d49524bbca97633e73d6ac4e5301a813a012290635def375a78dcd6,PodSandboxId:8cfa0459c6e2ae66756a8424cb981cdb5680680fc5907eba1b8d83cfdd1a7280,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710765791364211319,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-328
109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90e740be10e7ccb198e1e310b9749e68,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a10929bb9737267586a458e8f8aac60622ae3a299b6b542776e59e2b12e4ffef,PodSandboxId:0d6a5a565490ec7bc679e6f77a039f680f53470f17b0cc60629e1ea627d8141e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710765791361646729,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-
ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f004a20401b95f693a90cc8d0b7e8acc,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e2150d8010e2a1399f1df83c9dba81c77d606e55e0c21b18da231e82e01413a,PodSandboxId:a0c6f1dda955fa31cf1b04ce5ce4401c9c2bfef118b3bbaea519a53ffc2f3257,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710765791299905420,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-328109,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 0919befc6ed870de46dfd820b38f0ac2,},Annotations:map[string]string{io.kubernetes.container.hash: 110d18ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d2dea2ed-bafe-4b54-95cf-6d8644aa1cd3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:50:07 ha-328109 crio[681]: time="2024-03-18 12:50:07.897648783Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9ed524db-2c15-4055-8148-23d4a4eb965c name=/runtime.v1.RuntimeService/Version
	Mar 18 12:50:07 ha-328109 crio[681]: time="2024-03-18 12:50:07.897733382Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9ed524db-2c15-4055-8148-23d4a4eb965c name=/runtime.v1.RuntimeService/Version
	Mar 18 12:50:07 ha-328109 crio[681]: time="2024-03-18 12:50:07.902685685Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cce35f7d-ef91-4bc0-9172-6cbbd65cfe5d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 12:50:07 ha-328109 crio[681]: time="2024-03-18 12:50:07.903238244Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710766207903215987,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cce35f7d-ef91-4bc0-9172-6cbbd65cfe5d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 12:50:07 ha-328109 crio[681]: time="2024-03-18 12:50:07.903987249Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dc0a9d76-6c12-4328-803f-2af1f2bfeebb name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:50:07 ha-328109 crio[681]: time="2024-03-18 12:50:07.904287891Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dc0a9d76-6c12-4328-803f-2af1f2bfeebb name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:50:07 ha-328109 crio[681]: time="2024-03-18 12:50:07.905028392Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c5b3318798546b55a1e7fe3618fe7848b8cb4108312aa2f5354c7dbdc9103e72,PodSandboxId:10b35c5d18ac59942090a6917bedf01b1f31744cd5f0a3d39949835bf6108d5a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710765998607402140,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-fz4kl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5a0215bb-df62-44b9-9d60-d45778880b8b,},Annotations:map[string]string{io.kubernetes.container.hash: 25c17d37,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b630b0fc05d4dd89718593f42880e41e071014b4d0f87791cba4fbf8cbe8785,PodSandboxId:2f84d6cd36a0e19e1f074479696d192558cade4f5b4267d45bfa78281643ee69,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710765878638455375,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9ffc89cd42ea8da4e6070b43e0ace35,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:742842736e1b52735c8a18b3d61ed7ee1d6157f2ca03ec317995f36597c45ac6,PodSandboxId:d0fc4bb142f1e67adc1acb0fd05ed7615c6e71bf4d9c199240d1b14c7e506c6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710765818180996776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90ce7ae6-4ac4-4c14-b2df-1a182f4d8086,},Annotations:map[string]string{io.kubernetes.container.hash: ed6ee57,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82a8d2ac6a60c0d04e48a38416de7feb33d590cfcd74d28da2317aa1a5781135,PodSandboxId:b487ae421169c8afbdd3c57cd6781dfee8b050a5ec9476b5eb7d8d46c81511c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710765818122582469,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-p5xgj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a865f86-96cf-4687-9283-d2ebe5616d1a,},Annotations:map[string]string{io.kubernetes.container.hash: b948acd7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\
"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c5cd4a724230c91f476a1bb5326701801eff1b70dc4db0510f092d89ea1562,PodSandboxId:16503713d19863c7d11d4a566e3591316bf9bb87017c1247be871b73cd241150,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710765818091864321,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c78nc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c1159dc-6545-41a6-bb4a-75fdab519c9e,},Annotations
:map[string]string{io.kubernetes.container.hash: 5111e8b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f41509d172d09cc4eecee3746dfb8ee1d320fc1c3797ddb1d709f61a48d8c377,PodSandboxId:de3686de3774df02e905a63c9a2f6c340478fd958e65a20db5acf3d838e7c03d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710765816486
274205,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnv5b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc2583b6-a5b3-4f53-bf54-6cc7611fc2a6,},Annotations:map[string]string{io.kubernetes.container.hash: 9aa5dbe1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8d915a384e6a3c259a15968303b0ddc686a9ced49722152813fc101b3c78cc6,PodSandboxId:35275a602be1c60babb8ca88eca935f3264c955bbf0347e589ea368f3036d635,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710765812830557764,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dhz88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afb0afad-2b88-4abb-9039-aaf9c64ad920,},Annotations:map[string]string{io.kubernetes.container.hash: 34178776,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d088b41ebc7b89cfc02aea70859e94e5a45b788a9c73a939733131ae29c4462,PodSandboxId:2f84d6cd36a0e19e1f074479696d192558cade4f5b4267d45bfa78281643ee69,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710765794897477578,Labels:map[string]string{io.kub
ernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9ffc89cd42ea8da4e6070b43e0ace35,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55e393cf77a1b472d984125ae3bd870d3fed9dca4eeefc346bda04ae88654205,PodSandboxId:8231d33571b5e6a87638a5647fcc9e70ced44830421377dda3555afca480b302,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710765791394253878,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.
pod.name: etcd-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e31f7b77f2cd8547e7aa12e86f29a80,},Annotations:map[string]string{io.kubernetes.container.hash: a6edf2fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de552ed42d49524bbca97633e73d6ac4e5301a813a012290635def375a78dcd6,PodSandboxId:8cfa0459c6e2ae66756a8424cb981cdb5680680fc5907eba1b8d83cfdd1a7280,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710765791364211319,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-328
109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90e740be10e7ccb198e1e310b9749e68,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a10929bb9737267586a458e8f8aac60622ae3a299b6b542776e59e2b12e4ffef,PodSandboxId:0d6a5a565490ec7bc679e6f77a039f680f53470f17b0cc60629e1ea627d8141e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710765791361646729,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-
ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f004a20401b95f693a90cc8d0b7e8acc,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e2150d8010e2a1399f1df83c9dba81c77d606e55e0c21b18da231e82e01413a,PodSandboxId:a0c6f1dda955fa31cf1b04ce5ce4401c9c2bfef118b3bbaea519a53ffc2f3257,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710765791299905420,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-328109,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 0919befc6ed870de46dfd820b38f0ac2,},Annotations:map[string]string{io.kubernetes.container.hash: 110d18ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dc0a9d76-6c12-4328-803f-2af1f2bfeebb name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:50:07 ha-328109 crio[681]: time="2024-03-18 12:50:07.952669960Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6596aacd-7660-4a9d-9d3b-8decd3e397d1 name=/runtime.v1.RuntimeService/Version
	Mar 18 12:50:07 ha-328109 crio[681]: time="2024-03-18 12:50:07.952752903Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6596aacd-7660-4a9d-9d3b-8decd3e397d1 name=/runtime.v1.RuntimeService/Version
	Mar 18 12:50:07 ha-328109 crio[681]: time="2024-03-18 12:50:07.959386476Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=955ddff7-1523-4e14-a9fa-e6a59e7b4886 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 12:50:07 ha-328109 crio[681]: time="2024-03-18 12:50:07.962841429Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710766207962754842,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=955ddff7-1523-4e14-a9fa-e6a59e7b4886 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 12:50:07 ha-328109 crio[681]: time="2024-03-18 12:50:07.964258343Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a0128278-94e7-4dad-b74c-52330cad1780 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:50:07 ha-328109 crio[681]: time="2024-03-18 12:50:07.964349688Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a0128278-94e7-4dad-b74c-52330cad1780 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:50:07 ha-328109 crio[681]: time="2024-03-18 12:50:07.964598591Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c5b3318798546b55a1e7fe3618fe7848b8cb4108312aa2f5354c7dbdc9103e72,PodSandboxId:10b35c5d18ac59942090a6917bedf01b1f31744cd5f0a3d39949835bf6108d5a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710765998607402140,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-fz4kl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5a0215bb-df62-44b9-9d60-d45778880b8b,},Annotations:map[string]string{io.kubernetes.container.hash: 25c17d37,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b630b0fc05d4dd89718593f42880e41e071014b4d0f87791cba4fbf8cbe8785,PodSandboxId:2f84d6cd36a0e19e1f074479696d192558cade4f5b4267d45bfa78281643ee69,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710765878638455375,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9ffc89cd42ea8da4e6070b43e0ace35,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:742842736e1b52735c8a18b3d61ed7ee1d6157f2ca03ec317995f36597c45ac6,PodSandboxId:d0fc4bb142f1e67adc1acb0fd05ed7615c6e71bf4d9c199240d1b14c7e506c6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710765818180996776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90ce7ae6-4ac4-4c14-b2df-1a182f4d8086,},Annotations:map[string]string{io.kubernetes.container.hash: ed6ee57,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82a8d2ac6a60c0d04e48a38416de7feb33d590cfcd74d28da2317aa1a5781135,PodSandboxId:b487ae421169c8afbdd3c57cd6781dfee8b050a5ec9476b5eb7d8d46c81511c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710765818122582469,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-p5xgj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a865f86-96cf-4687-9283-d2ebe5616d1a,},Annotations:map[string]string{io.kubernetes.container.hash: b948acd7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\
"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c5cd4a724230c91f476a1bb5326701801eff1b70dc4db0510f092d89ea1562,PodSandboxId:16503713d19863c7d11d4a566e3591316bf9bb87017c1247be871b73cd241150,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710765818091864321,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c78nc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c1159dc-6545-41a6-bb4a-75fdab519c9e,},Annotations
:map[string]string{io.kubernetes.container.hash: 5111e8b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f41509d172d09cc4eecee3746dfb8ee1d320fc1c3797ddb1d709f61a48d8c377,PodSandboxId:de3686de3774df02e905a63c9a2f6c340478fd958e65a20db5acf3d838e7c03d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710765816486
274205,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnv5b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc2583b6-a5b3-4f53-bf54-6cc7611fc2a6,},Annotations:map[string]string{io.kubernetes.container.hash: 9aa5dbe1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8d915a384e6a3c259a15968303b0ddc686a9ced49722152813fc101b3c78cc6,PodSandboxId:35275a602be1c60babb8ca88eca935f3264c955bbf0347e589ea368f3036d635,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710765812830557764,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dhz88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afb0afad-2b88-4abb-9039-aaf9c64ad920,},Annotations:map[string]string{io.kubernetes.container.hash: 34178776,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d088b41ebc7b89cfc02aea70859e94e5a45b788a9c73a939733131ae29c4462,PodSandboxId:2f84d6cd36a0e19e1f074479696d192558cade4f5b4267d45bfa78281643ee69,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710765794897477578,Labels:map[string]string{io.kub
ernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9ffc89cd42ea8da4e6070b43e0ace35,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55e393cf77a1b472d984125ae3bd870d3fed9dca4eeefc346bda04ae88654205,PodSandboxId:8231d33571b5e6a87638a5647fcc9e70ced44830421377dda3555afca480b302,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710765791394253878,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.
pod.name: etcd-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e31f7b77f2cd8547e7aa12e86f29a80,},Annotations:map[string]string{io.kubernetes.container.hash: a6edf2fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de552ed42d49524bbca97633e73d6ac4e5301a813a012290635def375a78dcd6,PodSandboxId:8cfa0459c6e2ae66756a8424cb981cdb5680680fc5907eba1b8d83cfdd1a7280,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710765791364211319,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-328
109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90e740be10e7ccb198e1e310b9749e68,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a10929bb9737267586a458e8f8aac60622ae3a299b6b542776e59e2b12e4ffef,PodSandboxId:0d6a5a565490ec7bc679e6f77a039f680f53470f17b0cc60629e1ea627d8141e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710765791361646729,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-
ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f004a20401b95f693a90cc8d0b7e8acc,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e2150d8010e2a1399f1df83c9dba81c77d606e55e0c21b18da231e82e01413a,PodSandboxId:a0c6f1dda955fa31cf1b04ce5ce4401c9c2bfef118b3bbaea519a53ffc2f3257,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710765791299905420,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-328109,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 0919befc6ed870de46dfd820b38f0ac2,},Annotations:map[string]string{io.kubernetes.container.hash: 110d18ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a0128278-94e7-4dad-b74c-52330cad1780 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c5b3318798546       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   10b35c5d18ac5       busybox-5b5d89c9d6-fz4kl
	0b630b0fc05d4       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      5 minutes ago       Running             kube-vip                  1                   2f84d6cd36a0e       kube-vip-ha-328109
	742842736e1b5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   d0fc4bb142f1e       storage-provisioner
	82a8d2ac6a60c       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      6 minutes ago       Running             coredns                   0                   b487ae421169c       coredns-5dd5756b68-p5xgj
	f2c5cd4a72423       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      6 minutes ago       Running             coredns                   0                   16503713d1986       coredns-5dd5756b68-c78nc
	f41509d172d09       docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988    6 minutes ago       Running             kindnet-cni               0                   de3686de3774d       kindnet-vnv5b
	f8d915a384e6a       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      6 minutes ago       Running             kube-proxy                0                   35275a602be1c       kube-proxy-dhz88
	8d088b41ebc7b       ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a     6 minutes ago       Exited              kube-vip                  0                   2f84d6cd36a0e       kube-vip-ha-328109
	55e393cf77a1b       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      6 minutes ago       Running             etcd                      0                   8231d33571b5e       etcd-ha-328109
	de552ed42d495       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      6 minutes ago       Running             kube-scheduler            0                   8cfa0459c6e2a       kube-scheduler-ha-328109
	a10929bb97372       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      6 minutes ago       Running             kube-controller-manager   0                   0d6a5a565490e       kube-controller-manager-ha-328109
	7e2150d8010e2       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      6 minutes ago       Running             kube-apiserver            0                   a0c6f1dda955f       kube-apiserver-ha-328109
	
	
	==> coredns [82a8d2ac6a60c0d04e48a38416de7feb33d590cfcd74d28da2317aa1a5781135] <==
	[INFO] 10.244.0.4:52673 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004169921s
	[INFO] 10.244.0.4:48925 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000253358s
	[INFO] 10.244.0.4:56631 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152161s
	[INFO] 10.244.0.4:45190 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103827s
	[INFO] 10.244.2.2:34185 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105521s
	[INFO] 10.244.2.2:44888 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000730863s
	[INFO] 10.244.1.2:40647 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000166359s
	[INFO] 10.244.1.2:57968 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001882507s
	[INFO] 10.244.1.2:55297 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000096788s
	[INFO] 10.244.1.2:36989 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000088322s
	[INFO] 10.244.1.2:37677 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000205894s
	[INFO] 10.244.1.2:32814 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000074605s
	[INFO] 10.244.1.2:44489 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000102528s
	[INFO] 10.244.0.4:53607 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000206955s
	[INFO] 10.244.2.2:47974 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000313502s
	[INFO] 10.244.1.2:49641 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000193514s
	[INFO] 10.244.1.2:52193 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000126417s
	[INFO] 10.244.1.2:55887 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000104434s
	[INFO] 10.244.0.4:43288 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014747s
	[INFO] 10.244.0.4:57574 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000192178s
	[INFO] 10.244.0.4:58440 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000128408s
	[INFO] 10.244.2.2:50297 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168343s
	[INFO] 10.244.2.2:37188 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000133774s
	[INFO] 10.244.1.2:33883 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000095091s
	[INFO] 10.244.1.2:45785 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000123693s
	
	
	==> coredns [f2c5cd4a724230c91f476a1bb5326701801eff1b70dc4db0510f092d89ea1562] <==
	[INFO] 10.244.2.2:51093 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001672821s
	[INFO] 10.244.1.2:49953 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000114538s
	[INFO] 10.244.0.4:45239 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117638s
	[INFO] 10.244.0.4:54630 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003946825s
	[INFO] 10.244.0.4:37807 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000185941s
	[INFO] 10.244.0.4:54881 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000227886s
	[INFO] 10.244.2.2:43048 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000261065s
	[INFO] 10.244.2.2:43023 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001686526s
	[INFO] 10.244.2.2:59097 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000204051s
	[INFO] 10.244.2.2:49621 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000262805s
	[INFO] 10.244.2.2:48119 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001371219s
	[INFO] 10.244.2.2:49912 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000148592s
	[INFO] 10.244.1.2:60652 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.0016374s
	[INFO] 10.244.0.4:55891 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000079534s
	[INFO] 10.244.0.4:53025 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000231262s
	[INFO] 10.244.0.4:39659 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000116818s
	[INFO] 10.244.2.2:48403 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125802s
	[INFO] 10.244.2.2:42106 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092079s
	[INFO] 10.244.2.2:41088 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000204572s
	[INFO] 10.244.1.2:60379 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000108875s
	[INFO] 10.244.0.4:42381 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00008263s
	[INFO] 10.244.2.2:47207 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000237181s
	[INFO] 10.244.2.2:44002 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000102925s
	[INFO] 10.244.1.2:54332 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126486s
	[INFO] 10.244.1.2:38590 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000245357s
	
	
	==> describe nodes <==
	Name:               ha-328109
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-328109
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	                    minikube.k8s.io/name=ha-328109
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T12_43_22_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 12:43:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-328109
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 12:50:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 12:46:56 +0000   Mon, 18 Mar 2024 12:43:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 12:46:56 +0000   Mon, 18 Mar 2024 12:43:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 12:46:56 +0000   Mon, 18 Mar 2024 12:43:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 12:46:56 +0000   Mon, 18 Mar 2024 12:43:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.253
	  Hostname:    ha-328109
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 a8b3a9b95f2141b891e3cee14aaad62e
	  System UUID:                a8b3a9b9-5f21-41b8-91e3-cee14aaad62e
	  Boot ID:                    906b8684-634a-4838-bb8e-d090694f9649
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-fz4kl             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 coredns-5dd5756b68-c78nc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m37s
	  kube-system                 coredns-5dd5756b68-p5xgj             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m37s
	  kube-system                 etcd-ha-328109                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m47s
	  kube-system                 kindnet-vnv5b                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m37s
	  kube-system                 kube-apiserver-ha-328109             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m47s
	  kube-system                 kube-controller-manager-ha-328109    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m47s
	  kube-system                 kube-proxy-dhz88                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 kube-scheduler-ha-328109             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m49s
	  kube-system                 kube-vip-ha-328109                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m47s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m35s                  kube-proxy       
	  Normal  NodeHasSufficientPID     6m58s (x7 over 6m58s)  kubelet          Node ha-328109 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 6m58s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m58s (x8 over 6m58s)  kubelet          Node ha-328109 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m58s (x8 over 6m58s)  kubelet          Node ha-328109 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m47s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m47s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m47s                  kubelet          Node ha-328109 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m47s                  kubelet          Node ha-328109 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m47s                  kubelet          Node ha-328109 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m38s                  node-controller  Node ha-328109 event: Registered Node ha-328109 in Controller
	  Normal  NodeReady                6m31s                  kubelet          Node ha-328109 status is now: NodeReady
	  Normal  RegisteredNode           5m16s                  node-controller  Node ha-328109 event: Registered Node ha-328109 in Controller
	  Normal  RegisteredNode           4m2s                   node-controller  Node ha-328109 event: Registered Node ha-328109 in Controller
	
	
	Name:               ha-328109-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-328109-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	                    minikube.k8s.io/name=ha-328109
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T12_44_39_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 12:44:28 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-328109-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 12:47:43 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 18 Mar 2024 12:47:01 +0000   Mon, 18 Mar 2024 12:48:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 18 Mar 2024 12:47:01 +0000   Mon, 18 Mar 2024 12:48:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 18 Mar 2024 12:47:01 +0000   Mon, 18 Mar 2024 12:48:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 18 Mar 2024 12:47:01 +0000   Mon, 18 Mar 2024 12:48:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.246
	  Hostname:    ha-328109-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 148457ca2d4c4c78bdc5b74dba85e93e
	  System UUID:                148457ca-2d4c-4c78-bdc5-b74dba85e93e
	  Boot ID:                    8d0cadc9-1888-4de4-9f61-a20e3052d92f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-sx4mf                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 etcd-ha-328109-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m29s
	  kube-system                 kindnet-lc74t                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m40s
	  kube-system                 kube-apiserver-ha-328109-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m35s
	  kube-system                 kube-controller-manager-ha-328109-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 kube-proxy-7zgrx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 kube-scheduler-ha-328109-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m23s
	  kube-system                 kube-vip-ha-328109-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        5m27s  kube-proxy       
	  Normal  RegisteredNode  5m16s  node-controller  Node ha-328109-m02 event: Registered Node ha-328109-m02 in Controller
	  Normal  RegisteredNode  4m2s   node-controller  Node ha-328109-m02 event: Registered Node ha-328109-m02 in Controller
	  Normal  NodeNotReady    103s   node-controller  Node ha-328109-m02 status is now: NodeNotReady
	
	
	Name:               ha-328109-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-328109-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	                    minikube.k8s.io/name=ha-328109
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T12_45_51_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 12:45:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-328109-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 12:50:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 12:46:49 +0000   Mon, 18 Mar 2024 12:45:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 12:46:49 +0000   Mon, 18 Mar 2024 12:45:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 12:46:49 +0000   Mon, 18 Mar 2024 12:45:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 12:46:49 +0000   Mon, 18 Mar 2024 12:46:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.241
	  Hostname:    ha-328109-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 4fab87c426444aa8b3b6e0542502fa6e
	  System UUID:                4fab87c4-2644-4aa8-b3b6-e0542502fa6e
	  Boot ID:                    b800ccda-6ae3-43fc-9ff4-4f258fdf7181
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-gv6tf                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 etcd-ha-328109-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m19s
	  kube-system                 kindnet-t2pkv                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m20s
	  kube-system                 kube-apiserver-ha-328109-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-ha-328109-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-proxy-zn8dk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-scheduler-ha-328109-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-vip-ha-328109-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        4m3s   kube-proxy       
	  Normal  RegisteredNode  4m18s  node-controller  Node ha-328109-m03 event: Registered Node ha-328109-m03 in Controller
	  Normal  RegisteredNode  4m16s  node-controller  Node ha-328109-m03 event: Registered Node ha-328109-m03 in Controller
	  Normal  RegisteredNode  4m2s   node-controller  Node ha-328109-m03 event: Registered Node ha-328109-m03 in Controller
	
	
	Name:               ha-328109-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-328109-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	                    minikube.k8s.io/name=ha-328109
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T12_47_16_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 12:47:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-328109-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 12:49:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 12:47:46 +0000   Mon, 18 Mar 2024 12:47:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 12:47:46 +0000   Mon, 18 Mar 2024 12:47:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 12:47:46 +0000   Mon, 18 Mar 2024 12:47:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 12:47:46 +0000   Mon, 18 Mar 2024 12:47:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.48
	  Hostname:    ha-328109-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 ac08f798ce4148b48f36040f95b7eaf9
	  System UUID:                ac08f798-ce41-48b4-8f36-040f95b7eaf9
	  Boot ID:                    3d12ce0a-9b18-44af-8f5b-5098664adc80
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-ggcw6       100m (5%)    100m (5%)   50Mi (2%)        50Mi (2%)      2m52s
	  kube-system                 kube-proxy-4fxbn    0 (0%)       0 (0%)      0 (0%)           0 (0%)         2m53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m48s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m53s (x5 over 2m54s)  kubelet          Node ha-328109-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m53s (x5 over 2m54s)  kubelet          Node ha-328109-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m53s (x5 over 2m54s)  kubelet          Node ha-328109-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m52s                  node-controller  Node ha-328109-m04 event: Registered Node ha-328109-m04 in Controller
	  Normal  RegisteredNode           2m51s                  node-controller  Node ha-328109-m04 event: Registered Node ha-328109-m04 in Controller
	  Normal  RegisteredNode           2m48s                  node-controller  Node ha-328109-m04 event: Registered Node ha-328109-m04 in Controller
	  Normal  NodeReady                2m43s                  kubelet          Node ha-328109-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Mar18 12:42] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052749] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044694] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.610763] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.534660] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +4.648943] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.065651] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.058901] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058875] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.159253] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.141446] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.251865] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[Mar18 12:43] systemd-fstab-generator[768]: Ignoring "noauto" option for root device
	[  +0.059542] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.985090] systemd-fstab-generator[943]: Ignoring "noauto" option for root device
	[  +1.363754] kauditd_printk_skb: 57 callbacks suppressed
	[  +7.738793] kauditd_printk_skb: 40 callbacks suppressed
	[  +1.856189] systemd-fstab-generator[1368]: Ignoring "noauto" option for root device
	[ +11.678244] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.089183] kauditd_printk_skb: 37 callbacks suppressed
	[Mar18 12:44] kauditd_printk_skb: 27 callbacks suppressed
	
	
	==> etcd [55e393cf77a1b472d984125ae3bd870d3fed9dca4eeefc346bda04ae88654205] <==
	{"level":"warn","ts":"2024-03-18T12:50:08.26683Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3773e8bb706c8f02","from":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:50:08.271862Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3773e8bb706c8f02","from":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:50:08.28601Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3773e8bb706c8f02","from":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:50:08.296452Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3773e8bb706c8f02","from":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:50:08.306142Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3773e8bb706c8f02","from":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:50:08.310948Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3773e8bb706c8f02","from":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:50:08.315851Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3773e8bb706c8f02","from":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:50:08.325742Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3773e8bb706c8f02","from":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:50:08.34392Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3773e8bb706c8f02","from":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:50:08.34523Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3773e8bb706c8f02","from":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:50:08.386396Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3773e8bb706c8f02","from":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:50:08.397008Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3773e8bb706c8f02","from":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:50:08.401631Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3773e8bb706c8f02","from":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:50:08.409373Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3773e8bb706c8f02","from":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:50:08.41228Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3773e8bb706c8f02","from":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:50:08.422347Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3773e8bb706c8f02","from":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:50:08.442372Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3773e8bb706c8f02","from":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:50:08.448167Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3773e8bb706c8f02","from":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:50:08.467016Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3773e8bb706c8f02","from":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:50:08.471247Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3773e8bb706c8f02","from":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:50:08.475309Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3773e8bb706c8f02","from":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:50:08.481886Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3773e8bb706c8f02","from":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:50:08.488985Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3773e8bb706c8f02","from":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:50:08.495845Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3773e8bb706c8f02","from":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:50:08.544999Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3773e8bb706c8f02","from":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 12:50:08 up 7 min,  0 users,  load average: 0.06, 0.21, 0.15
	Linux ha-328109 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [f41509d172d09cc4eecee3746dfb8ee1d320fc1c3797ddb1d709f61a48d8c377] <==
	I0318 12:49:38.557349       1 main.go:250] Node ha-328109-m04 has CIDR [10.244.3.0/24] 
	I0318 12:49:48.567525       1 main.go:223] Handling node with IPs: map[192.168.39.253:{}]
	I0318 12:49:48.567569       1 main.go:227] handling current node
	I0318 12:49:48.567579       1 main.go:223] Handling node with IPs: map[192.168.39.246:{}]
	I0318 12:49:48.567586       1 main.go:250] Node ha-328109-m02 has CIDR [10.244.1.0/24] 
	I0318 12:49:48.567718       1 main.go:223] Handling node with IPs: map[192.168.39.241:{}]
	I0318 12:49:48.567752       1 main.go:250] Node ha-328109-m03 has CIDR [10.244.2.0/24] 
	I0318 12:49:48.567807       1 main.go:223] Handling node with IPs: map[192.168.39.48:{}]
	I0318 12:49:48.567839       1 main.go:250] Node ha-328109-m04 has CIDR [10.244.3.0/24] 
	I0318 12:49:58.579321       1 main.go:223] Handling node with IPs: map[192.168.39.253:{}]
	I0318 12:49:58.579383       1 main.go:227] handling current node
	I0318 12:49:58.579398       1 main.go:223] Handling node with IPs: map[192.168.39.246:{}]
	I0318 12:49:58.579406       1 main.go:250] Node ha-328109-m02 has CIDR [10.244.1.0/24] 
	I0318 12:49:58.579573       1 main.go:223] Handling node with IPs: map[192.168.39.241:{}]
	I0318 12:49:58.579616       1 main.go:250] Node ha-328109-m03 has CIDR [10.244.2.0/24] 
	I0318 12:49:58.579697       1 main.go:223] Handling node with IPs: map[192.168.39.48:{}]
	I0318 12:49:58.579713       1 main.go:250] Node ha-328109-m04 has CIDR [10.244.3.0/24] 
	I0318 12:50:08.597496       1 main.go:223] Handling node with IPs: map[192.168.39.253:{}]
	I0318 12:50:08.597738       1 main.go:227] handling current node
	I0318 12:50:08.597769       1 main.go:223] Handling node with IPs: map[192.168.39.246:{}]
	I0318 12:50:08.597886       1 main.go:250] Node ha-328109-m02 has CIDR [10.244.1.0/24] 
	I0318 12:50:08.598504       1 main.go:223] Handling node with IPs: map[192.168.39.241:{}]
	I0318 12:50:08.598697       1 main.go:250] Node ha-328109-m03 has CIDR [10.244.2.0/24] 
	I0318 12:50:08.598967       1 main.go:223] Handling node with IPs: map[192.168.39.48:{}]
	I0318 12:50:08.599188       1 main.go:250] Node ha-328109-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [7e2150d8010e2a1399f1df83c9dba81c77d606e55e0c21b18da231e82e01413a] <==
	I0318 12:44:38.132630       1 trace.go:236] Trace[1342520798]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:01966c69-0057-4e3f-82ee-de024a8d9bba,client:192.168.39.254,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-328109,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:PUT (18-Mar-2024 12:44:32.520) (total time: 5612ms):
	Trace[1342520798]: ["GuaranteedUpdate etcd3" audit-id:01966c69-0057-4e3f-82ee-de024a8d9bba,key:/leases/kube-node-lease/ha-328109,type:*coordination.Lease,resource:leases.coordination.k8s.io 5611ms (12:44:32.520)
	Trace[1342520798]:  ---"Txn call completed" 5611ms (12:44:38.132)]
	Trace[1342520798]: [5.612031088s] [5.612031088s] END
	I0318 12:44:38.132721       1 trace.go:236] Trace[624459832]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:b5c533b5-cc1b-497e-b2ba-30e994190195,client:192.168.39.254,protocol:HTTP/2.0,resource:events,scope:resource,url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (18-Mar-2024 12:44:34.414) (total time: 3717ms):
	Trace[624459832]: ["Create etcd3" audit-id:b5c533b5-cc1b-497e-b2ba-30e994190195,key:/events/kube-system/kube-apiserver-ha-328109.17bddc7fae054a4d,type:*core.Event,resource:events 3717ms (12:44:34.415)
	Trace[624459832]:  ---"Txn call succeeded" 3717ms (12:44:38.132)]
	Trace[624459832]: [3.717787534s] [3.717787534s] END
	I0318 12:44:38.132842       1 trace.go:236] Trace[962055501]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:a1297aa5-524a-4a72-85b9-9417e2477763,client:192.168.39.246,protocol:HTTP/2.0,resource:events,scope:resource,url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (18-Mar-2024 12:44:36.190) (total time: 1942ms):
	Trace[962055501]: ["Create etcd3" audit-id:a1297aa5-524a-4a72-85b9-9417e2477763,key:/events/kube-system/etcd-ha-328109-m02.17bddc7e7d5ff8bf,type:*core.Event,resource:events 1942ms (12:44:36.190)
	Trace[962055501]:  ---"Txn call succeeded" 1942ms (12:44:38.132)]
	Trace[962055501]: [1.942762555s] [1.942762555s] END
	I0318 12:44:38.135895       1 trace.go:236] Trace[1717943839]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:5543fdf2-ef41-4b60-8ac5-d880490b9c10,client:192.168.39.246,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (18-Mar-2024 12:44:33.356) (total time: 4779ms):
	Trace[1717943839]: ["Create etcd3" audit-id:5543fdf2-ef41-4b60-8ac5-d880490b9c10,key:/pods/kube-system/kube-apiserver-ha-328109-m02,type:*core.Pod,resource:pods 4778ms (12:44:33.357)
	Trace[1717943839]:  ---"Txn call succeeded" 4773ms (12:44:38.130)]
	Trace[1717943839]: [4.779493649s] [4.779493649s] END
	I0318 12:44:38.136224       1 trace.go:236] Trace[1951338847]: "List" accept:application/json, */*,audit-id:4c149973-5734-4872-8981-83a6b3baae31,client:192.168.39.253,protocol:HTTP/2.0,resource:nodes,scope:cluster,url:/api/v1/nodes,user-agent:kindnetd/v0.0.0 (linux/amd64) kubernetes/$Format,verb:LIST (18-Mar-2024 12:44:37.330) (total time: 805ms):
	Trace[1951338847]: ["List(recursive=true) etcd3" audit-id:4c149973-5734-4872-8981-83a6b3baae31,key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: 805ms (12:44:37.330)]
	Trace[1951338847]: [805.489973ms] [805.489973ms] END
	I0318 12:44:38.220782       1 trace.go:236] Trace[396697757]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:dbbe0d76-3db9-4e4d-9bf4-58fc51de9768,client:192.168.39.246,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (18-Mar-2024 12:44:37.116) (total time: 1103ms):
	Trace[396697757]: [1.103969292s] [1.103969292s] END
	I0318 12:44:38.229736       1 trace.go:236] Trace[1623968541]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.39.253,type:*v1.Endpoints,resource:apiServerIPInfo (18-Mar-2024 12:44:36.707) (total time: 1522ms):
	Trace[1623968541]: ---"initial value restored" 1424ms (12:44:38.131)
	Trace[1623968541]: ---"Transaction prepared" 58ms (12:44:38.190)
	Trace[1623968541]: [1.522252407s] [1.522252407s] END
	
	
	==> kube-controller-manager [a10929bb9737267586a458e8f8aac60622ae3a299b6b542776e59e2b12e4ffef] <==
	E0318 12:47:14.299000       1 certificate_controller.go:146] Sync csr-lxffk failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-lxffk": the object has been modified; please apply your changes to the latest version and try again
	I0318 12:47:15.811781       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-328109-m04\" does not exist"
	I0318 12:47:15.897387       1 range_allocator.go:380] "Set node PodCIDR" node="ha-328109-m04" podCIDRs=["10.244.3.0/24"]
	I0318 12:47:15.979277       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-ssq4l"
	I0318 12:47:15.979578       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-czqfw"
	I0318 12:47:16.153839       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-ssq4l"
	I0318 12:47:16.181270       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-czqfw"
	I0318 12:47:16.725872       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-pgzmj"
	I0318 12:47:16.828489       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-x4dsb"
	I0318 12:47:16.880172       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-pgzmj"
	I0318 12:47:20.660548       1 event.go:307] "Event occurred" object="ha-328109-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-328109-m04 event: Registered Node ha-328109-m04 in Controller"
	I0318 12:47:20.674516       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-328109-m04"
	I0318 12:47:25.743602       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-328109-m04"
	I0318 12:48:25.702748       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-328109-m04"
	I0318 12:48:25.705744       1 event.go:307] "Event occurred" object="ha-328109-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node ha-328109-m02 status is now: NodeNotReady"
	I0318 12:48:25.738448       1 event.go:307] "Event occurred" object="kube-system/kindnet-lc74t" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:48:25.769028       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-7zgrx" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:48:25.783574       1 event.go:307] "Event occurred" object="kube-system/kube-vip-ha-328109-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:48:25.815775       1 event.go:307] "Event occurred" object="kube-system/kube-apiserver-ha-328109-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:48:25.830749       1 event.go:307] "Event occurred" object="kube-system/kube-controller-manager-ha-328109-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:48:25.846414       1 event.go:307] "Event occurred" object="kube-system/etcd-ha-328109-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:48:25.863321       1 event.go:307] "Event occurred" object="kube-system/kube-scheduler-ha-328109-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:48:25.877211       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-sx4mf" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:48:25.894485       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="18.539547ms"
	I0318 12:48:25.895906       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="125.045µs"
	
	
	==> kube-proxy [f8d915a384e6a3c259a15968303b0ddc686a9ced49722152813fc101b3c78cc6] <==
	I0318 12:43:33.033214       1 server_others.go:69] "Using iptables proxy"
	I0318 12:43:33.058459       1 node.go:141] Successfully retrieved node IP: 192.168.39.253
	I0318 12:43:33.105382       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 12:43:33.105433       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 12:43:33.107947       1 server_others.go:152] "Using iptables Proxier"
	I0318 12:43:33.108833       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 12:43:33.109182       1 server.go:846] "Version info" version="v1.28.4"
	I0318 12:43:33.109219       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:43:33.110865       1 config.go:188] "Starting service config controller"
	I0318 12:43:33.111906       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 12:43:33.112178       1 config.go:97] "Starting endpoint slice config controller"
	I0318 12:43:33.112212       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 12:43:33.114910       1 config.go:315] "Starting node config controller"
	I0318 12:43:33.114956       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 12:43:33.212384       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 12:43:33.212446       1 shared_informer.go:318] Caches are synced for service config
	I0318 12:43:33.215649       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [de552ed42d49524bbca97633e73d6ac4e5301a813a012290635def375a78dcd6] <==
	I0318 12:47:16.005054       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-ssq4l" node="ha-328109-m04"
	E0318 12:47:16.015320       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-czqfw\": pod kindnet-czqfw is already assigned to node \"ha-328109-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-czqfw" node="ha-328109-m04"
	E0318 12:47:16.015461       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 5410e336-df2d-47f5-bed4-f8c92278a1a6(kube-system/kindnet-czqfw) wasn't assumed so cannot be forgotten"
	E0318 12:47:16.015499       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-czqfw\": pod kindnet-czqfw is already assigned to node \"ha-328109-m04\"" pod="kube-system/kindnet-czqfw"
	I0318 12:47:16.015529       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-czqfw" node="ha-328109-m04"
	E0318 12:47:16.034281       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-4fxbn\": pod kube-proxy-4fxbn is already assigned to node \"ha-328109-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-4fxbn" node="ha-328109-m04"
	E0318 12:47:16.034546       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod d1f2a6c1-8e3c-45ad-8839-d641a80a4d03(kube-system/kube-proxy-4fxbn) wasn't assumed so cannot be forgotten"
	E0318 12:47:16.034695       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-4fxbn\": pod kube-proxy-4fxbn is already assigned to node \"ha-328109-m04\"" pod="kube-system/kube-proxy-4fxbn"
	I0318 12:47:16.034757       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-4fxbn" node="ha-328109-m04"
	E0318 12:47:16.035526       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-m2qh7\": pod kindnet-m2qh7 is already assigned to node \"ha-328109-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-m2qh7" node="ha-328109-m04"
	E0318 12:47:16.035608       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 3abff82c-01b1-4ef2-b5e4-ef9ea8642d5b(kube-system/kindnet-m2qh7) wasn't assumed so cannot be forgotten"
	E0318 12:47:16.035636       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-m2qh7\": pod kindnet-m2qh7 is already assigned to node \"ha-328109-m04\"" pod="kube-system/kindnet-m2qh7"
	I0318 12:47:16.035687       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-m2qh7" node="ha-328109-m04"
	E0318 12:47:16.798722       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-x4dsb\": pod kindnet-x4dsb is being deleted, cannot be assigned to a host" plugin="DefaultBinder" pod="kube-system/kindnet-x4dsb" node="ha-328109-m04"
	E0318 12:47:16.798797       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 8988ac73-5185-4ef3-a282-6982e6f09c9d(kube-system/kindnet-x4dsb) wasn't assumed so cannot be forgotten"
	E0318 12:47:16.798824       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-x4dsb\": pod kindnet-x4dsb is being deleted, cannot be assigned to a host" pod="kube-system/kindnet-x4dsb"
	I0318 12:47:16.798841       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-x4dsb" node="ha-328109-m04"
	E0318 12:47:16.799239       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-pgzmj\": pod kindnet-pgzmj is already assigned to node \"ha-328109-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-pgzmj" node="ha-328109-m04"
	E0318 12:47:16.799295       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod ec2c8ced-e089-4868-920f-c77eaa97ccca(kube-system/kindnet-pgzmj) wasn't assumed so cannot be forgotten"
	E0318 12:47:16.799313       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-pgzmj\": pod kindnet-pgzmj is already assigned to node \"ha-328109-m04\"" pod="kube-system/kindnet-pgzmj"
	I0318 12:47:16.799327       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-pgzmj" node="ha-328109-m04"
	E0318 12:47:16.800987       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-ggcw6\": pod kindnet-ggcw6 is already assigned to node \"ha-328109-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-ggcw6" node="ha-328109-m04"
	E0318 12:47:16.805150       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod da0dab40-34a4-4213-9224-b1bef5273e51(kube-system/kindnet-ggcw6) wasn't assumed so cannot be forgotten"
	E0318 12:47:16.805941       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-ggcw6\": pod kindnet-ggcw6 is already assigned to node \"ha-328109-m04\"" pod="kube-system/kindnet-ggcw6"
	I0318 12:47:16.806190       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-ggcw6" node="ha-328109-m04"
	
	
	==> kubelet <==
	Mar 18 12:45:21 ha-328109 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 12:45:21 ha-328109 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 12:45:21 ha-328109 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 12:46:21 ha-328109 kubelet[1375]: E0318 12:46:21.242351    1375 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 12:46:21 ha-328109 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 12:46:21 ha-328109 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 12:46:21 ha-328109 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 12:46:21 ha-328109 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 12:46:35 ha-328109 kubelet[1375]: I0318 12:46:35.061808    1375 topology_manager.go:215] "Topology Admit Handler" podUID="5a0215bb-df62-44b9-9d60-d45778880b8b" podNamespace="default" podName="busybox-5b5d89c9d6-fz4kl"
	Mar 18 12:46:35 ha-328109 kubelet[1375]: I0318 12:46:35.224542    1375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4twgx\" (UniqueName: \"kubernetes.io/projected/5a0215bb-df62-44b9-9d60-d45778880b8b-kube-api-access-4twgx\") pod \"busybox-5b5d89c9d6-fz4kl\" (UID: \"5a0215bb-df62-44b9-9d60-d45778880b8b\") " pod="default/busybox-5b5d89c9d6-fz4kl"
	Mar 18 12:47:21 ha-328109 kubelet[1375]: E0318 12:47:21.240726    1375 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 12:47:21 ha-328109 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 12:47:21 ha-328109 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 12:47:21 ha-328109 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 12:47:21 ha-328109 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 12:48:21 ha-328109 kubelet[1375]: E0318 12:48:21.240761    1375 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 12:48:21 ha-328109 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 12:48:21 ha-328109 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 12:48:21 ha-328109 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 12:48:21 ha-328109 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 12:49:21 ha-328109 kubelet[1375]: E0318 12:49:21.241837    1375 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 12:49:21 ha-328109 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 12:49:21 ha-328109 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 12:49:21 ha-328109 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 12:49:21 ha-328109 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-328109 -n ha-328109
helpers_test.go:261: (dbg) Run:  kubectl --context ha-328109 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (142.19s)
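The repeated kubelet "Could not set up iptables canary" entries in the post-mortem log above come from ip6tables failing to find a nat table in the guest kernel ("Table does not exist (do you need to insmod?)"). A minimal manual re-check from the affected node, ha-328109, would look like the sketch below; whether the ip6table_nat module can actually be loaded depends on the kernel shipped in the minikube guest image, so this is an illustration rather than a fix verified in this run.

	# try to load the IPv6 NAT table module, then list the table the canary wants to touch
	out/minikube-linux-amd64 -p ha-328109 ssh "sudo modprobe ip6table_nat; sudo ip6tables -t nat -L -n"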

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (53.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-328109 status -v=7 --alsologtostderr: exit status 3 (3.194731751s)

                                                
                                                
-- stdout --
	ha-328109
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-328109-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-328109-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-328109-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 12:50:13.212300 1130125 out.go:291] Setting OutFile to fd 1 ...
	I0318 12:50:13.212841 1130125 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:50:13.212903 1130125 out.go:304] Setting ErrFile to fd 2...
	I0318 12:50:13.212920 1130125 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:50:13.213396 1130125 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
	I0318 12:50:13.213973 1130125 out.go:298] Setting JSON to false
	I0318 12:50:13.214025 1130125 mustload.go:65] Loading cluster: ha-328109
	I0318 12:50:13.214123 1130125 notify.go:220] Checking for updates...
	I0318 12:50:13.214520 1130125 config.go:182] Loaded profile config "ha-328109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:50:13.214537 1130125 status.go:255] checking status of ha-328109 ...
	I0318 12:50:13.214918 1130125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:13.214981 1130125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:13.229892 1130125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36919
	I0318 12:50:13.230373 1130125 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:13.230958 1130125 main.go:141] libmachine: Using API Version  1
	I0318 12:50:13.230980 1130125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:13.231425 1130125 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:13.231640 1130125 main.go:141] libmachine: (ha-328109) Calling .GetState
	I0318 12:50:13.233284 1130125 status.go:330] ha-328109 host status = "Running" (err=<nil>)
	I0318 12:50:13.233305 1130125 host.go:66] Checking if "ha-328109" exists ...
	I0318 12:50:13.233667 1130125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:13.233727 1130125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:13.250667 1130125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45535
	I0318 12:50:13.251199 1130125 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:13.251751 1130125 main.go:141] libmachine: Using API Version  1
	I0318 12:50:13.251773 1130125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:13.252116 1130125 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:13.252307 1130125 main.go:141] libmachine: (ha-328109) Calling .GetIP
	I0318 12:50:13.255020 1130125 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:50:13.255400 1130125 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:50:13.255424 1130125 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:50:13.255591 1130125 host.go:66] Checking if "ha-328109" exists ...
	I0318 12:50:13.255870 1130125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:13.255903 1130125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:13.271135 1130125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33787
	I0318 12:50:13.271540 1130125 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:13.272012 1130125 main.go:141] libmachine: Using API Version  1
	I0318 12:50:13.272033 1130125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:13.272365 1130125 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:13.272588 1130125 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:50:13.272812 1130125 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 12:50:13.272840 1130125 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:50:13.275589 1130125 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:50:13.276031 1130125 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:50:13.276066 1130125 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:50:13.276174 1130125 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:50:13.276366 1130125 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:50:13.276530 1130125 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:50:13.276695 1130125 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109/id_rsa Username:docker}
	I0318 12:50:13.361688 1130125 ssh_runner.go:195] Run: systemctl --version
	I0318 12:50:13.369927 1130125 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 12:50:13.390792 1130125 kubeconfig.go:125] found "ha-328109" server: "https://192.168.39.254:8443"
	I0318 12:50:13.390825 1130125 api_server.go:166] Checking apiserver status ...
	I0318 12:50:13.390870 1130125 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 12:50:13.410002 1130125 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1160/cgroup
	W0318 12:50:13.428793 1130125 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1160/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 12:50:13.428874 1130125 ssh_runner.go:195] Run: ls
	I0318 12:50:13.433899 1130125 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 12:50:13.438600 1130125 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 12:50:13.438623 1130125 status.go:422] ha-328109 apiserver status = Running (err=<nil>)
	I0318 12:50:13.438634 1130125 status.go:257] ha-328109 status: &{Name:ha-328109 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 12:50:13.438653 1130125 status.go:255] checking status of ha-328109-m02 ...
	I0318 12:50:13.438940 1130125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:13.438974 1130125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:13.455267 1130125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36825
	I0318 12:50:13.455867 1130125 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:13.456410 1130125 main.go:141] libmachine: Using API Version  1
	I0318 12:50:13.456435 1130125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:13.456774 1130125 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:13.456981 1130125 main.go:141] libmachine: (ha-328109-m02) Calling .GetState
	I0318 12:50:13.458718 1130125 status.go:330] ha-328109-m02 host status = "Running" (err=<nil>)
	I0318 12:50:13.458740 1130125 host.go:66] Checking if "ha-328109-m02" exists ...
	I0318 12:50:13.459029 1130125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:13.459064 1130125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:13.474891 1130125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39995
	I0318 12:50:13.475395 1130125 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:13.475854 1130125 main.go:141] libmachine: Using API Version  1
	I0318 12:50:13.475887 1130125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:13.476207 1130125 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:13.476406 1130125 main.go:141] libmachine: (ha-328109-m02) Calling .GetIP
	I0318 12:50:13.478944 1130125 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:50:13.479306 1130125 main.go:141] libmachine: (ha-328109-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:b0:42", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:43:48 +0000 UTC Type:0 Mac:52:54:00:8c:b0:42 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-328109-m02 Clientid:01:52:54:00:8c:b0:42}
	I0318 12:50:13.479337 1130125 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:50:13.479515 1130125 host.go:66] Checking if "ha-328109-m02" exists ...
	I0318 12:50:13.479825 1130125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:13.479867 1130125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:13.494774 1130125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33805
	I0318 12:50:13.495239 1130125 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:13.495741 1130125 main.go:141] libmachine: Using API Version  1
	I0318 12:50:13.495767 1130125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:13.496129 1130125 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:13.496347 1130125 main.go:141] libmachine: (ha-328109-m02) Calling .DriverName
	I0318 12:50:13.496549 1130125 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 12:50:13.496575 1130125 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHHostname
	I0318 12:50:13.499320 1130125 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:50:13.499808 1130125 main.go:141] libmachine: (ha-328109-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:b0:42", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:43:48 +0000 UTC Type:0 Mac:52:54:00:8c:b0:42 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-328109-m02 Clientid:01:52:54:00:8c:b0:42}
	I0318 12:50:13.499854 1130125 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:50:13.499960 1130125 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHPort
	I0318 12:50:13.500147 1130125 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHKeyPath
	I0318 12:50:13.500289 1130125 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHUsername
	I0318 12:50:13.500470 1130125 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m02/id_rsa Username:docker}
	W0318 12:50:15.984717 1130125 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.246:22: connect: no route to host
	W0318 12:50:15.984825 1130125 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.246:22: connect: no route to host
	E0318 12:50:15.984849 1130125 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.246:22: connect: no route to host
	I0318 12:50:15.984863 1130125 status.go:257] ha-328109-m02 status: &{Name:ha-328109-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0318 12:50:15.984891 1130125 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.246:22: connect: no route to host
	I0318 12:50:15.984929 1130125 status.go:255] checking status of ha-328109-m03 ...
	I0318 12:50:15.985385 1130125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:15.985441 1130125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:16.000772 1130125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40663
	I0318 12:50:16.001256 1130125 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:16.001845 1130125 main.go:141] libmachine: Using API Version  1
	I0318 12:50:16.001870 1130125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:16.002211 1130125 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:16.002427 1130125 main.go:141] libmachine: (ha-328109-m03) Calling .GetState
	I0318 12:50:16.003944 1130125 status.go:330] ha-328109-m03 host status = "Running" (err=<nil>)
	I0318 12:50:16.003960 1130125 host.go:66] Checking if "ha-328109-m03" exists ...
	I0318 12:50:16.004312 1130125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:16.004393 1130125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:16.019527 1130125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33061
	I0318 12:50:16.019954 1130125 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:16.020474 1130125 main.go:141] libmachine: Using API Version  1
	I0318 12:50:16.020494 1130125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:16.020809 1130125 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:16.020981 1130125 main.go:141] libmachine: (ha-328109-m03) Calling .GetIP
	I0318 12:50:16.023701 1130125 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:50:16.024116 1130125 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-328109-m03 Clientid:01:52:54:00:13:6e:ac}
	I0318 12:50:16.024139 1130125 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:50:16.024313 1130125 host.go:66] Checking if "ha-328109-m03" exists ...
	I0318 12:50:16.024633 1130125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:16.024677 1130125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:16.041054 1130125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41611
	I0318 12:50:16.041531 1130125 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:16.042065 1130125 main.go:141] libmachine: Using API Version  1
	I0318 12:50:16.042091 1130125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:16.042450 1130125 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:16.042663 1130125 main.go:141] libmachine: (ha-328109-m03) Calling .DriverName
	I0318 12:50:16.042890 1130125 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 12:50:16.042918 1130125 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHHostname
	I0318 12:50:16.045596 1130125 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:50:16.046076 1130125 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-328109-m03 Clientid:01:52:54:00:13:6e:ac}
	I0318 12:50:16.046109 1130125 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:50:16.046315 1130125 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHPort
	I0318 12:50:16.046510 1130125 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHKeyPath
	I0318 12:50:16.046673 1130125 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHUsername
	I0318 12:50:16.046839 1130125 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m03/id_rsa Username:docker}
	I0318 12:50:16.133418 1130125 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 12:50:16.152022 1130125 kubeconfig.go:125] found "ha-328109" server: "https://192.168.39.254:8443"
	I0318 12:50:16.152064 1130125 api_server.go:166] Checking apiserver status ...
	I0318 12:50:16.152117 1130125 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 12:50:16.168114 1130125 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1490/cgroup
	W0318 12:50:16.180075 1130125 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1490/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 12:50:16.180172 1130125 ssh_runner.go:195] Run: ls
	I0318 12:50:16.185110 1130125 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 12:50:16.192039 1130125 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 12:50:16.192061 1130125 status.go:422] ha-328109-m03 apiserver status = Running (err=<nil>)
	I0318 12:50:16.192070 1130125 status.go:257] ha-328109-m03 status: &{Name:ha-328109-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 12:50:16.192085 1130125 status.go:255] checking status of ha-328109-m04 ...
	I0318 12:50:16.192448 1130125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:16.192498 1130125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:16.207783 1130125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46809
	I0318 12:50:16.208230 1130125 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:16.208712 1130125 main.go:141] libmachine: Using API Version  1
	I0318 12:50:16.208735 1130125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:16.209085 1130125 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:16.209275 1130125 main.go:141] libmachine: (ha-328109-m04) Calling .GetState
	I0318 12:50:16.211030 1130125 status.go:330] ha-328109-m04 host status = "Running" (err=<nil>)
	I0318 12:50:16.211049 1130125 host.go:66] Checking if "ha-328109-m04" exists ...
	I0318 12:50:16.211393 1130125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:16.211442 1130125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:16.226306 1130125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43995
	I0318 12:50:16.226750 1130125 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:16.227200 1130125 main.go:141] libmachine: Using API Version  1
	I0318 12:50:16.227232 1130125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:16.227618 1130125 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:16.227771 1130125 main.go:141] libmachine: (ha-328109-m04) Calling .GetIP
	I0318 12:50:16.230772 1130125 main.go:141] libmachine: (ha-328109-m04) DBG | domain ha-328109-m04 has defined MAC address 52:54:00:07:cc:71 in network mk-ha-328109
	I0318 12:50:16.231296 1130125 main.go:141] libmachine: (ha-328109-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cc:71", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:47:00 +0000 UTC Type:0 Mac:52:54:00:07:cc:71 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-328109-m04 Clientid:01:52:54:00:07:cc:71}
	I0318 12:50:16.231340 1130125 main.go:141] libmachine: (ha-328109-m04) DBG | domain ha-328109-m04 has defined IP address 192.168.39.48 and MAC address 52:54:00:07:cc:71 in network mk-ha-328109
	I0318 12:50:16.231489 1130125 host.go:66] Checking if "ha-328109-m04" exists ...
	I0318 12:50:16.231767 1130125 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:16.231806 1130125 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:16.246619 1130125 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43483
	I0318 12:50:16.247059 1130125 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:16.247612 1130125 main.go:141] libmachine: Using API Version  1
	I0318 12:50:16.247641 1130125 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:16.247962 1130125 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:16.248156 1130125 main.go:141] libmachine: (ha-328109-m04) Calling .DriverName
	I0318 12:50:16.248367 1130125 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 12:50:16.248393 1130125 main.go:141] libmachine: (ha-328109-m04) Calling .GetSSHHostname
	I0318 12:50:16.251265 1130125 main.go:141] libmachine: (ha-328109-m04) DBG | domain ha-328109-m04 has defined MAC address 52:54:00:07:cc:71 in network mk-ha-328109
	I0318 12:50:16.251658 1130125 main.go:141] libmachine: (ha-328109-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cc:71", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:47:00 +0000 UTC Type:0 Mac:52:54:00:07:cc:71 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-328109-m04 Clientid:01:52:54:00:07:cc:71}
	I0318 12:50:16.251685 1130125 main.go:141] libmachine: (ha-328109-m04) DBG | domain ha-328109-m04 has defined IP address 192.168.39.48 and MAC address 52:54:00:07:cc:71 in network mk-ha-328109
	I0318 12:50:16.251834 1130125 main.go:141] libmachine: (ha-328109-m04) Calling .GetSSHPort
	I0318 12:50:16.252029 1130125 main.go:141] libmachine: (ha-328109-m04) Calling .GetSSHKeyPath
	I0318 12:50:16.252198 1130125 main.go:141] libmachine: (ha-328109-m04) Calling .GetSSHUsername
	I0318 12:50:16.252316 1130125 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m04/id_rsa Username:docker}
	I0318 12:50:16.333856 1130125 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 12:50:16.349683 1130125 status.go:257] ha-328109-m04 status: &{Name:ha-328109-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
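The stderr trace above shows the per-node probes behind the status command: an SSH dial to each node (which fails for ha-328109-m02 with "no route to host"), a kubelet check run over that SSH session, and an HTTPS GET against the load-balanced apiserver endpoint at https://192.168.39.254:8443/healthz. A rough manual equivalent of those probes, as a sketch using the addresses reported in this log, is:

	# SSH reachability of the restarted secondary (expected to fail while m02 is unreachable)
	nc -vz -w 3 192.168.39.246 22
	# the kubelet check status.go performs over SSH, shown here without --quiet so the state is printed
	out/minikube-linux-amd64 -p ha-328109 ssh "sudo systemctl is-active kubelet"
	# apiserver health through the HA virtual IP; -k because the cluster CA is not loaded here (expects "ok" on a healthy control plane)
	curl -k https://192.168.39.254:8443/healthz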
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-328109 status -v=7 --alsologtostderr: exit status 3 (5.410378099s)

                                                
                                                
-- stdout --
	ha-328109
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-328109-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-328109-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-328109-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 12:50:17.138635 1130221 out.go:291] Setting OutFile to fd 1 ...
	I0318 12:50:17.138809 1130221 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:50:17.138820 1130221 out.go:304] Setting ErrFile to fd 2...
	I0318 12:50:17.138825 1130221 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:50:17.139021 1130221 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
	I0318 12:50:17.139185 1130221 out.go:298] Setting JSON to false
	I0318 12:50:17.139224 1130221 mustload.go:65] Loading cluster: ha-328109
	I0318 12:50:17.139358 1130221 notify.go:220] Checking for updates...
	I0318 12:50:17.139787 1130221 config.go:182] Loaded profile config "ha-328109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:50:17.139809 1130221 status.go:255] checking status of ha-328109 ...
	I0318 12:50:17.141034 1130221 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:17.141264 1130221 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:17.158548 1130221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42327
	I0318 12:50:17.158969 1130221 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:17.159611 1130221 main.go:141] libmachine: Using API Version  1
	I0318 12:50:17.159634 1130221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:17.160021 1130221 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:17.160273 1130221 main.go:141] libmachine: (ha-328109) Calling .GetState
	I0318 12:50:17.161915 1130221 status.go:330] ha-328109 host status = "Running" (err=<nil>)
	I0318 12:50:17.161934 1130221 host.go:66] Checking if "ha-328109" exists ...
	I0318 12:50:17.162352 1130221 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:17.162406 1130221 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:17.178769 1130221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42421
	I0318 12:50:17.179149 1130221 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:17.179613 1130221 main.go:141] libmachine: Using API Version  1
	I0318 12:50:17.179640 1130221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:17.179987 1130221 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:17.180175 1130221 main.go:141] libmachine: (ha-328109) Calling .GetIP
	I0318 12:50:17.183163 1130221 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:50:17.183638 1130221 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:50:17.183669 1130221 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:50:17.183777 1130221 host.go:66] Checking if "ha-328109" exists ...
	I0318 12:50:17.184046 1130221 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:17.184087 1130221 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:17.199721 1130221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43255
	I0318 12:50:17.200243 1130221 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:17.200808 1130221 main.go:141] libmachine: Using API Version  1
	I0318 12:50:17.200838 1130221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:17.201230 1130221 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:17.201458 1130221 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:50:17.201693 1130221 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 12:50:17.201725 1130221 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:50:17.204668 1130221 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:50:17.205214 1130221 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:50:17.205252 1130221 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:50:17.205330 1130221 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:50:17.205485 1130221 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:50:17.205614 1130221 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:50:17.205761 1130221 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109/id_rsa Username:docker}
	I0318 12:50:17.286020 1130221 ssh_runner.go:195] Run: systemctl --version
	I0318 12:50:17.292888 1130221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 12:50:17.310334 1130221 kubeconfig.go:125] found "ha-328109" server: "https://192.168.39.254:8443"
	I0318 12:50:17.310376 1130221 api_server.go:166] Checking apiserver status ...
	I0318 12:50:17.310417 1130221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 12:50:17.327219 1130221 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1160/cgroup
	W0318 12:50:17.338523 1130221 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1160/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 12:50:17.338580 1130221 ssh_runner.go:195] Run: ls
	I0318 12:50:17.343647 1130221 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 12:50:17.350639 1130221 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 12:50:17.350671 1130221 status.go:422] ha-328109 apiserver status = Running (err=<nil>)
	I0318 12:50:17.350681 1130221 status.go:257] ha-328109 status: &{Name:ha-328109 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 12:50:17.350706 1130221 status.go:255] checking status of ha-328109-m02 ...
	I0318 12:50:17.351016 1130221 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:17.351044 1130221 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:17.366473 1130221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43215
	I0318 12:50:17.366908 1130221 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:17.367473 1130221 main.go:141] libmachine: Using API Version  1
	I0318 12:50:17.367497 1130221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:17.367853 1130221 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:17.368083 1130221 main.go:141] libmachine: (ha-328109-m02) Calling .GetState
	I0318 12:50:17.369592 1130221 status.go:330] ha-328109-m02 host status = "Running" (err=<nil>)
	I0318 12:50:17.369622 1130221 host.go:66] Checking if "ha-328109-m02" exists ...
	I0318 12:50:17.369927 1130221 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:17.369971 1130221 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:17.386028 1130221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42469
	I0318 12:50:17.386477 1130221 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:17.386986 1130221 main.go:141] libmachine: Using API Version  1
	I0318 12:50:17.387008 1130221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:17.387381 1130221 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:17.387585 1130221 main.go:141] libmachine: (ha-328109-m02) Calling .GetIP
	I0318 12:50:17.390519 1130221 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:50:17.391081 1130221 main.go:141] libmachine: (ha-328109-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:b0:42", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:43:48 +0000 UTC Type:0 Mac:52:54:00:8c:b0:42 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-328109-m02 Clientid:01:52:54:00:8c:b0:42}
	I0318 12:50:17.391112 1130221 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:50:17.391202 1130221 host.go:66] Checking if "ha-328109-m02" exists ...
	I0318 12:50:17.391512 1130221 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:17.391536 1130221 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:17.406320 1130221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36101
	I0318 12:50:17.406770 1130221 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:17.407359 1130221 main.go:141] libmachine: Using API Version  1
	I0318 12:50:17.407383 1130221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:17.407854 1130221 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:17.408196 1130221 main.go:141] libmachine: (ha-328109-m02) Calling .DriverName
	I0318 12:50:17.408475 1130221 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 12:50:17.408505 1130221 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHHostname
	I0318 12:50:17.411527 1130221 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:50:17.411971 1130221 main.go:141] libmachine: (ha-328109-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:b0:42", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:43:48 +0000 UTC Type:0 Mac:52:54:00:8c:b0:42 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-328109-m02 Clientid:01:52:54:00:8c:b0:42}
	I0318 12:50:17.411998 1130221 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:50:17.412160 1130221 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHPort
	I0318 12:50:17.412314 1130221 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHKeyPath
	I0318 12:50:17.412477 1130221 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHUsername
	I0318 12:50:17.412604 1130221 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m02/id_rsa Username:docker}
	W0318 12:50:19.052624 1130221 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.246:22: connect: no route to host
	I0318 12:50:19.052689 1130221 retry.go:31] will retry after 137.390706ms: dial tcp 192.168.39.246:22: connect: no route to host
	W0318 12:50:22.124665 1130221 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.246:22: connect: no route to host
	W0318 12:50:22.124798 1130221 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.246:22: connect: no route to host
	E0318 12:50:22.124821 1130221 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.246:22: connect: no route to host
	I0318 12:50:22.124839 1130221 status.go:257] ha-328109-m02 status: &{Name:ha-328109-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0318 12:50:22.124879 1130221 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.246:22: connect: no route to host
	I0318 12:50:22.124891 1130221 status.go:255] checking status of ha-328109-m03 ...
	I0318 12:50:22.125212 1130221 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:22.125271 1130221 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:22.141388 1130221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40475
	I0318 12:50:22.141933 1130221 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:22.142482 1130221 main.go:141] libmachine: Using API Version  1
	I0318 12:50:22.142507 1130221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:22.142878 1130221 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:22.143045 1130221 main.go:141] libmachine: (ha-328109-m03) Calling .GetState
	I0318 12:50:22.144642 1130221 status.go:330] ha-328109-m03 host status = "Running" (err=<nil>)
	I0318 12:50:22.144659 1130221 host.go:66] Checking if "ha-328109-m03" exists ...
	I0318 12:50:22.144973 1130221 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:22.145022 1130221 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:22.159830 1130221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33973
	I0318 12:50:22.160227 1130221 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:22.160700 1130221 main.go:141] libmachine: Using API Version  1
	I0318 12:50:22.160726 1130221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:22.161084 1130221 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:22.161267 1130221 main.go:141] libmachine: (ha-328109-m03) Calling .GetIP
	I0318 12:50:22.163867 1130221 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:50:22.164318 1130221 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-328109-m03 Clientid:01:52:54:00:13:6e:ac}
	I0318 12:50:22.164355 1130221 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:50:22.164520 1130221 host.go:66] Checking if "ha-328109-m03" exists ...
	I0318 12:50:22.164922 1130221 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:22.164968 1130221 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:22.180030 1130221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40457
	I0318 12:50:22.180542 1130221 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:22.181047 1130221 main.go:141] libmachine: Using API Version  1
	I0318 12:50:22.181075 1130221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:22.181436 1130221 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:22.181608 1130221 main.go:141] libmachine: (ha-328109-m03) Calling .DriverName
	I0318 12:50:22.181866 1130221 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 12:50:22.181896 1130221 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHHostname
	I0318 12:50:22.184713 1130221 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:50:22.185177 1130221 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-328109-m03 Clientid:01:52:54:00:13:6e:ac}
	I0318 12:50:22.185215 1130221 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:50:22.185353 1130221 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHPort
	I0318 12:50:22.185533 1130221 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHKeyPath
	I0318 12:50:22.185683 1130221 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHUsername
	I0318 12:50:22.185822 1130221 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m03/id_rsa Username:docker}
	I0318 12:50:22.273951 1130221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 12:50:22.290573 1130221 kubeconfig.go:125] found "ha-328109" server: "https://192.168.39.254:8443"
	I0318 12:50:22.290608 1130221 api_server.go:166] Checking apiserver status ...
	I0318 12:50:22.290651 1130221 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 12:50:22.306781 1130221 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1490/cgroup
	W0318 12:50:22.321334 1130221 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1490/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 12:50:22.321404 1130221 ssh_runner.go:195] Run: ls
	I0318 12:50:22.326717 1130221 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 12:50:22.331668 1130221 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 12:50:22.331691 1130221 status.go:422] ha-328109-m03 apiserver status = Running (err=<nil>)
	I0318 12:50:22.331701 1130221 status.go:257] ha-328109-m03 status: &{Name:ha-328109-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 12:50:22.331716 1130221 status.go:255] checking status of ha-328109-m04 ...
	I0318 12:50:22.332058 1130221 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:22.332086 1130221 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:22.348737 1130221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41787
	I0318 12:50:22.349209 1130221 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:22.349714 1130221 main.go:141] libmachine: Using API Version  1
	I0318 12:50:22.349739 1130221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:22.350089 1130221 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:22.350285 1130221 main.go:141] libmachine: (ha-328109-m04) Calling .GetState
	I0318 12:50:22.351776 1130221 status.go:330] ha-328109-m04 host status = "Running" (err=<nil>)
	I0318 12:50:22.351795 1130221 host.go:66] Checking if "ha-328109-m04" exists ...
	I0318 12:50:22.352192 1130221 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:22.352224 1130221 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:22.367920 1130221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38337
	I0318 12:50:22.368361 1130221 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:22.368811 1130221 main.go:141] libmachine: Using API Version  1
	I0318 12:50:22.368836 1130221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:22.369155 1130221 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:22.369352 1130221 main.go:141] libmachine: (ha-328109-m04) Calling .GetIP
	I0318 12:50:22.371977 1130221 main.go:141] libmachine: (ha-328109-m04) DBG | domain ha-328109-m04 has defined MAC address 52:54:00:07:cc:71 in network mk-ha-328109
	I0318 12:50:22.372420 1130221 main.go:141] libmachine: (ha-328109-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cc:71", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:47:00 +0000 UTC Type:0 Mac:52:54:00:07:cc:71 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-328109-m04 Clientid:01:52:54:00:07:cc:71}
	I0318 12:50:22.372459 1130221 main.go:141] libmachine: (ha-328109-m04) DBG | domain ha-328109-m04 has defined IP address 192.168.39.48 and MAC address 52:54:00:07:cc:71 in network mk-ha-328109
	I0318 12:50:22.372634 1130221 host.go:66] Checking if "ha-328109-m04" exists ...
	I0318 12:50:22.372966 1130221 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:22.373008 1130221 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:22.388153 1130221 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35879
	I0318 12:50:22.388562 1130221 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:22.389015 1130221 main.go:141] libmachine: Using API Version  1
	I0318 12:50:22.389057 1130221 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:22.389370 1130221 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:22.389591 1130221 main.go:141] libmachine: (ha-328109-m04) Calling .DriverName
	I0318 12:50:22.389807 1130221 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 12:50:22.389830 1130221 main.go:141] libmachine: (ha-328109-m04) Calling .GetSSHHostname
	I0318 12:50:22.392290 1130221 main.go:141] libmachine: (ha-328109-m04) DBG | domain ha-328109-m04 has defined MAC address 52:54:00:07:cc:71 in network mk-ha-328109
	I0318 12:50:22.392686 1130221 main.go:141] libmachine: (ha-328109-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cc:71", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:47:00 +0000 UTC Type:0 Mac:52:54:00:07:cc:71 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-328109-m04 Clientid:01:52:54:00:07:cc:71}
	I0318 12:50:22.392734 1130221 main.go:141] libmachine: (ha-328109-m04) DBG | domain ha-328109-m04 has defined IP address 192.168.39.48 and MAC address 52:54:00:07:cc:71 in network mk-ha-328109
	I0318 12:50:22.392847 1130221 main.go:141] libmachine: (ha-328109-m04) Calling .GetSSHPort
	I0318 12:50:22.393046 1130221 main.go:141] libmachine: (ha-328109-m04) Calling .GetSSHKeyPath
	I0318 12:50:22.393205 1130221 main.go:141] libmachine: (ha-328109-m04) Calling .GetSSHUsername
	I0318 12:50:22.393344 1130221 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m04/id_rsa Username:docker}
	I0318 12:50:22.473702 1130221 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 12:50:22.491719 1130221 status.go:257] ha-328109-m04 status: &{Name:ha-328109-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-328109 status -v=7 --alsologtostderr: exit status 3 (4.737282065s)

                                                
                                                
-- stdout --
	ha-328109
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-328109-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-328109-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-328109-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 12:50:24.173175 1130316 out.go:291] Setting OutFile to fd 1 ...
	I0318 12:50:24.173302 1130316 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:50:24.173310 1130316 out.go:304] Setting ErrFile to fd 2...
	I0318 12:50:24.173314 1130316 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:50:24.173488 1130316 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
	I0318 12:50:24.173648 1130316 out.go:298] Setting JSON to false
	I0318 12:50:24.173680 1130316 mustload.go:65] Loading cluster: ha-328109
	I0318 12:50:24.173741 1130316 notify.go:220] Checking for updates...
	I0318 12:50:24.174103 1130316 config.go:182] Loaded profile config "ha-328109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:50:24.174125 1130316 status.go:255] checking status of ha-328109 ...
	I0318 12:50:24.174553 1130316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:24.174616 1130316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:24.192229 1130316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37305
	I0318 12:50:24.192718 1130316 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:24.193319 1130316 main.go:141] libmachine: Using API Version  1
	I0318 12:50:24.193370 1130316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:24.193717 1130316 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:24.193956 1130316 main.go:141] libmachine: (ha-328109) Calling .GetState
	I0318 12:50:24.195684 1130316 status.go:330] ha-328109 host status = "Running" (err=<nil>)
	I0318 12:50:24.195706 1130316 host.go:66] Checking if "ha-328109" exists ...
	I0318 12:50:24.196044 1130316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:24.196083 1130316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:24.211276 1130316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34687
	I0318 12:50:24.211652 1130316 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:24.212118 1130316 main.go:141] libmachine: Using API Version  1
	I0318 12:50:24.212140 1130316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:24.212555 1130316 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:24.212772 1130316 main.go:141] libmachine: (ha-328109) Calling .GetIP
	I0318 12:50:24.215590 1130316 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:50:24.216016 1130316 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:50:24.216056 1130316 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:50:24.216162 1130316 host.go:66] Checking if "ha-328109" exists ...
	I0318 12:50:24.216588 1130316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:24.216635 1130316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:24.231609 1130316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42783
	I0318 12:50:24.231989 1130316 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:24.232464 1130316 main.go:141] libmachine: Using API Version  1
	I0318 12:50:24.232493 1130316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:24.232804 1130316 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:24.232990 1130316 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:50:24.233189 1130316 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 12:50:24.233214 1130316 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:50:24.235769 1130316 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:50:24.236165 1130316 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:50:24.236205 1130316 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:50:24.236372 1130316 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:50:24.236541 1130316 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:50:24.236690 1130316 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:50:24.236824 1130316 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109/id_rsa Username:docker}
	I0318 12:50:24.325627 1130316 ssh_runner.go:195] Run: systemctl --version
	I0318 12:50:24.334164 1130316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 12:50:24.352534 1130316 kubeconfig.go:125] found "ha-328109" server: "https://192.168.39.254:8443"
	I0318 12:50:24.352563 1130316 api_server.go:166] Checking apiserver status ...
	I0318 12:50:24.352606 1130316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 12:50:24.370092 1130316 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1160/cgroup
	W0318 12:50:24.381241 1130316 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1160/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 12:50:24.381301 1130316 ssh_runner.go:195] Run: ls
	I0318 12:50:24.386078 1130316 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 12:50:24.391212 1130316 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 12:50:24.391235 1130316 status.go:422] ha-328109 apiserver status = Running (err=<nil>)
	I0318 12:50:24.391244 1130316 status.go:257] ha-328109 status: &{Name:ha-328109 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 12:50:24.391268 1130316 status.go:255] checking status of ha-328109-m02 ...
	I0318 12:50:24.391629 1130316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:24.391678 1130316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:24.406992 1130316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46425
	I0318 12:50:24.407477 1130316 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:24.407949 1130316 main.go:141] libmachine: Using API Version  1
	I0318 12:50:24.407974 1130316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:24.408301 1130316 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:24.408603 1130316 main.go:141] libmachine: (ha-328109-m02) Calling .GetState
	I0318 12:50:24.411633 1130316 status.go:330] ha-328109-m02 host status = "Running" (err=<nil>)
	I0318 12:50:24.411650 1130316 host.go:66] Checking if "ha-328109-m02" exists ...
	I0318 12:50:24.411931 1130316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:24.411968 1130316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:24.426711 1130316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33959
	I0318 12:50:24.427168 1130316 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:24.427612 1130316 main.go:141] libmachine: Using API Version  1
	I0318 12:50:24.427637 1130316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:24.427994 1130316 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:24.428214 1130316 main.go:141] libmachine: (ha-328109-m02) Calling .GetIP
	I0318 12:50:24.431417 1130316 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:50:24.431847 1130316 main.go:141] libmachine: (ha-328109-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:b0:42", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:43:48 +0000 UTC Type:0 Mac:52:54:00:8c:b0:42 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-328109-m02 Clientid:01:52:54:00:8c:b0:42}
	I0318 12:50:24.431876 1130316 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:50:24.432044 1130316 host.go:66] Checking if "ha-328109-m02" exists ...
	I0318 12:50:24.432402 1130316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:24.432442 1130316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:24.448383 1130316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37979
	I0318 12:50:24.448820 1130316 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:24.449254 1130316 main.go:141] libmachine: Using API Version  1
	I0318 12:50:24.449278 1130316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:24.449619 1130316 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:24.449816 1130316 main.go:141] libmachine: (ha-328109-m02) Calling .DriverName
	I0318 12:50:24.450023 1130316 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 12:50:24.450047 1130316 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHHostname
	I0318 12:50:24.452601 1130316 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:50:24.453064 1130316 main.go:141] libmachine: (ha-328109-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:b0:42", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:43:48 +0000 UTC Type:0 Mac:52:54:00:8c:b0:42 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-328109-m02 Clientid:01:52:54:00:8c:b0:42}
	I0318 12:50:24.453098 1130316 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:50:24.453232 1130316 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHPort
	I0318 12:50:24.453413 1130316 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHKeyPath
	I0318 12:50:24.453556 1130316 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHUsername
	I0318 12:50:24.453711 1130316 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m02/id_rsa Username:docker}
	W0318 12:50:25.196515 1130316 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.246:22: connect: no route to host
	I0318 12:50:25.196592 1130316 retry.go:31] will retry after 216.353961ms: dial tcp 192.168.39.246:22: connect: no route to host
	W0318 12:50:28.496594 1130316 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.246:22: connect: no route to host
	W0318 12:50:28.496744 1130316 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.246:22: connect: no route to host
	E0318 12:50:28.496774 1130316 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.246:22: connect: no route to host
	I0318 12:50:28.496788 1130316 status.go:257] ha-328109-m02 status: &{Name:ha-328109-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0318 12:50:28.496820 1130316 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.246:22: connect: no route to host
	I0318 12:50:28.496830 1130316 status.go:255] checking status of ha-328109-m03 ...
	I0318 12:50:28.497144 1130316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:28.497191 1130316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:28.512303 1130316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38765
	I0318 12:50:28.512809 1130316 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:28.513289 1130316 main.go:141] libmachine: Using API Version  1
	I0318 12:50:28.513311 1130316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:28.513668 1130316 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:28.513870 1130316 main.go:141] libmachine: (ha-328109-m03) Calling .GetState
	I0318 12:50:28.515445 1130316 status.go:330] ha-328109-m03 host status = "Running" (err=<nil>)
	I0318 12:50:28.515468 1130316 host.go:66] Checking if "ha-328109-m03" exists ...
	I0318 12:50:28.515807 1130316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:28.515870 1130316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:28.530914 1130316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36143
	I0318 12:50:28.531383 1130316 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:28.531881 1130316 main.go:141] libmachine: Using API Version  1
	I0318 12:50:28.531897 1130316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:28.532198 1130316 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:28.532400 1130316 main.go:141] libmachine: (ha-328109-m03) Calling .GetIP
	I0318 12:50:28.535212 1130316 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:50:28.535632 1130316 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-328109-m03 Clientid:01:52:54:00:13:6e:ac}
	I0318 12:50:28.535671 1130316 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:50:28.535817 1130316 host.go:66] Checking if "ha-328109-m03" exists ...
	I0318 12:50:28.536198 1130316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:28.536246 1130316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:28.552072 1130316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41487
	I0318 12:50:28.552542 1130316 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:28.553030 1130316 main.go:141] libmachine: Using API Version  1
	I0318 12:50:28.553054 1130316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:28.553381 1130316 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:28.553563 1130316 main.go:141] libmachine: (ha-328109-m03) Calling .DriverName
	I0318 12:50:28.553786 1130316 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 12:50:28.553811 1130316 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHHostname
	I0318 12:50:28.556552 1130316 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:50:28.557034 1130316 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-328109-m03 Clientid:01:52:54:00:13:6e:ac}
	I0318 12:50:28.557070 1130316 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:50:28.557190 1130316 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHPort
	I0318 12:50:28.557355 1130316 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHKeyPath
	I0318 12:50:28.557533 1130316 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHUsername
	I0318 12:50:28.557681 1130316 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m03/id_rsa Username:docker}
	I0318 12:50:28.641074 1130316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 12:50:28.659218 1130316 kubeconfig.go:125] found "ha-328109" server: "https://192.168.39.254:8443"
	I0318 12:50:28.659247 1130316 api_server.go:166] Checking apiserver status ...
	I0318 12:50:28.659294 1130316 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 12:50:28.674886 1130316 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1490/cgroup
	W0318 12:50:28.684977 1130316 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1490/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 12:50:28.685039 1130316 ssh_runner.go:195] Run: ls
	I0318 12:50:28.690179 1130316 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 12:50:28.695077 1130316 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 12:50:28.695108 1130316 status.go:422] ha-328109-m03 apiserver status = Running (err=<nil>)
	I0318 12:50:28.695121 1130316 status.go:257] ha-328109-m03 status: &{Name:ha-328109-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 12:50:28.695149 1130316 status.go:255] checking status of ha-328109-m04 ...
	I0318 12:50:28.695449 1130316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:28.695496 1130316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:28.711379 1130316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40913
	I0318 12:50:28.711821 1130316 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:28.712345 1130316 main.go:141] libmachine: Using API Version  1
	I0318 12:50:28.712373 1130316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:28.712738 1130316 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:28.712920 1130316 main.go:141] libmachine: (ha-328109-m04) Calling .GetState
	I0318 12:50:28.714434 1130316 status.go:330] ha-328109-m04 host status = "Running" (err=<nil>)
	I0318 12:50:28.714453 1130316 host.go:66] Checking if "ha-328109-m04" exists ...
	I0318 12:50:28.714756 1130316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:28.714790 1130316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:28.729685 1130316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37341
	I0318 12:50:28.730138 1130316 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:28.730600 1130316 main.go:141] libmachine: Using API Version  1
	I0318 12:50:28.730635 1130316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:28.730936 1130316 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:28.731124 1130316 main.go:141] libmachine: (ha-328109-m04) Calling .GetIP
	I0318 12:50:28.733878 1130316 main.go:141] libmachine: (ha-328109-m04) DBG | domain ha-328109-m04 has defined MAC address 52:54:00:07:cc:71 in network mk-ha-328109
	I0318 12:50:28.734313 1130316 main.go:141] libmachine: (ha-328109-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cc:71", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:47:00 +0000 UTC Type:0 Mac:52:54:00:07:cc:71 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-328109-m04 Clientid:01:52:54:00:07:cc:71}
	I0318 12:50:28.734347 1130316 main.go:141] libmachine: (ha-328109-m04) DBG | domain ha-328109-m04 has defined IP address 192.168.39.48 and MAC address 52:54:00:07:cc:71 in network mk-ha-328109
	I0318 12:50:28.734471 1130316 host.go:66] Checking if "ha-328109-m04" exists ...
	I0318 12:50:28.734758 1130316 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:28.734794 1130316 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:28.749457 1130316 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38205
	I0318 12:50:28.749883 1130316 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:28.750376 1130316 main.go:141] libmachine: Using API Version  1
	I0318 12:50:28.750397 1130316 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:28.750678 1130316 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:28.750882 1130316 main.go:141] libmachine: (ha-328109-m04) Calling .DriverName
	I0318 12:50:28.751066 1130316 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 12:50:28.751089 1130316 main.go:141] libmachine: (ha-328109-m04) Calling .GetSSHHostname
	I0318 12:50:28.753643 1130316 main.go:141] libmachine: (ha-328109-m04) DBG | domain ha-328109-m04 has defined MAC address 52:54:00:07:cc:71 in network mk-ha-328109
	I0318 12:50:28.753979 1130316 main.go:141] libmachine: (ha-328109-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cc:71", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:47:00 +0000 UTC Type:0 Mac:52:54:00:07:cc:71 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-328109-m04 Clientid:01:52:54:00:07:cc:71}
	I0318 12:50:28.754011 1130316 main.go:141] libmachine: (ha-328109-m04) DBG | domain ha-328109-m04 has defined IP address 192.168.39.48 and MAC address 52:54:00:07:cc:71 in network mk-ha-328109
	I0318 12:50:28.754125 1130316 main.go:141] libmachine: (ha-328109-m04) Calling .GetSSHPort
	I0318 12:50:28.754301 1130316 main.go:141] libmachine: (ha-328109-m04) Calling .GetSSHKeyPath
	I0318 12:50:28.754502 1130316 main.go:141] libmachine: (ha-328109-m04) Calling .GetSSHUsername
	I0318 12:50:28.754620 1130316 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m04/id_rsa Username:docker}
	I0318 12:50:28.832415 1130316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 12:50:28.850397 1130316 status.go:257] ha-328109-m04 status: &{Name:ha-328109-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
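
The transcript above shows how the status probe decides each node's state: it SSHes into the guest, runs `df -h /var`, checks kubelet with `systemctl is-active`, and then checks the apiserver by hitting the load-balanced `https://192.168.39.254:8443/healthz` endpoint; a 200 "ok" is reported as `apiserver status = Running`. The following is a minimal, illustrative sketch of that last step only, not minikube's actual `status.go` (the real check validates against the cluster CA; this sketch skips TLS verification purely for brevity).

	// healthz_probe.go: minimal sketch of the apiserver health probe seen in the
	// log ("Checking apiserver healthz at https://192.168.39.254:8443/healthz").
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Assumption: skip certificate verification for the sketch;
			// minikube itself trusts the cluster's CA bundle instead.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.39.254:8443/healthz")
		if err != nil {
			fmt.Println("apiserver status = Stopped:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// A 200 with body "ok" corresponds to "apiserver status = Running" in the log.
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	}
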
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-328109 status -v=7 --alsologtostderr: exit status 3 (3.757105683s)

                                                
                                                
-- stdout --
	ha-328109
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-328109-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-328109-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-328109-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 12:50:31.405715 1130422 out.go:291] Setting OutFile to fd 1 ...
	I0318 12:50:31.405865 1130422 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:50:31.405876 1130422 out.go:304] Setting ErrFile to fd 2...
	I0318 12:50:31.405882 1130422 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:50:31.406537 1130422 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
	I0318 12:50:31.406902 1130422 out.go:298] Setting JSON to false
	I0318 12:50:31.406983 1130422 mustload.go:65] Loading cluster: ha-328109
	I0318 12:50:31.407199 1130422 notify.go:220] Checking for updates...
	I0318 12:50:31.407824 1130422 config.go:182] Loaded profile config "ha-328109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:50:31.407850 1130422 status.go:255] checking status of ha-328109 ...
	I0318 12:50:31.408303 1130422 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:31.408374 1130422 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:31.424654 1130422 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38609
	I0318 12:50:31.425182 1130422 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:31.425866 1130422 main.go:141] libmachine: Using API Version  1
	I0318 12:50:31.425900 1130422 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:31.426278 1130422 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:31.426593 1130422 main.go:141] libmachine: (ha-328109) Calling .GetState
	I0318 12:50:31.428672 1130422 status.go:330] ha-328109 host status = "Running" (err=<nil>)
	I0318 12:50:31.428694 1130422 host.go:66] Checking if "ha-328109" exists ...
	I0318 12:50:31.429135 1130422 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:31.429191 1130422 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:31.445356 1130422 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39071
	I0318 12:50:31.445830 1130422 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:31.446324 1130422 main.go:141] libmachine: Using API Version  1
	I0318 12:50:31.446357 1130422 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:31.446717 1130422 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:31.446912 1130422 main.go:141] libmachine: (ha-328109) Calling .GetIP
	I0318 12:50:31.449868 1130422 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:50:31.450330 1130422 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:50:31.450356 1130422 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:50:31.450486 1130422 host.go:66] Checking if "ha-328109" exists ...
	I0318 12:50:31.450775 1130422 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:31.450811 1130422 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:31.466485 1130422 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33111
	I0318 12:50:31.467019 1130422 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:31.467514 1130422 main.go:141] libmachine: Using API Version  1
	I0318 12:50:31.467536 1130422 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:31.467865 1130422 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:31.468066 1130422 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:50:31.468310 1130422 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 12:50:31.468362 1130422 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:50:31.471382 1130422 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:50:31.471845 1130422 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:50:31.471873 1130422 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:50:31.472024 1130422 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:50:31.472218 1130422 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:50:31.472393 1130422 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:50:31.472569 1130422 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109/id_rsa Username:docker}
	I0318 12:50:31.552863 1130422 ssh_runner.go:195] Run: systemctl --version
	I0318 12:50:31.559966 1130422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 12:50:31.577576 1130422 kubeconfig.go:125] found "ha-328109" server: "https://192.168.39.254:8443"
	I0318 12:50:31.577605 1130422 api_server.go:166] Checking apiserver status ...
	I0318 12:50:31.577651 1130422 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 12:50:31.595891 1130422 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1160/cgroup
	W0318 12:50:31.609093 1130422 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1160/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 12:50:31.609155 1130422 ssh_runner.go:195] Run: ls
	I0318 12:50:31.614417 1130422 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 12:50:31.618777 1130422 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 12:50:31.618798 1130422 status.go:422] ha-328109 apiserver status = Running (err=<nil>)
	I0318 12:50:31.618808 1130422 status.go:257] ha-328109 status: &{Name:ha-328109 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 12:50:31.618827 1130422 status.go:255] checking status of ha-328109-m02 ...
	I0318 12:50:31.619101 1130422 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:31.619134 1130422 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:31.635360 1130422 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39461
	I0318 12:50:31.635901 1130422 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:31.636410 1130422 main.go:141] libmachine: Using API Version  1
	I0318 12:50:31.636438 1130422 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:31.636792 1130422 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:31.637011 1130422 main.go:141] libmachine: (ha-328109-m02) Calling .GetState
	I0318 12:50:31.638626 1130422 status.go:330] ha-328109-m02 host status = "Running" (err=<nil>)
	I0318 12:50:31.638641 1130422 host.go:66] Checking if "ha-328109-m02" exists ...
	I0318 12:50:31.638960 1130422 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:31.639000 1130422 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:31.654777 1130422 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45225
	I0318 12:50:31.655126 1130422 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:31.655578 1130422 main.go:141] libmachine: Using API Version  1
	I0318 12:50:31.655600 1130422 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:31.655902 1130422 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:31.656080 1130422 main.go:141] libmachine: (ha-328109-m02) Calling .GetIP
	I0318 12:50:31.658638 1130422 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:50:31.659067 1130422 main.go:141] libmachine: (ha-328109-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:b0:42", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:43:48 +0000 UTC Type:0 Mac:52:54:00:8c:b0:42 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-328109-m02 Clientid:01:52:54:00:8c:b0:42}
	I0318 12:50:31.659097 1130422 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:50:31.659244 1130422 host.go:66] Checking if "ha-328109-m02" exists ...
	I0318 12:50:31.659548 1130422 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:31.659581 1130422 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:31.675543 1130422 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33435
	I0318 12:50:31.675969 1130422 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:31.676431 1130422 main.go:141] libmachine: Using API Version  1
	I0318 12:50:31.676450 1130422 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:31.676727 1130422 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:31.676907 1130422 main.go:141] libmachine: (ha-328109-m02) Calling .DriverName
	I0318 12:50:31.677079 1130422 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 12:50:31.677099 1130422 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHHostname
	I0318 12:50:31.679772 1130422 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:50:31.680270 1130422 main.go:141] libmachine: (ha-328109-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:b0:42", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:43:48 +0000 UTC Type:0 Mac:52:54:00:8c:b0:42 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-328109-m02 Clientid:01:52:54:00:8c:b0:42}
	I0318 12:50:31.680298 1130422 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:50:31.680462 1130422 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHPort
	I0318 12:50:31.680624 1130422 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHKeyPath
	I0318 12:50:31.680797 1130422 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHUsername
	I0318 12:50:31.680959 1130422 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m02/id_rsa Username:docker}
	W0318 12:50:34.732644 1130422 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.246:22: connect: no route to host
	W0318 12:50:34.732749 1130422 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.246:22: connect: no route to host
	E0318 12:50:34.732766 1130422 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.246:22: connect: no route to host
	I0318 12:50:34.732775 1130422 status.go:257] ha-328109-m02 status: &{Name:ha-328109-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0318 12:50:34.732814 1130422 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.246:22: connect: no route to host
	I0318 12:50:34.732840 1130422 status.go:255] checking status of ha-328109-m03 ...
	I0318 12:50:34.733211 1130422 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:34.733278 1130422 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:34.750089 1130422 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38427
	I0318 12:50:34.750521 1130422 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:34.751071 1130422 main.go:141] libmachine: Using API Version  1
	I0318 12:50:34.751101 1130422 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:34.751444 1130422 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:34.751667 1130422 main.go:141] libmachine: (ha-328109-m03) Calling .GetState
	I0318 12:50:34.753260 1130422 status.go:330] ha-328109-m03 host status = "Running" (err=<nil>)
	I0318 12:50:34.753280 1130422 host.go:66] Checking if "ha-328109-m03" exists ...
	I0318 12:50:34.753572 1130422 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:34.753623 1130422 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:34.769180 1130422 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35677
	I0318 12:50:34.769646 1130422 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:34.770164 1130422 main.go:141] libmachine: Using API Version  1
	I0318 12:50:34.770201 1130422 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:34.770591 1130422 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:34.770775 1130422 main.go:141] libmachine: (ha-328109-m03) Calling .GetIP
	I0318 12:50:34.773534 1130422 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:50:34.774022 1130422 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-328109-m03 Clientid:01:52:54:00:13:6e:ac}
	I0318 12:50:34.774062 1130422 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:50:34.774222 1130422 host.go:66] Checking if "ha-328109-m03" exists ...
	I0318 12:50:34.774536 1130422 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:34.774587 1130422 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:34.790109 1130422 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44485
	I0318 12:50:34.790463 1130422 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:34.790878 1130422 main.go:141] libmachine: Using API Version  1
	I0318 12:50:34.790897 1130422 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:34.791212 1130422 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:34.791373 1130422 main.go:141] libmachine: (ha-328109-m03) Calling .DriverName
	I0318 12:50:34.791513 1130422 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 12:50:34.791528 1130422 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHHostname
	I0318 12:50:34.793910 1130422 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:50:34.794263 1130422 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-328109-m03 Clientid:01:52:54:00:13:6e:ac}
	I0318 12:50:34.794297 1130422 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:50:34.794424 1130422 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHPort
	I0318 12:50:34.794604 1130422 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHKeyPath
	I0318 12:50:34.794764 1130422 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHUsername
	I0318 12:50:34.794890 1130422 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m03/id_rsa Username:docker}
	I0318 12:50:34.881728 1130422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 12:50:34.898265 1130422 kubeconfig.go:125] found "ha-328109" server: "https://192.168.39.254:8443"
	I0318 12:50:34.898298 1130422 api_server.go:166] Checking apiserver status ...
	I0318 12:50:34.898346 1130422 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 12:50:34.920739 1130422 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1490/cgroup
	W0318 12:50:34.938509 1130422 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1490/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 12:50:34.938578 1130422 ssh_runner.go:195] Run: ls
	I0318 12:50:34.943660 1130422 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 12:50:34.948659 1130422 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 12:50:34.948682 1130422 status.go:422] ha-328109-m03 apiserver status = Running (err=<nil>)
	I0318 12:50:34.948690 1130422 status.go:257] ha-328109-m03 status: &{Name:ha-328109-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 12:50:34.948705 1130422 status.go:255] checking status of ha-328109-m04 ...
	I0318 12:50:34.948997 1130422 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:34.949031 1130422 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:34.964011 1130422 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36647
	I0318 12:50:34.964517 1130422 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:34.965024 1130422 main.go:141] libmachine: Using API Version  1
	I0318 12:50:34.965047 1130422 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:34.965389 1130422 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:34.965609 1130422 main.go:141] libmachine: (ha-328109-m04) Calling .GetState
	I0318 12:50:34.967382 1130422 status.go:330] ha-328109-m04 host status = "Running" (err=<nil>)
	I0318 12:50:34.967403 1130422 host.go:66] Checking if "ha-328109-m04" exists ...
	I0318 12:50:34.967691 1130422 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:34.967735 1130422 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:34.982755 1130422 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39347
	I0318 12:50:34.983195 1130422 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:34.983661 1130422 main.go:141] libmachine: Using API Version  1
	I0318 12:50:34.983689 1130422 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:34.984078 1130422 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:34.984287 1130422 main.go:141] libmachine: (ha-328109-m04) Calling .GetIP
	I0318 12:50:34.987272 1130422 main.go:141] libmachine: (ha-328109-m04) DBG | domain ha-328109-m04 has defined MAC address 52:54:00:07:cc:71 in network mk-ha-328109
	I0318 12:50:34.987719 1130422 main.go:141] libmachine: (ha-328109-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cc:71", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:47:00 +0000 UTC Type:0 Mac:52:54:00:07:cc:71 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-328109-m04 Clientid:01:52:54:00:07:cc:71}
	I0318 12:50:34.987753 1130422 main.go:141] libmachine: (ha-328109-m04) DBG | domain ha-328109-m04 has defined IP address 192.168.39.48 and MAC address 52:54:00:07:cc:71 in network mk-ha-328109
	I0318 12:50:34.987901 1130422 host.go:66] Checking if "ha-328109-m04" exists ...
	I0318 12:50:34.988181 1130422 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:34.988217 1130422 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:35.002573 1130422 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34867
	I0318 12:50:35.002957 1130422 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:35.003425 1130422 main.go:141] libmachine: Using API Version  1
	I0318 12:50:35.003453 1130422 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:35.003759 1130422 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:35.003953 1130422 main.go:141] libmachine: (ha-328109-m04) Calling .DriverName
	I0318 12:50:35.004150 1130422 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 12:50:35.004183 1130422 main.go:141] libmachine: (ha-328109-m04) Calling .GetSSHHostname
	I0318 12:50:35.006849 1130422 main.go:141] libmachine: (ha-328109-m04) DBG | domain ha-328109-m04 has defined MAC address 52:54:00:07:cc:71 in network mk-ha-328109
	I0318 12:50:35.007246 1130422 main.go:141] libmachine: (ha-328109-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cc:71", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:47:00 +0000 UTC Type:0 Mac:52:54:00:07:cc:71 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-328109-m04 Clientid:01:52:54:00:07:cc:71}
	I0318 12:50:35.007277 1130422 main.go:141] libmachine: (ha-328109-m04) DBG | domain ha-328109-m04 has defined IP address 192.168.39.48 and MAC address 52:54:00:07:cc:71 in network mk-ha-328109
	I0318 12:50:35.007412 1130422 main.go:141] libmachine: (ha-328109-m04) Calling .GetSSHPort
	I0318 12:50:35.007603 1130422 main.go:141] libmachine: (ha-328109-m04) Calling .GetSSHKeyPath
	I0318 12:50:35.007768 1130422 main.go:141] libmachine: (ha-328109-m04) Calling .GetSSHUsername
	I0318 12:50:35.007882 1130422 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m04/id_rsa Username:docker}
	I0318 12:50:35.089324 1130422 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 12:50:35.105957 1130422 status.go:257] ha-328109-m04 status: &{Name:ha-328109-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
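
As in the previous run, ha-328109-m02 is marked `host: Error` / `kubelet: Nonexistent` because the SSH dial to 192.168.39.246:22 keeps failing with "no route to host", so no command can be run on the guest. The sketch below reproduces just that reachability check with a plain TCP dial and timeout; it is an assumption-laden stand-in for minikube's sshutil retry logic, not the real code.

	// dial_probe.go: minimal sketch of the reachability check that fails above.
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		addr := "192.168.39.246:22" // m02's SSH endpoint, taken from the log
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err != nil {
			// On an unreachable guest this surfaces as "connect: no route to host",
			// which the status command reports as Host:Error / Kubelet:Nonexistent.
			fmt.Println("host status = Error:", err)
			return
		}
		conn.Close()
		fmt.Println("host status = Running (SSH port reachable)")
	}
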
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-328109 status -v=7 --alsologtostderr: exit status 3 (3.781340205s)

                                                
                                                
-- stdout --
	ha-328109
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-328109-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-328109-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-328109-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 12:50:39.880186 1130529 out.go:291] Setting OutFile to fd 1 ...
	I0318 12:50:39.880314 1130529 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:50:39.880344 1130529 out.go:304] Setting ErrFile to fd 2...
	I0318 12:50:39.880354 1130529 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:50:39.880554 1130529 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
	I0318 12:50:39.880725 1130529 out.go:298] Setting JSON to false
	I0318 12:50:39.880767 1130529 mustload.go:65] Loading cluster: ha-328109
	I0318 12:50:39.880880 1130529 notify.go:220] Checking for updates...
	I0318 12:50:39.881149 1130529 config.go:182] Loaded profile config "ha-328109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:50:39.881166 1130529 status.go:255] checking status of ha-328109 ...
	I0318 12:50:39.881520 1130529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:39.881580 1130529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:39.904831 1130529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37673
	I0318 12:50:39.905313 1130529 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:39.905913 1130529 main.go:141] libmachine: Using API Version  1
	I0318 12:50:39.905937 1130529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:39.906283 1130529 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:39.906505 1130529 main.go:141] libmachine: (ha-328109) Calling .GetState
	I0318 12:50:39.908207 1130529 status.go:330] ha-328109 host status = "Running" (err=<nil>)
	I0318 12:50:39.908230 1130529 host.go:66] Checking if "ha-328109" exists ...
	I0318 12:50:39.908586 1130529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:39.908653 1130529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:39.923610 1130529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40685
	I0318 12:50:39.924035 1130529 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:39.924524 1130529 main.go:141] libmachine: Using API Version  1
	I0318 12:50:39.924561 1130529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:39.924924 1130529 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:39.925118 1130529 main.go:141] libmachine: (ha-328109) Calling .GetIP
	I0318 12:50:39.927931 1130529 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:50:39.928364 1130529 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:50:39.928398 1130529 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:50:39.928533 1130529 host.go:66] Checking if "ha-328109" exists ...
	I0318 12:50:39.928811 1130529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:39.928857 1130529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:39.944632 1130529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40439
	I0318 12:50:39.945020 1130529 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:39.945463 1130529 main.go:141] libmachine: Using API Version  1
	I0318 12:50:39.945478 1130529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:39.945782 1130529 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:39.945965 1130529 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:50:39.946145 1130529 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 12:50:39.946170 1130529 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:50:39.948661 1130529 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:50:39.949042 1130529 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:50:39.949080 1130529 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:50:39.949206 1130529 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:50:39.949396 1130529 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:50:39.949549 1130529 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:50:39.949684 1130529 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109/id_rsa Username:docker}
	I0318 12:50:40.032024 1130529 ssh_runner.go:195] Run: systemctl --version
	I0318 12:50:40.041141 1130529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 12:50:40.060014 1130529 kubeconfig.go:125] found "ha-328109" server: "https://192.168.39.254:8443"
	I0318 12:50:40.060047 1130529 api_server.go:166] Checking apiserver status ...
	I0318 12:50:40.060089 1130529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 12:50:40.076594 1130529 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1160/cgroup
	W0318 12:50:40.091135 1130529 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1160/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 12:50:40.091218 1130529 ssh_runner.go:195] Run: ls
	I0318 12:50:40.096558 1130529 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 12:50:40.103525 1130529 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 12:50:40.103553 1130529 status.go:422] ha-328109 apiserver status = Running (err=<nil>)
	I0318 12:50:40.103563 1130529 status.go:257] ha-328109 status: &{Name:ha-328109 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 12:50:40.103584 1130529 status.go:255] checking status of ha-328109-m02 ...
	I0318 12:50:40.103913 1130529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:40.103954 1130529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:40.121177 1130529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43927
	I0318 12:50:40.121636 1130529 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:40.122185 1130529 main.go:141] libmachine: Using API Version  1
	I0318 12:50:40.122210 1130529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:40.122582 1130529 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:40.122798 1130529 main.go:141] libmachine: (ha-328109-m02) Calling .GetState
	I0318 12:50:40.124493 1130529 status.go:330] ha-328109-m02 host status = "Running" (err=<nil>)
	I0318 12:50:40.124517 1130529 host.go:66] Checking if "ha-328109-m02" exists ...
	I0318 12:50:40.124822 1130529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:40.124863 1130529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:40.140632 1130529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46093
	I0318 12:50:40.141077 1130529 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:40.141639 1130529 main.go:141] libmachine: Using API Version  1
	I0318 12:50:40.141666 1130529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:40.142029 1130529 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:40.142245 1130529 main.go:141] libmachine: (ha-328109-m02) Calling .GetIP
	I0318 12:50:40.145163 1130529 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:50:40.145582 1130529 main.go:141] libmachine: (ha-328109-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:b0:42", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:43:48 +0000 UTC Type:0 Mac:52:54:00:8c:b0:42 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-328109-m02 Clientid:01:52:54:00:8c:b0:42}
	I0318 12:50:40.145607 1130529 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:50:40.145762 1130529 host.go:66] Checking if "ha-328109-m02" exists ...
	I0318 12:50:40.146075 1130529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:40.146110 1130529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:40.162031 1130529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33091
	I0318 12:50:40.162455 1130529 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:40.162892 1130529 main.go:141] libmachine: Using API Version  1
	I0318 12:50:40.162913 1130529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:40.163210 1130529 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:40.163410 1130529 main.go:141] libmachine: (ha-328109-m02) Calling .DriverName
	I0318 12:50:40.163615 1130529 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 12:50:40.163641 1130529 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHHostname
	I0318 12:50:40.166282 1130529 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:50:40.166772 1130529 main.go:141] libmachine: (ha-328109-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:b0:42", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:43:48 +0000 UTC Type:0 Mac:52:54:00:8c:b0:42 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-328109-m02 Clientid:01:52:54:00:8c:b0:42}
	I0318 12:50:40.166791 1130529 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:50:40.166954 1130529 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHPort
	I0318 12:50:40.167106 1130529 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHKeyPath
	I0318 12:50:40.167288 1130529 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHUsername
	I0318 12:50:40.167441 1130529 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m02/id_rsa Username:docker}
	W0318 12:50:43.244639 1130529 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.246:22: connect: no route to host
	W0318 12:50:43.244768 1130529 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.246:22: connect: no route to host
	E0318 12:50:43.244793 1130529 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.246:22: connect: no route to host
	I0318 12:50:43.244806 1130529 status.go:257] ha-328109-m02 status: &{Name:ha-328109-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0318 12:50:43.244831 1130529 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.246:22: connect: no route to host
	I0318 12:50:43.244858 1130529 status.go:255] checking status of ha-328109-m03 ...
	I0318 12:50:43.245330 1130529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:43.245392 1130529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:43.260986 1130529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43427
	I0318 12:50:43.261552 1130529 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:43.262202 1130529 main.go:141] libmachine: Using API Version  1
	I0318 12:50:43.262232 1130529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:43.262565 1130529 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:43.262762 1130529 main.go:141] libmachine: (ha-328109-m03) Calling .GetState
	I0318 12:50:43.264268 1130529 status.go:330] ha-328109-m03 host status = "Running" (err=<nil>)
	I0318 12:50:43.264287 1130529 host.go:66] Checking if "ha-328109-m03" exists ...
	I0318 12:50:43.264618 1130529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:43.264673 1130529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:43.279590 1130529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43683
	I0318 12:50:43.280004 1130529 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:43.280454 1130529 main.go:141] libmachine: Using API Version  1
	I0318 12:50:43.280478 1130529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:43.280832 1130529 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:43.281033 1130529 main.go:141] libmachine: (ha-328109-m03) Calling .GetIP
	I0318 12:50:43.283689 1130529 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:50:43.284077 1130529 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-328109-m03 Clientid:01:52:54:00:13:6e:ac}
	I0318 12:50:43.284113 1130529 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:50:43.284229 1130529 host.go:66] Checking if "ha-328109-m03" exists ...
	I0318 12:50:43.284579 1130529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:43.284628 1130529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:43.299079 1130529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37237
	I0318 12:50:43.299492 1130529 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:43.299970 1130529 main.go:141] libmachine: Using API Version  1
	I0318 12:50:43.299997 1130529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:43.300424 1130529 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:43.300603 1130529 main.go:141] libmachine: (ha-328109-m03) Calling .DriverName
	I0318 12:50:43.300844 1130529 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 12:50:43.300869 1130529 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHHostname
	I0318 12:50:43.303570 1130529 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:50:43.304018 1130529 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-328109-m03 Clientid:01:52:54:00:13:6e:ac}
	I0318 12:50:43.304045 1130529 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:50:43.304176 1130529 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHPort
	I0318 12:50:43.304371 1130529 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHKeyPath
	I0318 12:50:43.304531 1130529 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHUsername
	I0318 12:50:43.304678 1130529 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m03/id_rsa Username:docker}
	I0318 12:50:43.389049 1130529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 12:50:43.406334 1130529 kubeconfig.go:125] found "ha-328109" server: "https://192.168.39.254:8443"
	I0318 12:50:43.406372 1130529 api_server.go:166] Checking apiserver status ...
	I0318 12:50:43.406407 1130529 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 12:50:43.422731 1130529 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1490/cgroup
	W0318 12:50:43.434922 1130529 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1490/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 12:50:43.434991 1130529 ssh_runner.go:195] Run: ls
	I0318 12:50:43.440154 1130529 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 12:50:43.445082 1130529 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 12:50:43.445110 1130529 status.go:422] ha-328109-m03 apiserver status = Running (err=<nil>)
	I0318 12:50:43.445122 1130529 status.go:257] ha-328109-m03 status: &{Name:ha-328109-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 12:50:43.445143 1130529 status.go:255] checking status of ha-328109-m04 ...
	I0318 12:50:43.445499 1130529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:43.445542 1130529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:43.460909 1130529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35995
	I0318 12:50:43.461320 1130529 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:43.461874 1130529 main.go:141] libmachine: Using API Version  1
	I0318 12:50:43.461903 1130529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:43.462258 1130529 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:43.462474 1130529 main.go:141] libmachine: (ha-328109-m04) Calling .GetState
	I0318 12:50:43.464139 1130529 status.go:330] ha-328109-m04 host status = "Running" (err=<nil>)
	I0318 12:50:43.464172 1130529 host.go:66] Checking if "ha-328109-m04" exists ...
	I0318 12:50:43.464507 1130529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:43.464540 1130529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:43.480314 1130529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40983
	I0318 12:50:43.480794 1130529 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:43.481294 1130529 main.go:141] libmachine: Using API Version  1
	I0318 12:50:43.481321 1130529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:43.481685 1130529 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:43.481932 1130529 main.go:141] libmachine: (ha-328109-m04) Calling .GetIP
	I0318 12:50:43.484497 1130529 main.go:141] libmachine: (ha-328109-m04) DBG | domain ha-328109-m04 has defined MAC address 52:54:00:07:cc:71 in network mk-ha-328109
	I0318 12:50:43.484932 1130529 main.go:141] libmachine: (ha-328109-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cc:71", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:47:00 +0000 UTC Type:0 Mac:52:54:00:07:cc:71 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-328109-m04 Clientid:01:52:54:00:07:cc:71}
	I0318 12:50:43.484976 1130529 main.go:141] libmachine: (ha-328109-m04) DBG | domain ha-328109-m04 has defined IP address 192.168.39.48 and MAC address 52:54:00:07:cc:71 in network mk-ha-328109
	I0318 12:50:43.485088 1130529 host.go:66] Checking if "ha-328109-m04" exists ...
	I0318 12:50:43.485483 1130529 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:43.485532 1130529 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:43.499806 1130529 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43873
	I0318 12:50:43.500173 1130529 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:43.500638 1130529 main.go:141] libmachine: Using API Version  1
	I0318 12:50:43.500669 1130529 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:43.501008 1130529 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:43.501209 1130529 main.go:141] libmachine: (ha-328109-m04) Calling .DriverName
	I0318 12:50:43.501391 1130529 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 12:50:43.501413 1130529 main.go:141] libmachine: (ha-328109-m04) Calling .GetSSHHostname
	I0318 12:50:43.504287 1130529 main.go:141] libmachine: (ha-328109-m04) DBG | domain ha-328109-m04 has defined MAC address 52:54:00:07:cc:71 in network mk-ha-328109
	I0318 12:50:43.504734 1130529 main.go:141] libmachine: (ha-328109-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cc:71", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:47:00 +0000 UTC Type:0 Mac:52:54:00:07:cc:71 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-328109-m04 Clientid:01:52:54:00:07:cc:71}
	I0318 12:50:43.504759 1130529 main.go:141] libmachine: (ha-328109-m04) DBG | domain ha-328109-m04 has defined IP address 192.168.39.48 and MAC address 52:54:00:07:cc:71 in network mk-ha-328109
	I0318 12:50:43.504937 1130529 main.go:141] libmachine: (ha-328109-m04) Calling .GetSSHPort
	I0318 12:50:43.505089 1130529 main.go:141] libmachine: (ha-328109-m04) Calling .GetSSHKeyPath
	I0318 12:50:43.505168 1130529 main.go:141] libmachine: (ha-328109-m04) Calling .GetSSHUsername
	I0318 12:50:43.505259 1130529 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m04/id_rsa Username:docker}
	I0318 12:50:43.585073 1130529 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 12:50:43.601448 1130529 status.go:257] ha-328109-m04 status: &{Name:ha-328109-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
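The per-node probes behind each line of the status output above can be replayed by hand. A minimal sketch, assuming the IP 192.168.39.241 (ha-328109-m03), the "docker" SSH user, and the key path shown in the log are still current; the load-balancer endpoint https://192.168.39.254:8443 is the one the log reports from the node's kubeconfig:

	KEY=/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m03/id_rsa
	# storage capacity of /var, the same probe that failed with "no route to host" against m02
	ssh -i "$KEY" docker@192.168.39.241 "df -h /var | awk 'NR==2{print \$5}'"
	# kubelet service check, as issued by status.go
	ssh -i "$KEY" docker@192.168.39.241 "sudo systemctl is-active --quiet service kubelet"; echo "kubelet active: $?"
	# apiserver health through the HA virtual IP (self-signed cert, hence -k)
	curl -k https://192.168.39.254:8443/healthz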
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-328109 status -v=7 --alsologtostderr: exit status 7 (674.872521ms)

                                                
                                                
-- stdout --
	ha-328109
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-328109-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-328109-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-328109-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 12:50:49.857798 1130649 out.go:291] Setting OutFile to fd 1 ...
	I0318 12:50:49.858083 1130649 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:50:49.858096 1130649 out.go:304] Setting ErrFile to fd 2...
	I0318 12:50:49.858101 1130649 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:50:49.858303 1130649 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
	I0318 12:50:49.858486 1130649 out.go:298] Setting JSON to false
	I0318 12:50:49.858527 1130649 mustload.go:65] Loading cluster: ha-328109
	I0318 12:50:49.858634 1130649 notify.go:220] Checking for updates...
	I0318 12:50:49.859505 1130649 config.go:182] Loaded profile config "ha-328109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:50:49.859571 1130649 status.go:255] checking status of ha-328109 ...
	I0318 12:50:49.860894 1130649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:49.860942 1130649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:49.877910 1130649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40027
	I0318 12:50:49.878442 1130649 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:49.878961 1130649 main.go:141] libmachine: Using API Version  1
	I0318 12:50:49.879010 1130649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:49.879457 1130649 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:49.879713 1130649 main.go:141] libmachine: (ha-328109) Calling .GetState
	I0318 12:50:49.885849 1130649 status.go:330] ha-328109 host status = "Running" (err=<nil>)
	I0318 12:50:49.885875 1130649 host.go:66] Checking if "ha-328109" exists ...
	I0318 12:50:49.886281 1130649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:49.886323 1130649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:49.901820 1130649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33957
	I0318 12:50:49.902288 1130649 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:49.902787 1130649 main.go:141] libmachine: Using API Version  1
	I0318 12:50:49.902812 1130649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:49.903143 1130649 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:49.903353 1130649 main.go:141] libmachine: (ha-328109) Calling .GetIP
	I0318 12:50:49.906327 1130649 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:50:49.906940 1130649 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:50:49.906986 1130649 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:50:49.907109 1130649 host.go:66] Checking if "ha-328109" exists ...
	I0318 12:50:49.907416 1130649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:49.907460 1130649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:49.923294 1130649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37687
	I0318 12:50:49.923787 1130649 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:49.924403 1130649 main.go:141] libmachine: Using API Version  1
	I0318 12:50:49.924436 1130649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:49.924755 1130649 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:49.925000 1130649 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:50:49.925242 1130649 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 12:50:49.925273 1130649 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:50:49.928539 1130649 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:50:49.929073 1130649 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:50:49.929108 1130649 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:50:49.929207 1130649 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:50:49.929402 1130649 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:50:49.929590 1130649 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:50:49.929750 1130649 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109/id_rsa Username:docker}
	I0318 12:50:50.015436 1130649 ssh_runner.go:195] Run: systemctl --version
	I0318 12:50:50.024016 1130649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 12:50:50.042187 1130649 kubeconfig.go:125] found "ha-328109" server: "https://192.168.39.254:8443"
	I0318 12:50:50.042222 1130649 api_server.go:166] Checking apiserver status ...
	I0318 12:50:50.042281 1130649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 12:50:50.059683 1130649 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1160/cgroup
	W0318 12:50:50.070250 1130649 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1160/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 12:50:50.070330 1130649 ssh_runner.go:195] Run: ls
	I0318 12:50:50.076123 1130649 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 12:50:50.081075 1130649 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 12:50:50.081104 1130649 status.go:422] ha-328109 apiserver status = Running (err=<nil>)
	I0318 12:50:50.081118 1130649 status.go:257] ha-328109 status: &{Name:ha-328109 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 12:50:50.081140 1130649 status.go:255] checking status of ha-328109-m02 ...
	I0318 12:50:50.081507 1130649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:50.081561 1130649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:50.096680 1130649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39993
	I0318 12:50:50.097156 1130649 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:50.097651 1130649 main.go:141] libmachine: Using API Version  1
	I0318 12:50:50.097677 1130649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:50.098005 1130649 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:50.098188 1130649 main.go:141] libmachine: (ha-328109-m02) Calling .GetState
	I0318 12:50:50.100130 1130649 status.go:330] ha-328109-m02 host status = "Stopped" (err=<nil>)
	I0318 12:50:50.100149 1130649 status.go:343] host is not running, skipping remaining checks
	I0318 12:50:50.100158 1130649 status.go:257] ha-328109-m02 status: &{Name:ha-328109-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 12:50:50.100177 1130649 status.go:255] checking status of ha-328109-m03 ...
	I0318 12:50:50.100499 1130649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:50.100544 1130649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:50.116916 1130649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39821
	I0318 12:50:50.117387 1130649 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:50.117864 1130649 main.go:141] libmachine: Using API Version  1
	I0318 12:50:50.117889 1130649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:50.118200 1130649 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:50.118404 1130649 main.go:141] libmachine: (ha-328109-m03) Calling .GetState
	I0318 12:50:50.120131 1130649 status.go:330] ha-328109-m03 host status = "Running" (err=<nil>)
	I0318 12:50:50.120148 1130649 host.go:66] Checking if "ha-328109-m03" exists ...
	I0318 12:50:50.120484 1130649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:50.120526 1130649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:50.135579 1130649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43787
	I0318 12:50:50.136068 1130649 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:50.136550 1130649 main.go:141] libmachine: Using API Version  1
	I0318 12:50:50.136572 1130649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:50.136921 1130649 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:50.137081 1130649 main.go:141] libmachine: (ha-328109-m03) Calling .GetIP
	I0318 12:50:50.139497 1130649 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:50:50.139995 1130649 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-328109-m03 Clientid:01:52:54:00:13:6e:ac}
	I0318 12:50:50.140015 1130649 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:50:50.140213 1130649 host.go:66] Checking if "ha-328109-m03" exists ...
	I0318 12:50:50.140664 1130649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:50.140722 1130649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:50.155546 1130649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43547
	I0318 12:50:50.156101 1130649 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:50.156679 1130649 main.go:141] libmachine: Using API Version  1
	I0318 12:50:50.156706 1130649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:50.157109 1130649 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:50.157343 1130649 main.go:141] libmachine: (ha-328109-m03) Calling .DriverName
	I0318 12:50:50.157582 1130649 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 12:50:50.157610 1130649 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHHostname
	I0318 12:50:50.160539 1130649 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:50:50.161029 1130649 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-328109-m03 Clientid:01:52:54:00:13:6e:ac}
	I0318 12:50:50.161058 1130649 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:50:50.161242 1130649 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHPort
	I0318 12:50:50.161422 1130649 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHKeyPath
	I0318 12:50:50.161573 1130649 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHUsername
	I0318 12:50:50.161715 1130649 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m03/id_rsa Username:docker}
	I0318 12:50:50.249534 1130649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 12:50:50.267677 1130649 kubeconfig.go:125] found "ha-328109" server: "https://192.168.39.254:8443"
	I0318 12:50:50.267705 1130649 api_server.go:166] Checking apiserver status ...
	I0318 12:50:50.267737 1130649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 12:50:50.287358 1130649 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1490/cgroup
	W0318 12:50:50.299680 1130649 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1490/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 12:50:50.299730 1130649 ssh_runner.go:195] Run: ls
	I0318 12:50:50.306451 1130649 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 12:50:50.313642 1130649 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 12:50:50.313681 1130649 status.go:422] ha-328109-m03 apiserver status = Running (err=<nil>)
	I0318 12:50:50.313695 1130649 status.go:257] ha-328109-m03 status: &{Name:ha-328109-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 12:50:50.313721 1130649 status.go:255] checking status of ha-328109-m04 ...
	I0318 12:50:50.314073 1130649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:50.314112 1130649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:50.329191 1130649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44665
	I0318 12:50:50.329653 1130649 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:50.330125 1130649 main.go:141] libmachine: Using API Version  1
	I0318 12:50:50.330152 1130649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:50.330432 1130649 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:50.330657 1130649 main.go:141] libmachine: (ha-328109-m04) Calling .GetState
	I0318 12:50:50.332275 1130649 status.go:330] ha-328109-m04 host status = "Running" (err=<nil>)
	I0318 12:50:50.332295 1130649 host.go:66] Checking if "ha-328109-m04" exists ...
	I0318 12:50:50.332613 1130649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:50.332661 1130649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:50.347333 1130649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42773
	I0318 12:50:50.347774 1130649 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:50.348286 1130649 main.go:141] libmachine: Using API Version  1
	I0318 12:50:50.348311 1130649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:50.348666 1130649 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:50.348894 1130649 main.go:141] libmachine: (ha-328109-m04) Calling .GetIP
	I0318 12:50:50.351812 1130649 main.go:141] libmachine: (ha-328109-m04) DBG | domain ha-328109-m04 has defined MAC address 52:54:00:07:cc:71 in network mk-ha-328109
	I0318 12:50:50.352231 1130649 main.go:141] libmachine: (ha-328109-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cc:71", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:47:00 +0000 UTC Type:0 Mac:52:54:00:07:cc:71 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-328109-m04 Clientid:01:52:54:00:07:cc:71}
	I0318 12:50:50.352255 1130649 main.go:141] libmachine: (ha-328109-m04) DBG | domain ha-328109-m04 has defined IP address 192.168.39.48 and MAC address 52:54:00:07:cc:71 in network mk-ha-328109
	I0318 12:50:50.352467 1130649 host.go:66] Checking if "ha-328109-m04" exists ...
	I0318 12:50:50.352838 1130649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:50.352877 1130649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:50.367327 1130649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37707
	I0318 12:50:50.367760 1130649 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:50.368196 1130649 main.go:141] libmachine: Using API Version  1
	I0318 12:50:50.368215 1130649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:50.368585 1130649 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:50.368771 1130649 main.go:141] libmachine: (ha-328109-m04) Calling .DriverName
	I0318 12:50:50.368973 1130649 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 12:50:50.368995 1130649 main.go:141] libmachine: (ha-328109-m04) Calling .GetSSHHostname
	I0318 12:50:50.371760 1130649 main.go:141] libmachine: (ha-328109-m04) DBG | domain ha-328109-m04 has defined MAC address 52:54:00:07:cc:71 in network mk-ha-328109
	I0318 12:50:50.372190 1130649 main.go:141] libmachine: (ha-328109-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cc:71", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:47:00 +0000 UTC Type:0 Mac:52:54:00:07:cc:71 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-328109-m04 Clientid:01:52:54:00:07:cc:71}
	I0318 12:50:50.372241 1130649 main.go:141] libmachine: (ha-328109-m04) DBG | domain ha-328109-m04 has defined IP address 192.168.39.48 and MAC address 52:54:00:07:cc:71 in network mk-ha-328109
	I0318 12:50:50.372422 1130649 main.go:141] libmachine: (ha-328109-m04) Calling .GetSSHPort
	I0318 12:50:50.372613 1130649 main.go:141] libmachine: (ha-328109-m04) Calling .GetSSHKeyPath
	I0318 12:50:50.372779 1130649 main.go:141] libmachine: (ha-328109-m04) Calling .GetSSHUsername
	I0318 12:50:50.372922 1130649 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m04/id_rsa Username:docker}
	I0318 12:50:50.453073 1130649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 12:50:50.469933 1130649 status.go:257] ha-328109-m04 status: &{Name:ha-328109-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
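In the first status run above, the kvm2 driver still reported the ha-328109-m02 host as "Running" but the SSH dial failed with "no route to host", so status.go degraded the node to Host:Error / Kubelet:Nonexistent. In this run the driver reports the domain as "Stopped" and the SSH probes are skipped ("host is not running, skipping remaining checks"). A quick way to tell the two cases apart by hand, assuming the m02 IP from the log is unchanged:

	# is anything answering on the node's SSH port?
	nc -vz -w 3 192.168.39.246 22
	# the aggregated view, using the same invocation as the test
	out/minikube-linux-amd64 -p ha-328109 status -v=7 --alsologtostderr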
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-328109 status -v=7 --alsologtostderr: exit status 7 (685.948084ms)

                                                
                                                
-- stdout --
	ha-328109
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-328109-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-328109-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-328109-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 12:50:56.145312 1130737 out.go:291] Setting OutFile to fd 1 ...
	I0318 12:50:56.145609 1130737 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:50:56.145621 1130737 out.go:304] Setting ErrFile to fd 2...
	I0318 12:50:56.145625 1130737 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:50:56.145847 1130737 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
	I0318 12:50:56.146043 1130737 out.go:298] Setting JSON to false
	I0318 12:50:56.146089 1130737 mustload.go:65] Loading cluster: ha-328109
	I0318 12:50:56.146221 1130737 notify.go:220] Checking for updates...
	I0318 12:50:56.146642 1130737 config.go:182] Loaded profile config "ha-328109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:50:56.146666 1130737 status.go:255] checking status of ha-328109 ...
	I0318 12:50:56.147184 1130737 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:56.147243 1130737 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:56.165702 1130737 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42607
	I0318 12:50:56.166190 1130737 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:56.166923 1130737 main.go:141] libmachine: Using API Version  1
	I0318 12:50:56.166954 1130737 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:56.167365 1130737 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:56.167595 1130737 main.go:141] libmachine: (ha-328109) Calling .GetState
	I0318 12:50:56.169575 1130737 status.go:330] ha-328109 host status = "Running" (err=<nil>)
	I0318 12:50:56.169595 1130737 host.go:66] Checking if "ha-328109" exists ...
	I0318 12:50:56.169947 1130737 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:56.169988 1130737 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:56.185238 1130737 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37703
	I0318 12:50:56.185716 1130737 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:56.186178 1130737 main.go:141] libmachine: Using API Version  1
	I0318 12:50:56.186200 1130737 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:56.186495 1130737 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:56.186715 1130737 main.go:141] libmachine: (ha-328109) Calling .GetIP
	I0318 12:50:56.189467 1130737 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:50:56.189877 1130737 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:50:56.189914 1130737 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:50:56.190041 1130737 host.go:66] Checking if "ha-328109" exists ...
	I0318 12:50:56.190369 1130737 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:56.190416 1130737 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:56.206986 1130737 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36167
	I0318 12:50:56.207422 1130737 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:56.207846 1130737 main.go:141] libmachine: Using API Version  1
	I0318 12:50:56.207872 1130737 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:56.208173 1130737 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:56.208387 1130737 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:50:56.208614 1130737 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 12:50:56.208651 1130737 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:50:56.211681 1130737 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:50:56.212103 1130737 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:50:56.212133 1130737 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:50:56.212337 1130737 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:50:56.212505 1130737 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:50:56.212667 1130737 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:50:56.212823 1130737 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109/id_rsa Username:docker}
	I0318 12:50:56.297985 1130737 ssh_runner.go:195] Run: systemctl --version
	I0318 12:50:56.305934 1130737 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 12:50:56.339764 1130737 kubeconfig.go:125] found "ha-328109" server: "https://192.168.39.254:8443"
	I0318 12:50:56.339801 1130737 api_server.go:166] Checking apiserver status ...
	I0318 12:50:56.339854 1130737 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 12:50:56.357677 1130737 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1160/cgroup
	W0318 12:50:56.369346 1130737 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1160/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 12:50:56.369429 1130737 ssh_runner.go:195] Run: ls
	I0318 12:50:56.376105 1130737 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 12:50:56.383132 1130737 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 12:50:56.383161 1130737 status.go:422] ha-328109 apiserver status = Running (err=<nil>)
	I0318 12:50:56.383172 1130737 status.go:257] ha-328109 status: &{Name:ha-328109 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 12:50:56.383192 1130737 status.go:255] checking status of ha-328109-m02 ...
	I0318 12:50:56.383488 1130737 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:56.383537 1130737 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:56.400602 1130737 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42687
	I0318 12:50:56.401103 1130737 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:56.401608 1130737 main.go:141] libmachine: Using API Version  1
	I0318 12:50:56.401634 1130737 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:56.402039 1130737 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:56.402273 1130737 main.go:141] libmachine: (ha-328109-m02) Calling .GetState
	I0318 12:50:56.403925 1130737 status.go:330] ha-328109-m02 host status = "Stopped" (err=<nil>)
	I0318 12:50:56.403940 1130737 status.go:343] host is not running, skipping remaining checks
	I0318 12:50:56.403947 1130737 status.go:257] ha-328109-m02 status: &{Name:ha-328109-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 12:50:56.403973 1130737 status.go:255] checking status of ha-328109-m03 ...
	I0318 12:50:56.404390 1130737 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:56.404437 1130737 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:56.420119 1130737 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45477
	I0318 12:50:56.420555 1130737 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:56.421048 1130737 main.go:141] libmachine: Using API Version  1
	I0318 12:50:56.421074 1130737 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:56.421438 1130737 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:56.421681 1130737 main.go:141] libmachine: (ha-328109-m03) Calling .GetState
	I0318 12:50:56.423562 1130737 status.go:330] ha-328109-m03 host status = "Running" (err=<nil>)
	I0318 12:50:56.423586 1130737 host.go:66] Checking if "ha-328109-m03" exists ...
	I0318 12:50:56.424012 1130737 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:56.424067 1130737 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:56.439387 1130737 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43279
	I0318 12:50:56.439873 1130737 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:56.440373 1130737 main.go:141] libmachine: Using API Version  1
	I0318 12:50:56.440400 1130737 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:56.440703 1130737 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:56.440915 1130737 main.go:141] libmachine: (ha-328109-m03) Calling .GetIP
	I0318 12:50:56.443512 1130737 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:50:56.443957 1130737 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-328109-m03 Clientid:01:52:54:00:13:6e:ac}
	I0318 12:50:56.443998 1130737 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:50:56.444115 1130737 host.go:66] Checking if "ha-328109-m03" exists ...
	I0318 12:50:56.444465 1130737 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:56.444516 1130737 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:56.459604 1130737 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38769
	I0318 12:50:56.460168 1130737 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:56.460705 1130737 main.go:141] libmachine: Using API Version  1
	I0318 12:50:56.460732 1130737 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:56.461137 1130737 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:56.461356 1130737 main.go:141] libmachine: (ha-328109-m03) Calling .DriverName
	I0318 12:50:56.461581 1130737 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 12:50:56.461609 1130737 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHHostname
	I0318 12:50:56.464522 1130737 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:50:56.464971 1130737 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-328109-m03 Clientid:01:52:54:00:13:6e:ac}
	I0318 12:50:56.465002 1130737 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:50:56.465170 1130737 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHPort
	I0318 12:50:56.465350 1130737 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHKeyPath
	I0318 12:50:56.465501 1130737 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHUsername
	I0318 12:50:56.465650 1130737 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m03/id_rsa Username:docker}
	I0318 12:50:56.554411 1130737 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 12:50:56.573242 1130737 kubeconfig.go:125] found "ha-328109" server: "https://192.168.39.254:8443"
	I0318 12:50:56.573272 1130737 api_server.go:166] Checking apiserver status ...
	I0318 12:50:56.573306 1130737 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 12:50:56.588693 1130737 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1490/cgroup
	W0318 12:50:56.598830 1130737 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1490/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 12:50:56.598896 1130737 ssh_runner.go:195] Run: ls
	I0318 12:50:56.603989 1130737 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 12:50:56.608982 1130737 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 12:50:56.609005 1130737 status.go:422] ha-328109-m03 apiserver status = Running (err=<nil>)
	I0318 12:50:56.609013 1130737 status.go:257] ha-328109-m03 status: &{Name:ha-328109-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 12:50:56.609028 1130737 status.go:255] checking status of ha-328109-m04 ...
	I0318 12:50:56.609328 1130737 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:56.609366 1130737 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:56.624588 1130737 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38195
	I0318 12:50:56.625046 1130737 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:56.625494 1130737 main.go:141] libmachine: Using API Version  1
	I0318 12:50:56.625516 1130737 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:56.625821 1130737 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:56.626027 1130737 main.go:141] libmachine: (ha-328109-m04) Calling .GetState
	I0318 12:50:56.627610 1130737 status.go:330] ha-328109-m04 host status = "Running" (err=<nil>)
	I0318 12:50:56.627638 1130737 host.go:66] Checking if "ha-328109-m04" exists ...
	I0318 12:50:56.628019 1130737 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:56.628074 1130737 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:56.642852 1130737 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41951
	I0318 12:50:56.643341 1130737 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:56.643840 1130737 main.go:141] libmachine: Using API Version  1
	I0318 12:50:56.643861 1130737 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:56.644178 1130737 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:56.644392 1130737 main.go:141] libmachine: (ha-328109-m04) Calling .GetIP
	I0318 12:50:56.647126 1130737 main.go:141] libmachine: (ha-328109-m04) DBG | domain ha-328109-m04 has defined MAC address 52:54:00:07:cc:71 in network mk-ha-328109
	I0318 12:50:56.647515 1130737 main.go:141] libmachine: (ha-328109-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cc:71", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:47:00 +0000 UTC Type:0 Mac:52:54:00:07:cc:71 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-328109-m04 Clientid:01:52:54:00:07:cc:71}
	I0318 12:50:56.647550 1130737 main.go:141] libmachine: (ha-328109-m04) DBG | domain ha-328109-m04 has defined IP address 192.168.39.48 and MAC address 52:54:00:07:cc:71 in network mk-ha-328109
	I0318 12:50:56.647700 1130737 host.go:66] Checking if "ha-328109-m04" exists ...
	I0318 12:50:56.648106 1130737 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:50:56.648153 1130737 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:50:56.663692 1130737 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35683
	I0318 12:50:56.664165 1130737 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:50:56.664720 1130737 main.go:141] libmachine: Using API Version  1
	I0318 12:50:56.664746 1130737 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:50:56.665102 1130737 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:50:56.665325 1130737 main.go:141] libmachine: (ha-328109-m04) Calling .DriverName
	I0318 12:50:56.665543 1130737 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 12:50:56.665567 1130737 main.go:141] libmachine: (ha-328109-m04) Calling .GetSSHHostname
	I0318 12:50:56.668637 1130737 main.go:141] libmachine: (ha-328109-m04) DBG | domain ha-328109-m04 has defined MAC address 52:54:00:07:cc:71 in network mk-ha-328109
	I0318 12:50:56.668999 1130737 main.go:141] libmachine: (ha-328109-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cc:71", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:47:00 +0000 UTC Type:0 Mac:52:54:00:07:cc:71 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-328109-m04 Clientid:01:52:54:00:07:cc:71}
	I0318 12:50:56.669025 1130737 main.go:141] libmachine: (ha-328109-m04) DBG | domain ha-328109-m04 has defined IP address 192.168.39.48 and MAC address 52:54:00:07:cc:71 in network mk-ha-328109
	I0318 12:50:56.669181 1130737 main.go:141] libmachine: (ha-328109-m04) Calling .GetSSHPort
	I0318 12:50:56.669376 1130737 main.go:141] libmachine: (ha-328109-m04) Calling .GetSSHKeyPath
	I0318 12:50:56.669545 1130737 main.go:141] libmachine: (ha-328109-m04) Calling .GetSSHUsername
	I0318 12:50:56.669699 1130737 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m04/id_rsa Username:docker}
	I0318 12:50:56.748427 1130737 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 12:50:56.768787 1130737 status.go:257] ha-328109-m04 status: &{Name:ha-328109-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-328109 status -v=7 --alsologtostderr: exit status 7 (668.779046ms)

                                                
                                                
-- stdout --
	ha-328109
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-328109-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-328109-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-328109-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 12:51:03.031741 1130820 out.go:291] Setting OutFile to fd 1 ...
	I0318 12:51:03.032063 1130820 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:51:03.032077 1130820 out.go:304] Setting ErrFile to fd 2...
	I0318 12:51:03.032083 1130820 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:51:03.032322 1130820 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
	I0318 12:51:03.032538 1130820 out.go:298] Setting JSON to false
	I0318 12:51:03.032577 1130820 mustload.go:65] Loading cluster: ha-328109
	I0318 12:51:03.032625 1130820 notify.go:220] Checking for updates...
	I0318 12:51:03.032999 1130820 config.go:182] Loaded profile config "ha-328109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:51:03.033025 1130820 status.go:255] checking status of ha-328109 ...
	I0318 12:51:03.033538 1130820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:51:03.033602 1130820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:51:03.051443 1130820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42465
	I0318 12:51:03.051780 1130820 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:51:03.052536 1130820 main.go:141] libmachine: Using API Version  1
	I0318 12:51:03.052567 1130820 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:51:03.052887 1130820 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:51:03.053073 1130820 main.go:141] libmachine: (ha-328109) Calling .GetState
	I0318 12:51:03.054530 1130820 status.go:330] ha-328109 host status = "Running" (err=<nil>)
	I0318 12:51:03.054550 1130820 host.go:66] Checking if "ha-328109" exists ...
	I0318 12:51:03.054839 1130820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:51:03.054875 1130820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:51:03.069624 1130820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41725
	I0318 12:51:03.070035 1130820 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:51:03.071201 1130820 main.go:141] libmachine: Using API Version  1
	I0318 12:51:03.071225 1130820 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:51:03.071572 1130820 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:51:03.071787 1130820 main.go:141] libmachine: (ha-328109) Calling .GetIP
	I0318 12:51:03.074359 1130820 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:51:03.074744 1130820 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:51:03.074764 1130820 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:51:03.074878 1130820 host.go:66] Checking if "ha-328109" exists ...
	I0318 12:51:03.075173 1130820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:51:03.075210 1130820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:51:03.090536 1130820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46081
	I0318 12:51:03.090913 1130820 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:51:03.091447 1130820 main.go:141] libmachine: Using API Version  1
	I0318 12:51:03.091469 1130820 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:51:03.091779 1130820 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:51:03.091954 1130820 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:51:03.092144 1130820 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 12:51:03.092185 1130820 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:51:03.094527 1130820 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:51:03.094966 1130820 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:51:03.094999 1130820 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:51:03.095132 1130820 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:51:03.095300 1130820 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:51:03.095458 1130820 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:51:03.095589 1130820 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109/id_rsa Username:docker}
	I0318 12:51:03.181066 1130820 ssh_runner.go:195] Run: systemctl --version
	I0318 12:51:03.188839 1130820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 12:51:03.211533 1130820 kubeconfig.go:125] found "ha-328109" server: "https://192.168.39.254:8443"
	I0318 12:51:03.211562 1130820 api_server.go:166] Checking apiserver status ...
	I0318 12:51:03.211602 1130820 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 12:51:03.226917 1130820 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1160/cgroup
	W0318 12:51:03.239372 1130820 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1160/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 12:51:03.239417 1130820 ssh_runner.go:195] Run: ls
	I0318 12:51:03.244828 1130820 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 12:51:03.251594 1130820 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 12:51:03.251622 1130820 status.go:422] ha-328109 apiserver status = Running (err=<nil>)
	I0318 12:51:03.251636 1130820 status.go:257] ha-328109 status: &{Name:ha-328109 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 12:51:03.251653 1130820 status.go:255] checking status of ha-328109-m02 ...
	I0318 12:51:03.251983 1130820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:51:03.252018 1130820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:51:03.268663 1130820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44271
	I0318 12:51:03.269173 1130820 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:51:03.269665 1130820 main.go:141] libmachine: Using API Version  1
	I0318 12:51:03.269686 1130820 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:51:03.270030 1130820 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:51:03.270230 1130820 main.go:141] libmachine: (ha-328109-m02) Calling .GetState
	I0318 12:51:03.271860 1130820 status.go:330] ha-328109-m02 host status = "Stopped" (err=<nil>)
	I0318 12:51:03.271876 1130820 status.go:343] host is not running, skipping remaining checks
	I0318 12:51:03.271884 1130820 status.go:257] ha-328109-m02 status: &{Name:ha-328109-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 12:51:03.271904 1130820 status.go:255] checking status of ha-328109-m03 ...
	I0318 12:51:03.272216 1130820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:51:03.272259 1130820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:51:03.286878 1130820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46181
	I0318 12:51:03.287276 1130820 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:51:03.287792 1130820 main.go:141] libmachine: Using API Version  1
	I0318 12:51:03.287815 1130820 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:51:03.288143 1130820 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:51:03.288321 1130820 main.go:141] libmachine: (ha-328109-m03) Calling .GetState
	I0318 12:51:03.289748 1130820 status.go:330] ha-328109-m03 host status = "Running" (err=<nil>)
	I0318 12:51:03.289764 1130820 host.go:66] Checking if "ha-328109-m03" exists ...
	I0318 12:51:03.290048 1130820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:51:03.290081 1130820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:51:03.304918 1130820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38179
	I0318 12:51:03.305359 1130820 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:51:03.305879 1130820 main.go:141] libmachine: Using API Version  1
	I0318 12:51:03.305899 1130820 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:51:03.306211 1130820 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:51:03.306383 1130820 main.go:141] libmachine: (ha-328109-m03) Calling .GetIP
	I0318 12:51:03.309054 1130820 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:51:03.309472 1130820 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-328109-m03 Clientid:01:52:54:00:13:6e:ac}
	I0318 12:51:03.309501 1130820 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:51:03.309649 1130820 host.go:66] Checking if "ha-328109-m03" exists ...
	I0318 12:51:03.309943 1130820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:51:03.309978 1130820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:51:03.324487 1130820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45859
	I0318 12:51:03.325024 1130820 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:51:03.325515 1130820 main.go:141] libmachine: Using API Version  1
	I0318 12:51:03.325543 1130820 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:51:03.325926 1130820 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:51:03.326114 1130820 main.go:141] libmachine: (ha-328109-m03) Calling .DriverName
	I0318 12:51:03.326315 1130820 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 12:51:03.326340 1130820 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHHostname
	I0318 12:51:03.328972 1130820 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:51:03.329459 1130820 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-328109-m03 Clientid:01:52:54:00:13:6e:ac}
	I0318 12:51:03.329487 1130820 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:51:03.329621 1130820 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHPort
	I0318 12:51:03.329813 1130820 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHKeyPath
	I0318 12:51:03.329970 1130820 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHUsername
	I0318 12:51:03.330092 1130820 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m03/id_rsa Username:docker}
	I0318 12:51:03.417053 1130820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 12:51:03.438402 1130820 kubeconfig.go:125] found "ha-328109" server: "https://192.168.39.254:8443"
	I0318 12:51:03.438442 1130820 api_server.go:166] Checking apiserver status ...
	I0318 12:51:03.438490 1130820 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 12:51:03.456148 1130820 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1490/cgroup
	W0318 12:51:03.473488 1130820 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1490/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 12:51:03.473549 1130820 ssh_runner.go:195] Run: ls
	I0318 12:51:03.478500 1130820 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 12:51:03.483774 1130820 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 12:51:03.483801 1130820 status.go:422] ha-328109-m03 apiserver status = Running (err=<nil>)
	I0318 12:51:03.483812 1130820 status.go:257] ha-328109-m03 status: &{Name:ha-328109-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 12:51:03.483862 1130820 status.go:255] checking status of ha-328109-m04 ...
	I0318 12:51:03.484196 1130820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:51:03.484234 1130820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:51:03.499795 1130820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43671
	I0318 12:51:03.500246 1130820 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:51:03.500784 1130820 main.go:141] libmachine: Using API Version  1
	I0318 12:51:03.500805 1130820 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:51:03.501138 1130820 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:51:03.501344 1130820 main.go:141] libmachine: (ha-328109-m04) Calling .GetState
	I0318 12:51:03.502894 1130820 status.go:330] ha-328109-m04 host status = "Running" (err=<nil>)
	I0318 12:51:03.502916 1130820 host.go:66] Checking if "ha-328109-m04" exists ...
	I0318 12:51:03.503254 1130820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:51:03.503294 1130820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:51:03.518441 1130820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44747
	I0318 12:51:03.518820 1130820 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:51:03.519296 1130820 main.go:141] libmachine: Using API Version  1
	I0318 12:51:03.519324 1130820 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:51:03.519666 1130820 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:51:03.519869 1130820 main.go:141] libmachine: (ha-328109-m04) Calling .GetIP
	I0318 12:51:03.522557 1130820 main.go:141] libmachine: (ha-328109-m04) DBG | domain ha-328109-m04 has defined MAC address 52:54:00:07:cc:71 in network mk-ha-328109
	I0318 12:51:03.522977 1130820 main.go:141] libmachine: (ha-328109-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cc:71", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:47:00 +0000 UTC Type:0 Mac:52:54:00:07:cc:71 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-328109-m04 Clientid:01:52:54:00:07:cc:71}
	I0318 12:51:03.522999 1130820 main.go:141] libmachine: (ha-328109-m04) DBG | domain ha-328109-m04 has defined IP address 192.168.39.48 and MAC address 52:54:00:07:cc:71 in network mk-ha-328109
	I0318 12:51:03.523138 1130820 host.go:66] Checking if "ha-328109-m04" exists ...
	I0318 12:51:03.523429 1130820 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:51:03.523463 1130820 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:51:03.537850 1130820 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33681
	I0318 12:51:03.538297 1130820 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:51:03.538773 1130820 main.go:141] libmachine: Using API Version  1
	I0318 12:51:03.538795 1130820 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:51:03.539090 1130820 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:51:03.539309 1130820 main.go:141] libmachine: (ha-328109-m04) Calling .DriverName
	I0318 12:51:03.539498 1130820 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 12:51:03.539519 1130820 main.go:141] libmachine: (ha-328109-m04) Calling .GetSSHHostname
	I0318 12:51:03.542425 1130820 main.go:141] libmachine: (ha-328109-m04) DBG | domain ha-328109-m04 has defined MAC address 52:54:00:07:cc:71 in network mk-ha-328109
	I0318 12:51:03.542897 1130820 main.go:141] libmachine: (ha-328109-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cc:71", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:47:00 +0000 UTC Type:0 Mac:52:54:00:07:cc:71 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-328109-m04 Clientid:01:52:54:00:07:cc:71}
	I0318 12:51:03.542926 1130820 main.go:141] libmachine: (ha-328109-m04) DBG | domain ha-328109-m04 has defined IP address 192.168.39.48 and MAC address 52:54:00:07:cc:71 in network mk-ha-328109
	I0318 12:51:03.543093 1130820 main.go:141] libmachine: (ha-328109-m04) Calling .GetSSHPort
	I0318 12:51:03.543252 1130820 main.go:141] libmachine: (ha-328109-m04) Calling .GetSSHKeyPath
	I0318 12:51:03.543388 1130820 main.go:141] libmachine: (ha-328109-m04) Calling .GetSSHUsername
	I0318 12:51:03.543571 1130820 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m04/id_rsa Username:docker}
	I0318 12:51:03.620314 1130820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 12:51:03.636259 1130820 status.go:257] ha-328109-m04 status: &{Name:ha-328109-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-328109 status -v=7 --alsologtostderr" : exit status 7
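The stderr above shows what the status command probes on each node: an SSH session per VM, "sudo systemctl is-active --quiet service kubelet" for the kubelet, and an HTTPS GET against the shared endpoint https://192.168.39.254:8443/healthz for the apiserver. A minimal standalone sketch of that last probe follows (illustrative only: the endpoint is copied from the log, while the function name and the skipped TLS verification are assumptions, not minikube's actual api_server.go implementation):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz mirrors the probe logged at api_server.go:253/279:
// GET <endpoint>/healthz and treat an HTTP 200 response as healthy
// (the log shows the body "ok").
func checkHealthz(endpoint string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver certificate is issued by minikubeCA, which this
		// sketch does not load, so certificate verification is skipped here.
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}

func main() {
	// VIP taken from the log above.
	if err := checkHealthz("https://192.168.39.254:8443"); err != nil {
		fmt.Println("apiserver unhealthy:", err)
		return
	}
	fmt.Println("apiserver healthy")
}

Both running control planes answer 200/ok at that endpoint in the log, so the non-zero exit status appears to come from the stopped ha-328109-m02 host rather than from the apiserver probe.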
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-328109 -n ha-328109
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-328109 logs -n 25: (1.588023341s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-328109 ssh -n                                                                 | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-328109 cp ha-328109-m03:/home/docker/cp-test.txt                              | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109:/home/docker/cp-test_ha-328109-m03_ha-328109.txt                       |           |         |         |                     |                     |
	| ssh     | ha-328109 ssh -n                                                                 | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-328109 ssh -n ha-328109 sudo cat                                              | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | /home/docker/cp-test_ha-328109-m03_ha-328109.txt                                 |           |         |         |                     |                     |
	| cp      | ha-328109 cp ha-328109-m03:/home/docker/cp-test.txt                              | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109-m02:/home/docker/cp-test_ha-328109-m03_ha-328109-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-328109 ssh -n                                                                 | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-328109 ssh -n ha-328109-m02 sudo cat                                          | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | /home/docker/cp-test_ha-328109-m03_ha-328109-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-328109 cp ha-328109-m03:/home/docker/cp-test.txt                              | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109-m04:/home/docker/cp-test_ha-328109-m03_ha-328109-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-328109 ssh -n                                                                 | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-328109 ssh -n ha-328109-m04 sudo cat                                          | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | /home/docker/cp-test_ha-328109-m03_ha-328109-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-328109 cp testdata/cp-test.txt                                                | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-328109 ssh -n                                                                 | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-328109 cp ha-328109-m04:/home/docker/cp-test.txt                              | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1988805859/001/cp-test_ha-328109-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-328109 ssh -n                                                                 | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-328109 cp ha-328109-m04:/home/docker/cp-test.txt                              | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109:/home/docker/cp-test_ha-328109-m04_ha-328109.txt                       |           |         |         |                     |                     |
	| ssh     | ha-328109 ssh -n                                                                 | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-328109 ssh -n ha-328109 sudo cat                                              | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | /home/docker/cp-test_ha-328109-m04_ha-328109.txt                                 |           |         |         |                     |                     |
	| cp      | ha-328109 cp ha-328109-m04:/home/docker/cp-test.txt                              | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109-m02:/home/docker/cp-test_ha-328109-m04_ha-328109-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-328109 ssh -n                                                                 | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-328109 ssh -n ha-328109-m02 sudo cat                                          | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | /home/docker/cp-test_ha-328109-m04_ha-328109-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-328109 cp ha-328109-m04:/home/docker/cp-test.txt                              | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109-m03:/home/docker/cp-test_ha-328109-m04_ha-328109-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-328109 ssh -n                                                                 | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-328109 ssh -n ha-328109-m03 sudo cat                                          | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | /home/docker/cp-test_ha-328109-m04_ha-328109-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-328109 node stop m02 -v=7                                                     | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-328109 node start m02 -v=7                                                    | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:50 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 12:42:32
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 12:42:32.893274 1125718 out.go:291] Setting OutFile to fd 1 ...
	I0318 12:42:32.893417 1125718 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:42:32.893429 1125718 out.go:304] Setting ErrFile to fd 2...
	I0318 12:42:32.893436 1125718 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:42:32.893642 1125718 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
	I0318 12:42:32.894210 1125718 out.go:298] Setting JSON to false
	I0318 12:42:32.895115 1125718 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":15900,"bootTime":1710749853,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 12:42:32.895185 1125718 start.go:139] virtualization: kvm guest
	I0318 12:42:32.897324 1125718 out.go:177] * [ha-328109] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 12:42:32.899161 1125718 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 12:42:32.899200 1125718 notify.go:220] Checking for updates...
	I0318 12:42:32.900581 1125718 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 12:42:32.902066 1125718 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 12:42:32.903366 1125718 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18429-1106816/.minikube
	I0318 12:42:32.904691 1125718 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 12:42:32.906034 1125718 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 12:42:32.907495 1125718 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 12:42:32.941434 1125718 out.go:177] * Using the kvm2 driver based on user configuration
	I0318 12:42:32.942748 1125718 start.go:297] selected driver: kvm2
	I0318 12:42:32.942769 1125718 start.go:901] validating driver "kvm2" against <nil>
	I0318 12:42:32.942782 1125718 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 12:42:32.943513 1125718 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 12:42:32.943590 1125718 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18429-1106816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 12:42:32.958284 1125718 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 12:42:32.958383 1125718 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 12:42:32.958600 1125718 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 12:42:32.958650 1125718 cni.go:84] Creating CNI manager for ""
	I0318 12:42:32.958664 1125718 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0318 12:42:32.958669 1125718 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0318 12:42:32.958731 1125718 start.go:340] cluster config:
	{Name:ha-328109 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-328109 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 12:42:32.958820 1125718 iso.go:125] acquiring lock: {Name:mke5f9989ad60de6f54f25c411af7da9f3932a4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 12:42:32.960621 1125718 out.go:177] * Starting "ha-328109" primary control-plane node in "ha-328109" cluster
	I0318 12:42:32.961853 1125718 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 12:42:32.961885 1125718 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0318 12:42:32.961894 1125718 cache.go:56] Caching tarball of preloaded images
	I0318 12:42:32.961983 1125718 preload.go:173] Found /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 12:42:32.961996 1125718 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0318 12:42:32.962310 1125718 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/config.json ...
	I0318 12:42:32.962340 1125718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/config.json: {Name:mk6731ec1f8b636473e57fa4c832d7a65e6cf7d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:42:32.962505 1125718 start.go:360] acquireMachinesLock for ha-328109: {Name:mk0b1a2e71faf079d0c16c4e1393bdff17be3dfd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 12:42:32.962541 1125718 start.go:364] duration metric: took 19.07µs to acquireMachinesLock for "ha-328109"
	I0318 12:42:32.962564 1125718 start.go:93] Provisioning new machine with config: &{Name:ha-328109 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-328109 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 12:42:32.962628 1125718 start.go:125] createHost starting for "" (driver="kvm2")
	I0318 12:42:32.964262 1125718 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 12:42:32.964426 1125718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:42:32.964476 1125718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:42:32.978932 1125718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34647
	I0318 12:42:32.979368 1125718 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:42:32.979906 1125718 main.go:141] libmachine: Using API Version  1
	I0318 12:42:32.979928 1125718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:42:32.980370 1125718 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:42:32.980585 1125718 main.go:141] libmachine: (ha-328109) Calling .GetMachineName
	I0318 12:42:32.980747 1125718 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:42:32.980877 1125718 start.go:159] libmachine.API.Create for "ha-328109" (driver="kvm2")
	I0318 12:42:32.980908 1125718 client.go:168] LocalClient.Create starting
	I0318 12:42:32.980948 1125718 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem
	I0318 12:42:32.980983 1125718 main.go:141] libmachine: Decoding PEM data...
	I0318 12:42:32.980999 1125718 main.go:141] libmachine: Parsing certificate...
	I0318 12:42:32.981065 1125718 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem
	I0318 12:42:32.981083 1125718 main.go:141] libmachine: Decoding PEM data...
	I0318 12:42:32.981095 1125718 main.go:141] libmachine: Parsing certificate...
	I0318 12:42:32.981108 1125718 main.go:141] libmachine: Running pre-create checks...
	I0318 12:42:32.981122 1125718 main.go:141] libmachine: (ha-328109) Calling .PreCreateCheck
	I0318 12:42:32.981478 1125718 main.go:141] libmachine: (ha-328109) Calling .GetConfigRaw
	I0318 12:42:32.981854 1125718 main.go:141] libmachine: Creating machine...
	I0318 12:42:32.981868 1125718 main.go:141] libmachine: (ha-328109) Calling .Create
	I0318 12:42:32.981979 1125718 main.go:141] libmachine: (ha-328109) Creating KVM machine...
	I0318 12:42:32.983166 1125718 main.go:141] libmachine: (ha-328109) DBG | found existing default KVM network
	I0318 12:42:32.983871 1125718 main.go:141] libmachine: (ha-328109) DBG | I0318 12:42:32.983700 1125741 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ad0}
	I0318 12:42:32.983923 1125718 main.go:141] libmachine: (ha-328109) DBG | created network xml: 
	I0318 12:42:32.983949 1125718 main.go:141] libmachine: (ha-328109) DBG | <network>
	I0318 12:42:32.983964 1125718 main.go:141] libmachine: (ha-328109) DBG |   <name>mk-ha-328109</name>
	I0318 12:42:32.983973 1125718 main.go:141] libmachine: (ha-328109) DBG |   <dns enable='no'/>
	I0318 12:42:32.983986 1125718 main.go:141] libmachine: (ha-328109) DBG |   
	I0318 12:42:32.983999 1125718 main.go:141] libmachine: (ha-328109) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0318 12:42:32.984011 1125718 main.go:141] libmachine: (ha-328109) DBG |     <dhcp>
	I0318 12:42:32.984028 1125718 main.go:141] libmachine: (ha-328109) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0318 12:42:32.984039 1125718 main.go:141] libmachine: (ha-328109) DBG |     </dhcp>
	I0318 12:42:32.984050 1125718 main.go:141] libmachine: (ha-328109) DBG |   </ip>
	I0318 12:42:32.984059 1125718 main.go:141] libmachine: (ha-328109) DBG |   
	I0318 12:42:32.984066 1125718 main.go:141] libmachine: (ha-328109) DBG | </network>
	I0318 12:42:32.984072 1125718 main.go:141] libmachine: (ha-328109) DBG | 
	I0318 12:42:32.989522 1125718 main.go:141] libmachine: (ha-328109) DBG | trying to create private KVM network mk-ha-328109 192.168.39.0/24...
	I0318 12:42:33.054160 1125718 main.go:141] libmachine: (ha-328109) DBG | private KVM network mk-ha-328109 192.168.39.0/24 created
	I0318 12:42:33.054199 1125718 main.go:141] libmachine: (ha-328109) Setting up store path in /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109 ...
	I0318 12:42:33.054217 1125718 main.go:141] libmachine: (ha-328109) DBG | I0318 12:42:33.054107 1125741 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18429-1106816/.minikube
	I0318 12:42:33.054249 1125718 main.go:141] libmachine: (ha-328109) Building disk image from file:///home/jenkins/minikube-integration/18429-1106816/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso
	I0318 12:42:33.054268 1125718 main.go:141] libmachine: (ha-328109) Downloading /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18429-1106816/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0318 12:42:33.312864 1125718 main.go:141] libmachine: (ha-328109) DBG | I0318 12:42:33.312752 1125741 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109/id_rsa...
	I0318 12:42:33.446094 1125718 main.go:141] libmachine: (ha-328109) DBG | I0318 12:42:33.445958 1125741 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109/ha-328109.rawdisk...
	I0318 12:42:33.446128 1125718 main.go:141] libmachine: (ha-328109) DBG | Writing magic tar header
	I0318 12:42:33.446138 1125718 main.go:141] libmachine: (ha-328109) DBG | Writing SSH key tar header
	I0318 12:42:33.446176 1125718 main.go:141] libmachine: (ha-328109) DBG | I0318 12:42:33.446142 1125741 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109 ...
	I0318 12:42:33.446270 1125718 main.go:141] libmachine: (ha-328109) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109
	I0318 12:42:33.446306 1125718 main.go:141] libmachine: (ha-328109) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines
	I0318 12:42:33.446332 1125718 main.go:141] libmachine: (ha-328109) Setting executable bit set on /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109 (perms=drwx------)
	I0318 12:42:33.446350 1125718 main.go:141] libmachine: (ha-328109) Setting executable bit set on /home/jenkins/minikube-integration/18429-1106816/.minikube/machines (perms=drwxr-xr-x)
	I0318 12:42:33.446362 1125718 main.go:141] libmachine: (ha-328109) Setting executable bit set on /home/jenkins/minikube-integration/18429-1106816/.minikube (perms=drwxr-xr-x)
	I0318 12:42:33.446372 1125718 main.go:141] libmachine: (ha-328109) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18429-1106816/.minikube
	I0318 12:42:33.446381 1125718 main.go:141] libmachine: (ha-328109) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18429-1106816
	I0318 12:42:33.446395 1125718 main.go:141] libmachine: (ha-328109) Setting executable bit set on /home/jenkins/minikube-integration/18429-1106816 (perms=drwxrwxr-x)
	I0318 12:42:33.446409 1125718 main.go:141] libmachine: (ha-328109) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0318 12:42:33.446430 1125718 main.go:141] libmachine: (ha-328109) DBG | Checking permissions on dir: /home/jenkins
	I0318 12:42:33.446442 1125718 main.go:141] libmachine: (ha-328109) DBG | Checking permissions on dir: /home
	I0318 12:42:33.446455 1125718 main.go:141] libmachine: (ha-328109) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0318 12:42:33.446471 1125718 main.go:141] libmachine: (ha-328109) DBG | Skipping /home - not owner
	I0318 12:42:33.446483 1125718 main.go:141] libmachine: (ha-328109) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0318 12:42:33.446495 1125718 main.go:141] libmachine: (ha-328109) Creating domain...
	I0318 12:42:33.447565 1125718 main.go:141] libmachine: (ha-328109) define libvirt domain using xml: 
	I0318 12:42:33.447591 1125718 main.go:141] libmachine: (ha-328109) <domain type='kvm'>
	I0318 12:42:33.447617 1125718 main.go:141] libmachine: (ha-328109)   <name>ha-328109</name>
	I0318 12:42:33.447635 1125718 main.go:141] libmachine: (ha-328109)   <memory unit='MiB'>2200</memory>
	I0318 12:42:33.447643 1125718 main.go:141] libmachine: (ha-328109)   <vcpu>2</vcpu>
	I0318 12:42:33.447649 1125718 main.go:141] libmachine: (ha-328109)   <features>
	I0318 12:42:33.447658 1125718 main.go:141] libmachine: (ha-328109)     <acpi/>
	I0318 12:42:33.447668 1125718 main.go:141] libmachine: (ha-328109)     <apic/>
	I0318 12:42:33.447675 1125718 main.go:141] libmachine: (ha-328109)     <pae/>
	I0318 12:42:33.447686 1125718 main.go:141] libmachine: (ha-328109)     
	I0318 12:42:33.447698 1125718 main.go:141] libmachine: (ha-328109)   </features>
	I0318 12:42:33.447712 1125718 main.go:141] libmachine: (ha-328109)   <cpu mode='host-passthrough'>
	I0318 12:42:33.447723 1125718 main.go:141] libmachine: (ha-328109)   
	I0318 12:42:33.447732 1125718 main.go:141] libmachine: (ha-328109)   </cpu>
	I0318 12:42:33.447741 1125718 main.go:141] libmachine: (ha-328109)   <os>
	I0318 12:42:33.447763 1125718 main.go:141] libmachine: (ha-328109)     <type>hvm</type>
	I0318 12:42:33.447773 1125718 main.go:141] libmachine: (ha-328109)     <boot dev='cdrom'/>
	I0318 12:42:33.447786 1125718 main.go:141] libmachine: (ha-328109)     <boot dev='hd'/>
	I0318 12:42:33.447801 1125718 main.go:141] libmachine: (ha-328109)     <bootmenu enable='no'/>
	I0318 12:42:33.447810 1125718 main.go:141] libmachine: (ha-328109)   </os>
	I0318 12:42:33.447818 1125718 main.go:141] libmachine: (ha-328109)   <devices>
	I0318 12:42:33.447828 1125718 main.go:141] libmachine: (ha-328109)     <disk type='file' device='cdrom'>
	I0318 12:42:33.447840 1125718 main.go:141] libmachine: (ha-328109)       <source file='/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109/boot2docker.iso'/>
	I0318 12:42:33.447850 1125718 main.go:141] libmachine: (ha-328109)       <target dev='hdc' bus='scsi'/>
	I0318 12:42:33.447905 1125718 main.go:141] libmachine: (ha-328109)       <readonly/>
	I0318 12:42:33.447937 1125718 main.go:141] libmachine: (ha-328109)     </disk>
	I0318 12:42:33.447950 1125718 main.go:141] libmachine: (ha-328109)     <disk type='file' device='disk'>
	I0318 12:42:33.447959 1125718 main.go:141] libmachine: (ha-328109)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0318 12:42:33.447973 1125718 main.go:141] libmachine: (ha-328109)       <source file='/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109/ha-328109.rawdisk'/>
	I0318 12:42:33.447981 1125718 main.go:141] libmachine: (ha-328109)       <target dev='hda' bus='virtio'/>
	I0318 12:42:33.447989 1125718 main.go:141] libmachine: (ha-328109)     </disk>
	I0318 12:42:33.448002 1125718 main.go:141] libmachine: (ha-328109)     <interface type='network'>
	I0318 12:42:33.448015 1125718 main.go:141] libmachine: (ha-328109)       <source network='mk-ha-328109'/>
	I0318 12:42:33.448026 1125718 main.go:141] libmachine: (ha-328109)       <model type='virtio'/>
	I0318 12:42:33.448042 1125718 main.go:141] libmachine: (ha-328109)     </interface>
	I0318 12:42:33.448061 1125718 main.go:141] libmachine: (ha-328109)     <interface type='network'>
	I0318 12:42:33.448074 1125718 main.go:141] libmachine: (ha-328109)       <source network='default'/>
	I0318 12:42:33.448084 1125718 main.go:141] libmachine: (ha-328109)       <model type='virtio'/>
	I0318 12:42:33.448093 1125718 main.go:141] libmachine: (ha-328109)     </interface>
	I0318 12:42:33.448103 1125718 main.go:141] libmachine: (ha-328109)     <serial type='pty'>
	I0318 12:42:33.448115 1125718 main.go:141] libmachine: (ha-328109)       <target port='0'/>
	I0318 12:42:33.448125 1125718 main.go:141] libmachine: (ha-328109)     </serial>
	I0318 12:42:33.448154 1125718 main.go:141] libmachine: (ha-328109)     <console type='pty'>
	I0318 12:42:33.448188 1125718 main.go:141] libmachine: (ha-328109)       <target type='serial' port='0'/>
	I0318 12:42:33.448202 1125718 main.go:141] libmachine: (ha-328109)     </console>
	I0318 12:42:33.448214 1125718 main.go:141] libmachine: (ha-328109)     <rng model='virtio'>
	I0318 12:42:33.448228 1125718 main.go:141] libmachine: (ha-328109)       <backend model='random'>/dev/random</backend>
	I0318 12:42:33.448238 1125718 main.go:141] libmachine: (ha-328109)     </rng>
	I0318 12:42:33.448246 1125718 main.go:141] libmachine: (ha-328109)     
	I0318 12:42:33.448255 1125718 main.go:141] libmachine: (ha-328109)     
	I0318 12:42:33.448261 1125718 main.go:141] libmachine: (ha-328109)   </devices>
	I0318 12:42:33.448267 1125718 main.go:141] libmachine: (ha-328109) </domain>
	I0318 12:42:33.448280 1125718 main.go:141] libmachine: (ha-328109) 
	I0318 12:42:33.452728 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:22:36:b5 in network default
	I0318 12:42:33.453339 1125718 main.go:141] libmachine: (ha-328109) Ensuring networks are active...
	I0318 12:42:33.453363 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:33.454088 1125718 main.go:141] libmachine: (ha-328109) Ensuring network default is active
	I0318 12:42:33.454456 1125718 main.go:141] libmachine: (ha-328109) Ensuring network mk-ha-328109 is active
	I0318 12:42:33.454922 1125718 main.go:141] libmachine: (ha-328109) Getting domain xml...
	I0318 12:42:33.455681 1125718 main.go:141] libmachine: (ha-328109) Creating domain...
	I0318 12:42:34.615763 1125718 main.go:141] libmachine: (ha-328109) Waiting to get IP...
	I0318 12:42:34.616795 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:34.617216 1125718 main.go:141] libmachine: (ha-328109) DBG | unable to find current IP address of domain ha-328109 in network mk-ha-328109
	I0318 12:42:34.617257 1125718 main.go:141] libmachine: (ha-328109) DBG | I0318 12:42:34.617201 1125741 retry.go:31] will retry after 279.162867ms: waiting for machine to come up
	I0318 12:42:34.897719 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:34.898195 1125718 main.go:141] libmachine: (ha-328109) DBG | unable to find current IP address of domain ha-328109 in network mk-ha-328109
	I0318 12:42:34.898218 1125718 main.go:141] libmachine: (ha-328109) DBG | I0318 12:42:34.898166 1125741 retry.go:31] will retry after 243.384633ms: waiting for machine to come up
	I0318 12:42:35.143663 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:35.144109 1125718 main.go:141] libmachine: (ha-328109) DBG | unable to find current IP address of domain ha-328109 in network mk-ha-328109
	I0318 12:42:35.144136 1125718 main.go:141] libmachine: (ha-328109) DBG | I0318 12:42:35.144064 1125741 retry.go:31] will retry after 336.699426ms: waiting for machine to come up
	I0318 12:42:35.482738 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:35.483145 1125718 main.go:141] libmachine: (ha-328109) DBG | unable to find current IP address of domain ha-328109 in network mk-ha-328109
	I0318 12:42:35.483175 1125718 main.go:141] libmachine: (ha-328109) DBG | I0318 12:42:35.483112 1125741 retry.go:31] will retry after 562.433686ms: waiting for machine to come up
	I0318 12:42:36.046830 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:36.047255 1125718 main.go:141] libmachine: (ha-328109) DBG | unable to find current IP address of domain ha-328109 in network mk-ha-328109
	I0318 12:42:36.047286 1125718 main.go:141] libmachine: (ha-328109) DBG | I0318 12:42:36.047199 1125741 retry.go:31] will retry after 503.93378ms: waiting for machine to come up
	I0318 12:42:36.553139 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:36.554216 1125718 main.go:141] libmachine: (ha-328109) DBG | unable to find current IP address of domain ha-328109 in network mk-ha-328109
	I0318 12:42:36.554265 1125718 main.go:141] libmachine: (ha-328109) DBG | I0318 12:42:36.554160 1125741 retry.go:31] will retry after 939.355373ms: waiting for machine to come up
	I0318 12:42:37.494846 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:37.495264 1125718 main.go:141] libmachine: (ha-328109) DBG | unable to find current IP address of domain ha-328109 in network mk-ha-328109
	I0318 12:42:37.495312 1125718 main.go:141] libmachine: (ha-328109) DBG | I0318 12:42:37.495221 1125741 retry.go:31] will retry after 1.103667704s: waiting for machine to come up
	I0318 12:42:38.599992 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:38.600441 1125718 main.go:141] libmachine: (ha-328109) DBG | unable to find current IP address of domain ha-328109 in network mk-ha-328109
	I0318 12:42:38.600467 1125718 main.go:141] libmachine: (ha-328109) DBG | I0318 12:42:38.600389 1125741 retry.go:31] will retry after 1.276924143s: waiting for machine to come up
	I0318 12:42:39.878845 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:39.879292 1125718 main.go:141] libmachine: (ha-328109) DBG | unable to find current IP address of domain ha-328109 in network mk-ha-328109
	I0318 12:42:39.879325 1125718 main.go:141] libmachine: (ha-328109) DBG | I0318 12:42:39.879245 1125741 retry.go:31] will retry after 1.648278378s: waiting for machine to come up
	I0318 12:42:41.530396 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:41.530841 1125718 main.go:141] libmachine: (ha-328109) DBG | unable to find current IP address of domain ha-328109 in network mk-ha-328109
	I0318 12:42:41.530871 1125718 main.go:141] libmachine: (ha-328109) DBG | I0318 12:42:41.530780 1125741 retry.go:31] will retry after 1.745965009s: waiting for machine to come up
	I0318 12:42:43.278652 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:43.279091 1125718 main.go:141] libmachine: (ha-328109) DBG | unable to find current IP address of domain ha-328109 in network mk-ha-328109
	I0318 12:42:43.279137 1125718 main.go:141] libmachine: (ha-328109) DBG | I0318 12:42:43.279052 1125741 retry.go:31] will retry after 2.777428365s: waiting for machine to come up
	I0318 12:42:46.058676 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:46.059168 1125718 main.go:141] libmachine: (ha-328109) DBG | unable to find current IP address of domain ha-328109 in network mk-ha-328109
	I0318 12:42:46.059194 1125718 main.go:141] libmachine: (ha-328109) DBG | I0318 12:42:46.059133 1125741 retry.go:31] will retry after 3.40869009s: waiting for machine to come up
	I0318 12:42:49.469432 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:49.469877 1125718 main.go:141] libmachine: (ha-328109) DBG | unable to find current IP address of domain ha-328109 in network mk-ha-328109
	I0318 12:42:49.469989 1125718 main.go:141] libmachine: (ha-328109) DBG | I0318 12:42:49.469870 1125741 retry.go:31] will retry after 3.566417297s: waiting for machine to come up
	I0318 12:42:53.037358 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:53.037800 1125718 main.go:141] libmachine: (ha-328109) DBG | unable to find current IP address of domain ha-328109 in network mk-ha-328109
	I0318 12:42:53.037841 1125718 main.go:141] libmachine: (ha-328109) DBG | I0318 12:42:53.037762 1125741 retry.go:31] will retry after 5.033131353s: waiting for machine to come up
	I0318 12:42:58.072520 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:58.072957 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has current primary IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:58.072972 1125718 main.go:141] libmachine: (ha-328109) Found IP for machine: 192.168.39.253
	I0318 12:42:58.072980 1125718 main.go:141] libmachine: (ha-328109) Reserving static IP address...
	I0318 12:42:58.073514 1125718 main.go:141] libmachine: (ha-328109) DBG | unable to find host DHCP lease matching {name: "ha-328109", mac: "52:54:00:53:6b:a9", ip: "192.168.39.253"} in network mk-ha-328109
	I0318 12:42:58.145837 1125718 main.go:141] libmachine: (ha-328109) DBG | Getting to WaitForSSH function...
	I0318 12:42:58.145872 1125718 main.go:141] libmachine: (ha-328109) Reserved static IP address: 192.168.39.253
	I0318 12:42:58.145885 1125718 main.go:141] libmachine: (ha-328109) Waiting for SSH to be available...
	I0318 12:42:58.148648 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:58.149051 1125718 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:minikube Clientid:01:52:54:00:53:6b:a9}
	I0318 12:42:58.149075 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:58.149216 1125718 main.go:141] libmachine: (ha-328109) DBG | Using SSH client type: external
	I0318 12:42:58.149241 1125718 main.go:141] libmachine: (ha-328109) DBG | Using SSH private key: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109/id_rsa (-rw-------)
	I0318 12:42:58.149299 1125718 main.go:141] libmachine: (ha-328109) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.253 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 12:42:58.149323 1125718 main.go:141] libmachine: (ha-328109) DBG | About to run SSH command:
	I0318 12:42:58.149346 1125718 main.go:141] libmachine: (ha-328109) DBG | exit 0
	I0318 12:42:58.273026 1125718 main.go:141] libmachine: (ha-328109) DBG | SSH cmd err, output: <nil>: 
	I0318 12:42:58.273298 1125718 main.go:141] libmachine: (ha-328109) KVM machine creation complete!
	I0318 12:42:58.273768 1125718 main.go:141] libmachine: (ha-328109) Calling .GetConfigRaw
	I0318 12:42:58.274300 1125718 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:42:58.274552 1125718 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:42:58.274716 1125718 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0318 12:42:58.274735 1125718 main.go:141] libmachine: (ha-328109) Calling .GetState
	I0318 12:42:58.276172 1125718 main.go:141] libmachine: Detecting operating system of created instance...
	I0318 12:42:58.276188 1125718 main.go:141] libmachine: Waiting for SSH to be available...
	I0318 12:42:58.276194 1125718 main.go:141] libmachine: Getting to WaitForSSH function...
	I0318 12:42:58.276200 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:42:58.278366 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:58.278730 1125718 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:42:58.278763 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:58.278938 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:42:58.279142 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:42:58.279304 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:42:58.279439 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:42:58.279593 1125718 main.go:141] libmachine: Using SSH client type: native
	I0318 12:42:58.279877 1125718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0318 12:42:58.279892 1125718 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0318 12:42:58.379768 1125718 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 12:42:58.379793 1125718 main.go:141] libmachine: Detecting the provisioner...
	I0318 12:42:58.379804 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:42:58.382812 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:58.383148 1125718 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:42:58.383172 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:58.383331 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:42:58.383563 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:42:58.383729 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:42:58.383876 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:42:58.384006 1125718 main.go:141] libmachine: Using SSH client type: native
	I0318 12:42:58.384182 1125718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0318 12:42:58.384194 1125718 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0318 12:42:58.485386 1125718 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0318 12:42:58.485520 1125718 main.go:141] libmachine: found compatible host: buildroot
	I0318 12:42:58.485531 1125718 main.go:141] libmachine: Provisioning with buildroot...
	I0318 12:42:58.485539 1125718 main.go:141] libmachine: (ha-328109) Calling .GetMachineName
	I0318 12:42:58.485792 1125718 buildroot.go:166] provisioning hostname "ha-328109"
	I0318 12:42:58.485820 1125718 main.go:141] libmachine: (ha-328109) Calling .GetMachineName
	I0318 12:42:58.486080 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:42:58.488787 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:58.489168 1125718 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:42:58.489199 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:58.489380 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:42:58.489562 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:42:58.489733 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:42:58.489895 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:42:58.490075 1125718 main.go:141] libmachine: Using SSH client type: native
	I0318 12:42:58.490294 1125718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0318 12:42:58.490313 1125718 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-328109 && echo "ha-328109" | sudo tee /etc/hostname
	I0318 12:42:58.608020 1125718 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-328109
	
	I0318 12:42:58.608058 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:42:58.610726 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:58.611084 1125718 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:42:58.611125 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:58.611274 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:42:58.611476 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:42:58.611682 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:42:58.611847 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:42:58.612031 1125718 main.go:141] libmachine: Using SSH client type: native
	I0318 12:42:58.612262 1125718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0318 12:42:58.612280 1125718 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-328109' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-328109/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-328109' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 12:42:58.726590 1125718 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 12:42:58.726624 1125718 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18429-1106816/.minikube CaCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18429-1106816/.minikube}
	I0318 12:42:58.726684 1125718 buildroot.go:174] setting up certificates
	I0318 12:42:58.726706 1125718 provision.go:84] configureAuth start
	I0318 12:42:58.726723 1125718 main.go:141] libmachine: (ha-328109) Calling .GetMachineName
	I0318 12:42:58.727009 1125718 main.go:141] libmachine: (ha-328109) Calling .GetIP
	I0318 12:42:58.729588 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:58.729936 1125718 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:42:58.729973 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:58.730146 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:42:58.732161 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:58.732493 1125718 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:42:58.732516 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:58.732643 1125718 provision.go:143] copyHostCerts
	I0318 12:42:58.732699 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 12:42:58.732739 1125718 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem, removing ...
	I0318 12:42:58.732751 1125718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 12:42:58.732832 1125718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem (1078 bytes)
	I0318 12:42:58.732959 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 12:42:58.732986 1125718 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem, removing ...
	I0318 12:42:58.732996 1125718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 12:42:58.733035 1125718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem (1123 bytes)
	I0318 12:42:58.733110 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 12:42:58.733131 1125718 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem, removing ...
	I0318 12:42:58.733140 1125718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 12:42:58.733176 1125718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem (1679 bytes)
	I0318 12:42:58.733256 1125718 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem org=jenkins.ha-328109 san=[127.0.0.1 192.168.39.253 ha-328109 localhost minikube]
	I0318 12:42:58.891821 1125718 provision.go:177] copyRemoteCerts
	I0318 12:42:58.891890 1125718 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 12:42:58.891922 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:42:58.894835 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:58.895175 1125718 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:42:58.895204 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:58.895396 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:42:58.895585 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:42:58.895742 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:42:58.895868 1125718 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109/id_rsa Username:docker}
	I0318 12:42:58.979289 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0318 12:42:58.979356 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 12:42:59.007758 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0318 12:42:59.007836 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0318 12:42:59.033766 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0318 12:42:59.033836 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 12:42:59.059468 1125718 provision.go:87] duration metric: took 332.748413ms to configureAuth
	I0318 12:42:59.059494 1125718 buildroot.go:189] setting minikube options for container-runtime
	I0318 12:42:59.059651 1125718 config.go:182] Loaded profile config "ha-328109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:42:59.059795 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:42:59.062390 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:59.062748 1125718 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:42:59.062778 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:59.062924 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:42:59.063124 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:42:59.063320 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:42:59.063491 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:42:59.063656 1125718 main.go:141] libmachine: Using SSH client type: native
	I0318 12:42:59.063827 1125718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0318 12:42:59.063851 1125718 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 12:42:59.339998 1125718 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 12:42:59.340032 1125718 main.go:141] libmachine: Checking connection to Docker...
	I0318 12:42:59.340055 1125718 main.go:141] libmachine: (ha-328109) Calling .GetURL
	I0318 12:42:59.341306 1125718 main.go:141] libmachine: (ha-328109) DBG | Using libvirt version 6000000
	I0318 12:42:59.343425 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:59.343752 1125718 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:42:59.343806 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:59.343929 1125718 main.go:141] libmachine: Docker is up and running!
	I0318 12:42:59.343945 1125718 main.go:141] libmachine: Reticulating splines...
	I0318 12:42:59.343953 1125718 client.go:171] duration metric: took 26.363034911s to LocalClient.Create
	I0318 12:42:59.343987 1125718 start.go:167] duration metric: took 26.363101491s to libmachine.API.Create "ha-328109"
	I0318 12:42:59.343997 1125718 start.go:293] postStartSetup for "ha-328109" (driver="kvm2")
	I0318 12:42:59.344007 1125718 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 12:42:59.344024 1125718 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:42:59.344243 1125718 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 12:42:59.344268 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:42:59.346277 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:59.346548 1125718 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:42:59.346582 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:59.346699 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:42:59.346894 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:42:59.347072 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:42:59.347266 1125718 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109/id_rsa Username:docker}
	I0318 12:42:59.427524 1125718 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 12:42:59.432462 1125718 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 12:42:59.432499 1125718 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/addons for local assets ...
	I0318 12:42:59.432567 1125718 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/files for local assets ...
	I0318 12:42:59.432654 1125718 filesync.go:149] local asset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> 11141362.pem in /etc/ssl/certs
	I0318 12:42:59.432667 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> /etc/ssl/certs/11141362.pem
	I0318 12:42:59.432797 1125718 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 12:42:59.442592 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 12:42:59.469019 1125718 start.go:296] duration metric: took 125.007436ms for postStartSetup
	I0318 12:42:59.469065 1125718 main.go:141] libmachine: (ha-328109) Calling .GetConfigRaw
	I0318 12:42:59.469773 1125718 main.go:141] libmachine: (ha-328109) Calling .GetIP
	I0318 12:42:59.472478 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:59.472842 1125718 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:42:59.472869 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:59.473167 1125718 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/config.json ...
	I0318 12:42:59.473395 1125718 start.go:128] duration metric: took 26.510754925s to createHost
	I0318 12:42:59.473423 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:42:59.475764 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:59.476083 1125718 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:42:59.476104 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:59.476225 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:42:59.476431 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:42:59.476603 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:42:59.476743 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:42:59.476873 1125718 main.go:141] libmachine: Using SSH client type: native
	I0318 12:42:59.477031 1125718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0318 12:42:59.477047 1125718 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 12:42:59.577227 1125718 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710765779.563409318
	
	I0318 12:42:59.577252 1125718 fix.go:216] guest clock: 1710765779.563409318
	I0318 12:42:59.577260 1125718 fix.go:229] Guest: 2024-03-18 12:42:59.563409318 +0000 UTC Remote: 2024-03-18 12:42:59.473409893 +0000 UTC m=+26.630089998 (delta=89.999425ms)
	I0318 12:42:59.577308 1125718 fix.go:200] guest clock delta is within tolerance: 89.999425ms
	I0318 12:42:59.577317 1125718 start.go:83] releasing machines lock for "ha-328109", held for 26.614764446s
	I0318 12:42:59.577342 1125718 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:42:59.577641 1125718 main.go:141] libmachine: (ha-328109) Calling .GetIP
	I0318 12:42:59.580162 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:59.580574 1125718 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:42:59.580601 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:59.580810 1125718 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:42:59.581276 1125718 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:42:59.581469 1125718 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:42:59.581591 1125718 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 12:42:59.581637 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:42:59.581651 1125718 ssh_runner.go:195] Run: cat /version.json
	I0318 12:42:59.581681 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:42:59.584224 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:59.584414 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:59.584656 1125718 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:42:59.584684 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:59.584778 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:42:59.584806 1125718 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:42:59.584830 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:42:59.584953 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:42:59.585016 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:42:59.585184 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:42:59.585198 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:42:59.585374 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:42:59.585372 1125718 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109/id_rsa Username:docker}
	I0318 12:42:59.585507 1125718 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109/id_rsa Username:docker}
	I0318 12:42:59.683378 1125718 ssh_runner.go:195] Run: systemctl --version
	I0318 12:42:59.689815 1125718 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 12:42:59.848150 1125718 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 12:42:59.855282 1125718 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 12:42:59.855355 1125718 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 12:42:59.872299 1125718 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 12:42:59.872320 1125718 start.go:494] detecting cgroup driver to use...
	I0318 12:42:59.872396 1125718 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 12:42:59.890688 1125718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 12:42:59.905298 1125718 docker.go:217] disabling cri-docker service (if available) ...
	I0318 12:42:59.905355 1125718 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 12:42:59.919060 1125718 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 12:42:59.932778 1125718 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 12:43:00.049114 1125718 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 12:43:00.182331 1125718 docker.go:233] disabling docker service ...
	I0318 12:43:00.182396 1125718 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 12:43:00.198331 1125718 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 12:43:00.212991 1125718 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 12:43:00.348866 1125718 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 12:43:00.469879 1125718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 12:43:00.485742 1125718 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 12:43:00.506025 1125718 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 12:43:00.506083 1125718 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 12:43:00.517952 1125718 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 12:43:00.518013 1125718 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 12:43:00.530178 1125718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 12:43:00.541859 1125718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 12:43:00.553792 1125718 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 12:43:00.565862 1125718 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 12:43:00.576407 1125718 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 12:43:00.576451 1125718 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 12:43:00.590759 1125718 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 12:43:00.601582 1125718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:43:00.718655 1125718 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 12:43:00.870021 1125718 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 12:43:00.870091 1125718 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 12:43:00.877167 1125718 start.go:562] Will wait 60s for crictl version
	I0318 12:43:00.877236 1125718 ssh_runner.go:195] Run: which crictl
	I0318 12:43:00.881823 1125718 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 12:43:00.923854 1125718 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 12:43:00.923930 1125718 ssh_runner.go:195] Run: crio --version
	I0318 12:43:00.955517 1125718 ssh_runner.go:195] Run: crio --version
	I0318 12:43:00.988604 1125718 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 12:43:00.990186 1125718 main.go:141] libmachine: (ha-328109) Calling .GetIP
	I0318 12:43:00.992525 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:43:00.992824 1125718 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:43:00.992853 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:43:00.993022 1125718 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0318 12:43:00.997695 1125718 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 12:43:01.011699 1125718 kubeadm.go:877] updating cluster {Name:ha-328109 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Cl
usterName:ha-328109 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.253 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 12:43:01.011827 1125718 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 12:43:01.011892 1125718 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 12:43:01.047347 1125718 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0318 12:43:01.047437 1125718 ssh_runner.go:195] Run: which lz4
	I0318 12:43:01.051747 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0318 12:43:01.051842 1125718 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 12:43:01.056408 1125718 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 12:43:01.056446 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0318 12:43:02.876595 1125718 crio.go:444] duration metric: took 1.82478261s to copy over tarball
	I0318 12:43:02.876680 1125718 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 12:43:05.445107 1125718 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.568390906s)
	I0318 12:43:05.445141 1125718 crio.go:451] duration metric: took 2.568510194s to extract the tarball
	I0318 12:43:05.445151 1125718 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 12:43:05.488343 1125718 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 12:43:05.538446 1125718 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 12:43:05.538475 1125718 cache_images.go:84] Images are preloaded, skipping loading
	I0318 12:43:05.538484 1125718 kubeadm.go:928] updating node { 192.168.39.253 8443 v1.28.4 crio true true} ...
	I0318 12:43:05.538616 1125718 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-328109 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.253
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-328109 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 12:43:05.538696 1125718 ssh_runner.go:195] Run: crio config
	I0318 12:43:05.588974 1125718 cni.go:84] Creating CNI manager for ""
	I0318 12:43:05.589000 1125718 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0318 12:43:05.589012 1125718 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 12:43:05.589038 1125718 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.253 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-328109 NodeName:ha-328109 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.253"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.253 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 12:43:05.589267 1125718 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.253
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-328109"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.253
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.253"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 12:43:05.589299 1125718 kube-vip.go:111] generating kube-vip config ...
	I0318 12:43:05.589345 1125718 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0318 12:43:05.607828 1125718 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0318 12:43:05.607991 1125718 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0318 12:43:05.608051 1125718 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 12:43:05.619777 1125718 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 12:43:05.619841 1125718 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0318 12:43:05.630602 1125718 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0318 12:43:05.648883 1125718 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 12:43:05.666806 1125718 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0318 12:43:05.684911 1125718 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0318 12:43:05.702918 1125718 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0318 12:43:05.707333 1125718 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 12:43:05.721730 1125718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:43:05.844199 1125718 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 12:43:05.865494 1125718 certs.go:68] Setting up /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109 for IP: 192.168.39.253
	I0318 12:43:05.865521 1125718 certs.go:194] generating shared ca certs ...
	I0318 12:43:05.865541 1125718 certs.go:226] acquiring lock for ca certs: {Name:mka24dbce78b8549c3bcfe7842863cce2dcda207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:43:05.865749 1125718 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key
	I0318 12:43:05.865833 1125718 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key
	I0318 12:43:05.865854 1125718 certs.go:256] generating profile certs ...
	I0318 12:43:05.865939 1125718 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/client.key
	I0318 12:43:05.865958 1125718 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/client.crt with IP's: []
	I0318 12:43:06.059925 1125718 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/client.crt ...
	I0318 12:43:06.059957 1125718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/client.crt: {Name:mk98d1028bb046ec14cfc2db8eaed8adeb0938fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:43:06.060157 1125718 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/client.key ...
	I0318 12:43:06.060172 1125718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/client.key: {Name:mkad7c16b97c067b718bfe3b7a476b91257e5668 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:43:06.060295 1125718 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key.06b165d1
	I0318 12:43:06.060322 1125718 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt.06b165d1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.253 192.168.39.254]
	I0318 12:43:06.137070 1125718 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt.06b165d1 ...
	I0318 12:43:06.137102 1125718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt.06b165d1: {Name:mk3e37e6b5fb439da6c5ece9a6decbb4962ddeae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:43:06.137279 1125718 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key.06b165d1 ...
	I0318 12:43:06.137301 1125718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key.06b165d1: {Name:mk97eab05a308922396449b4f891c0c3075c0118 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:43:06.137396 1125718 certs.go:381] copying /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt.06b165d1 -> /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt
	I0318 12:43:06.137521 1125718 certs.go:385] copying /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key.06b165d1 -> /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key
	I0318 12:43:06.137607 1125718 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/proxy-client.key
	I0318 12:43:06.137626 1125718 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/proxy-client.crt with IP's: []
	I0318 12:43:06.201657 1125718 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/proxy-client.crt ...
	I0318 12:43:06.201692 1125718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/proxy-client.crt: {Name:mkf1fee34716d4ec97d785b76997dc5ca77c33e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:43:06.201908 1125718 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/proxy-client.key ...
	I0318 12:43:06.201926 1125718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/proxy-client.key: {Name:mkbe491bf5b0ea170f6d25c9f206dd2996a733e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:43:06.202029 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 12:43:06.202055 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0318 12:43:06.202077 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 12:43:06.202100 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 12:43:06.202117 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0318 12:43:06.202130 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0318 12:43:06.202146 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0318 12:43:06.202165 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0318 12:43:06.202232 1125718 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem (1338 bytes)
	W0318 12:43:06.202290 1125718 certs.go:480] ignoring /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136_empty.pem, impossibly tiny 0 bytes
	I0318 12:43:06.202304 1125718 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 12:43:06.202345 1125718 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem (1078 bytes)
	I0318 12:43:06.202374 1125718 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem (1123 bytes)
	I0318 12:43:06.202403 1125718 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem (1679 bytes)
	I0318 12:43:06.202459 1125718 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 12:43:06.202498 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> /usr/share/ca-certificates/11141362.pem
	I0318 12:43:06.202518 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:43:06.202536 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem -> /usr/share/ca-certificates/1114136.pem
	I0318 12:43:06.203187 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 12:43:06.231563 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 12:43:06.259372 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 12:43:06.286691 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 12:43:06.313297 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0318 12:43:06.342092 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 12:43:06.368547 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 12:43:06.395181 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 12:43:06.422955 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /usr/share/ca-certificates/11141362.pem (1708 bytes)
	I0318 12:43:06.449465 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 12:43:06.476590 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem --> /usr/share/ca-certificates/1114136.pem (1338 bytes)
	I0318 12:43:06.503893 1125718 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 12:43:06.524061 1125718 ssh_runner.go:195] Run: openssl version
	I0318 12:43:06.530679 1125718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11141362.pem && ln -fs /usr/share/ca-certificates/11141362.pem /etc/ssl/certs/11141362.pem"
	I0318 12:43:06.544560 1125718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11141362.pem
	I0318 12:43:06.550018 1125718 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:26 /usr/share/ca-certificates/11141362.pem
	I0318 12:43:06.550065 1125718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11141362.pem
	I0318 12:43:06.556606 1125718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11141362.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 12:43:06.570092 1125718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 12:43:06.582834 1125718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:43:06.588037 1125718 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:43:06.588086 1125718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:43:06.594336 1125718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 12:43:06.607303 1125718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1114136.pem && ln -fs /usr/share/ca-certificates/1114136.pem /etc/ssl/certs/1114136.pem"
	I0318 12:43:06.620246 1125718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1114136.pem
	I0318 12:43:06.625361 1125718 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:26 /usr/share/ca-certificates/1114136.pem
	I0318 12:43:06.625413 1125718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1114136.pem
	I0318 12:43:06.631604 1125718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1114136.pem /etc/ssl/certs/51391683.0"
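Each CA bundle copied under /usr/share/ca-certificates is made visible to OpenSSL-based clients by symlinking it into /etc/ssl/certs under its subject hash, which is exactly what the `openssl x509 -hash -noout` plus `ln -fs` pairs above do. A small Go sketch that mirrors those two shell steps with os/exec (the cert path is a placeholder, and the symlink step needs root):

    // hashlink.go: link a CA cert into /etc/ssl/certs under its openssl subject hash,
    // mirroring the shell steps in the log above.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	certPath := "/usr/share/ca-certificates/minikubeCA.pem" // placeholder path

    	// Equivalent of: openssl x509 -hash -noout -in <cert>
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "openssl:", err)
    		os.Exit(1)
    	}
    	hash := strings.TrimSpace(string(out))

    	// Equivalent of: ln -fs <cert> /etc/ssl/certs/<hash>.0
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link)
    	if err := os.Symlink(certPath, link); err != nil {
    		fmt.Fprintln(os.Stderr, "symlink:", err)
    		os.Exit(1)
    	}
    	fmt.Println("linked", certPath, "->", link)
    }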
	I0318 12:43:06.643973 1125718 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 12:43:06.648604 1125718 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 12:43:06.648662 1125718 kubeadm.go:391] StartCluster: {Name:ha-328109 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-328109 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.253 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 12:43:06.648748 1125718 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 12:43:06.648828 1125718 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 12:43:06.700166 1125718 cri.go:89] found id: ""
	I0318 12:43:06.700240 1125718 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0318 12:43:06.719130 1125718 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 12:43:06.735317 1125718 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 12:43:06.751003 1125718 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 12:43:06.751024 1125718 kubeadm.go:156] found existing configuration files:
	
	I0318 12:43:06.751065 1125718 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 12:43:06.761703 1125718 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 12:43:06.761748 1125718 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 12:43:06.772582 1125718 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 12:43:06.783256 1125718 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 12:43:06.783310 1125718 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 12:43:06.794478 1125718 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 12:43:06.805372 1125718 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 12:43:06.805430 1125718 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 12:43:06.817218 1125718 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 12:43:06.826995 1125718 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 12:43:06.827050 1125718 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 12:43:06.837914 1125718 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 12:43:06.948251 1125718 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 12:43:06.948313 1125718 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 12:43:07.085088 1125718 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 12:43:07.085240 1125718 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 12:43:07.085364 1125718 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0318 12:43:07.307399 1125718 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 12:43:07.415769 1125718 out.go:204]   - Generating certificates and keys ...
	I0318 12:43:07.415882 1125718 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 12:43:07.415963 1125718 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 12:43:07.548702 1125718 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0318 12:43:07.595062 1125718 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0318 12:43:07.842592 1125718 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0318 12:43:07.910806 1125718 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0318 12:43:08.058724 1125718 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0318 12:43:08.058857 1125718 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-328109 localhost] and IPs [192.168.39.253 127.0.0.1 ::1]
	I0318 12:43:08.280941 1125718 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0318 12:43:08.281223 1125718 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-328109 localhost] and IPs [192.168.39.253 127.0.0.1 ::1]
	I0318 12:43:08.675729 1125718 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0318 12:43:08.848717 1125718 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0318 12:43:08.915219 1125718 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0318 12:43:08.915399 1125718 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 12:43:09.279825 1125718 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 12:43:09.339098 1125718 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 12:43:09.494758 1125718 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 12:43:09.734925 1125718 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 12:43:09.736742 1125718 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 12:43:09.742603 1125718 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 12:43:09.744602 1125718 out.go:204]   - Booting up control plane ...
	I0318 12:43:09.744708 1125718 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 12:43:09.744800 1125718 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 12:43:09.744857 1125718 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 12:43:09.763160 1125718 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 12:43:09.763939 1125718 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 12:43:09.763985 1125718 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 12:43:09.915668 1125718 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 12:43:19.535392 1125718 kubeadm.go:309] [apiclient] All control plane components are healthy after 9.620407 seconds
	I0318 12:43:19.535517 1125718 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 12:43:19.562002 1125718 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 12:43:20.114490 1125718 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 12:43:20.114731 1125718 kubeadm.go:309] [mark-control-plane] Marking the node ha-328109 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 12:43:20.630454 1125718 kubeadm.go:309] [bootstrap-token] Using token: fi8sec.f0o3w4sfps43kmi2
	I0318 12:43:20.632029 1125718 out.go:204]   - Configuring RBAC rules ...
	I0318 12:43:20.632153 1125718 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 12:43:20.638344 1125718 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 12:43:20.648575 1125718 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 12:43:20.652191 1125718 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 12:43:20.655760 1125718 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 12:43:20.660143 1125718 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 12:43:20.716031 1125718 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 12:43:20.970658 1125718 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 12:43:21.081598 1125718 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 12:43:21.083185 1125718 kubeadm.go:309] 
	I0318 12:43:21.083260 1125718 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 12:43:21.083271 1125718 kubeadm.go:309] 
	I0318 12:43:21.083374 1125718 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 12:43:21.083401 1125718 kubeadm.go:309] 
	I0318 12:43:21.083441 1125718 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 12:43:21.083516 1125718 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 12:43:21.083598 1125718 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 12:43:21.083613 1125718 kubeadm.go:309] 
	I0318 12:43:21.083715 1125718 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 12:43:21.083734 1125718 kubeadm.go:309] 
	I0318 12:43:21.083825 1125718 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 12:43:21.083845 1125718 kubeadm.go:309] 
	I0318 12:43:21.083934 1125718 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 12:43:21.084053 1125718 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 12:43:21.084167 1125718 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 12:43:21.084185 1125718 kubeadm.go:309] 
	I0318 12:43:21.084319 1125718 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 12:43:21.084453 1125718 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 12:43:21.084463 1125718 kubeadm.go:309] 
	I0318 12:43:21.084553 1125718 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token fi8sec.f0o3w4sfps43kmi2 \
	I0318 12:43:21.084688 1125718 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a1344f0f3711f58fc1cd7b9626e9cee7b8515c38b8838e5f1a0d7979c7202ddf \
	I0318 12:43:21.084722 1125718 kubeadm.go:309] 	--control-plane 
	I0318 12:43:21.084731 1125718 kubeadm.go:309] 
	I0318 12:43:21.084852 1125718 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 12:43:21.084862 1125718 kubeadm.go:309] 
	I0318 12:43:21.084960 1125718 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token fi8sec.f0o3w4sfps43kmi2 \
	I0318 12:43:21.085105 1125718 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a1344f0f3711f58fc1cd7b9626e9cee7b8515c38b8838e5f1a0d7979c7202ddf 
	I0318 12:43:21.086261 1125718 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
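The --discovery-token-ca-cert-hash in both join commands above is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info, so it can be recomputed from ca.crt to verify a join command out of band. A short Go sketch, with the cert path as a placeholder:

    // cahash.go: recompute kubeadm's --discovery-token-ca-cert-hash from a CA certificate.
    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// Hash the DER-encoded SubjectPublicKeyInfo, which is what kubeadm prints as sha256:<hex>.
    	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }

The output should match the hash shown in the log (a1344f0f...) for this cluster's CA.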
	I0318 12:43:21.086293 1125718 cni.go:84] Creating CNI manager for ""
	I0318 12:43:21.086307 1125718 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0318 12:43:21.088108 1125718 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0318 12:43:21.089501 1125718 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0318 12:43:21.112180 1125718 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0318 12:43:21.112203 1125718 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0318 12:43:21.199282 1125718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0318 12:43:22.196147 1125718 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 12:43:22.196247 1125718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:43:22.196247 1125718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-328109 minikube.k8s.io/updated_at=2024_03_18T12_43_22_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a minikube.k8s.io/name=ha-328109 minikube.k8s.io/primary=true
	I0318 12:43:22.210588 1125718 ops.go:34] apiserver oom_adj: -16
	I0318 12:43:22.379332 1125718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:43:22.879356 1125718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:43:23.379974 1125718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:43:23.880167 1125718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:43:24.379327 1125718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:43:24.879818 1125718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:43:25.380309 1125718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:43:25.880218 1125718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:43:26.379974 1125718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:43:26.879374 1125718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:43:27.380212 1125718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:43:27.879608 1125718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:43:28.379586 1125718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:43:28.879340 1125718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:43:29.379342 1125718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:43:29.880361 1125718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:43:30.379853 1125718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:43:30.879547 1125718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:43:31.379737 1125718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 12:43:31.507187 1125718 kubeadm.go:1107] duration metric: took 9.3110211s to wait for elevateKubeSystemPrivileges
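The burst of repeated `kubectl get sa default` calls above is minikube's elevateKubeSystemPrivileges step polling until the `default` ServiceAccount exists in the default namespace, a common readiness signal after kubeadm init. The same wait can be expressed with client-go; a sketch assuming a kubeconfig at a placeholder path:

    // wait_sa.go: poll until the "default" ServiceAccount exists, like the kubectl loop in the log.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 2*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			// Keep polling on lookup errors; the SA shows up shortly after the control plane is healthy.
    			_, err := client.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
    			return err == nil, nil
    		})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("default service account is ready")
    }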
	W0318 12:43:31.507230 1125718 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 12:43:31.507240 1125718 kubeadm.go:393] duration metric: took 24.85858693s to StartCluster
	I0318 12:43:31.507264 1125718 settings.go:142] acquiring lock: {Name:mk2d6b94ee5fa5f1dbbb15ba1d5560c3c0f78110 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:43:31.507355 1125718 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 12:43:31.508126 1125718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/kubeconfig: {Name:mk9c139f2702214315ee08dd7c5d02f739047458 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:43:31.508398 1125718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0318 12:43:31.508417 1125718 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 12:43:31.508486 1125718 addons.go:69] Setting storage-provisioner=true in profile "ha-328109"
	I0318 12:43:31.508513 1125718 addons.go:234] Setting addon storage-provisioner=true in "ha-328109"
	I0318 12:43:31.508532 1125718 addons.go:69] Setting default-storageclass=true in profile "ha-328109"
	I0318 12:43:31.508574 1125718 host.go:66] Checking if "ha-328109" exists ...
	I0318 12:43:31.508603 1125718 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-328109"
	I0318 12:43:31.508671 1125718 config.go:182] Loaded profile config "ha-328109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:43:31.508389 1125718 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.253 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 12:43:31.508727 1125718 start.go:240] waiting for startup goroutines ...
	I0318 12:43:31.509020 1125718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:43:31.509040 1125718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:43:31.509070 1125718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:43:31.509224 1125718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:43:31.524299 1125718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46755
	I0318 12:43:31.524409 1125718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46617
	I0318 12:43:31.524812 1125718 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:43:31.524868 1125718 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:43:31.525347 1125718 main.go:141] libmachine: Using API Version  1
	I0318 12:43:31.525369 1125718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:43:31.525507 1125718 main.go:141] libmachine: Using API Version  1
	I0318 12:43:31.525533 1125718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:43:31.525828 1125718 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:43:31.525853 1125718 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:43:31.526065 1125718 main.go:141] libmachine: (ha-328109) Calling .GetState
	I0318 12:43:31.526360 1125718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:43:31.526400 1125718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:43:31.528502 1125718 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 12:43:31.528879 1125718 kapi.go:59] client config for ha-328109: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/client.crt", KeyFile:"/home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/client.key", CAFile:"/home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c57de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 12:43:31.529504 1125718 cert_rotation.go:137] Starting client certificate rotation controller
	I0318 12:43:31.529773 1125718 addons.go:234] Setting addon default-storageclass=true in "ha-328109"
	I0318 12:43:31.529821 1125718 host.go:66] Checking if "ha-328109" exists ...
	I0318 12:43:31.530209 1125718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:43:31.530256 1125718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:43:31.542348 1125718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34067
	I0318 12:43:31.542880 1125718 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:43:31.543576 1125718 main.go:141] libmachine: Using API Version  1
	I0318 12:43:31.543600 1125718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:43:31.544037 1125718 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:43:31.544253 1125718 main.go:141] libmachine: (ha-328109) Calling .GetState
	I0318 12:43:31.545543 1125718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37975
	I0318 12:43:31.545963 1125718 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:43:31.546406 1125718 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:43:31.546501 1125718 main.go:141] libmachine: Using API Version  1
	I0318 12:43:31.546520 1125718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:43:31.548743 1125718 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 12:43:31.546859 1125718 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:43:31.550221 1125718 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 12:43:31.550245 1125718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 12:43:31.550264 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:43:31.550507 1125718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:43:31.550543 1125718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:43:31.553144 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:43:31.553615 1125718 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:43:31.553646 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:43:31.553759 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:43:31.553931 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:43:31.554100 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:43:31.554212 1125718 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109/id_rsa Username:docker}
	I0318 12:43:31.565892 1125718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45463
	I0318 12:43:31.566251 1125718 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:43:31.566676 1125718 main.go:141] libmachine: Using API Version  1
	I0318 12:43:31.566696 1125718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:43:31.566999 1125718 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:43:31.567188 1125718 main.go:141] libmachine: (ha-328109) Calling .GetState
	I0318 12:43:31.568775 1125718 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:43:31.569037 1125718 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 12:43:31.569053 1125718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 12:43:31.569067 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:43:31.571988 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:43:31.572402 1125718 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:43:31.572441 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:43:31.572579 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:43:31.572763 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:43:31.572919 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:43:31.573063 1125718 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109/id_rsa Username:docker}
	I0318 12:43:31.702231 1125718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 12:43:31.727244 1125718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 12:43:31.734958 1125718 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
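The sed pipeline above edits the coredns ConfigMap in place: a `log` directive is inserted ahead of the existing `errors` line, and a `hosts` block is inserted ahead of `forward . /etc/resolv.conf`, so in-cluster lookups of host.minikube.internal resolve to the host gateway (192.168.39.1 here) while everything else falls through to the normal forwarder. The injected Corefile fragment looks roughly like this, with the rest of the stanza unchanged:

    log
    hosts {
       192.168.39.1 host.minikube.internal
       fallthrough
    }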
	I0318 12:43:32.686170 1125718 main.go:141] libmachine: Making call to close driver server
	I0318 12:43:32.686194 1125718 main.go:141] libmachine: (ha-328109) Calling .Close
	I0318 12:43:32.686225 1125718 start.go:948] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0318 12:43:32.686286 1125718 main.go:141] libmachine: Making call to close driver server
	I0318 12:43:32.686308 1125718 main.go:141] libmachine: (ha-328109) Calling .Close
	I0318 12:43:32.686534 1125718 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:43:32.686551 1125718 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:43:32.686560 1125718 main.go:141] libmachine: Making call to close driver server
	I0318 12:43:32.686568 1125718 main.go:141] libmachine: (ha-328109) Calling .Close
	I0318 12:43:32.686667 1125718 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:43:32.686685 1125718 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:43:32.686694 1125718 main.go:141] libmachine: Making call to close driver server
	I0318 12:43:32.686708 1125718 main.go:141] libmachine: (ha-328109) Calling .Close
	I0318 12:43:32.686872 1125718 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:43:32.686886 1125718 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:43:32.686998 1125718 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0318 12:43:32.687015 1125718 round_trippers.go:469] Request Headers:
	I0318 12:43:32.687025 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:32.687030 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:43:32.687040 1125718 main.go:141] libmachine: (ha-328109) DBG | Closing plugin on server side
	I0318 12:43:32.687109 1125718 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:43:32.687149 1125718 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:43:32.699529 1125718 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0318 12:43:32.700139 1125718 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0318 12:43:32.700157 1125718 round_trippers.go:469] Request Headers:
	I0318 12:43:32.700164 1125718 round_trippers.go:473]     Content-Type: application/json
	I0318 12:43:32.700167 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:43:32.700169 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:43:32.702859 1125718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:43:32.703005 1125718 main.go:141] libmachine: Making call to close driver server
	I0318 12:43:32.703017 1125718 main.go:141] libmachine: (ha-328109) Calling .Close
	I0318 12:43:32.703293 1125718 main.go:141] libmachine: Successfully made call to close driver server
	I0318 12:43:32.703326 1125718 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 12:43:32.705165 1125718 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0318 12:43:32.706458 1125718 addons.go:505] duration metric: took 1.198043636s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0318 12:43:32.706490 1125718 start.go:245] waiting for cluster config update ...
	I0318 12:43:32.706501 1125718 start.go:254] writing updated cluster config ...
	I0318 12:43:32.708205 1125718 out.go:177] 
	I0318 12:43:32.709707 1125718 config.go:182] Loaded profile config "ha-328109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:43:32.709776 1125718 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/config.json ...
	I0318 12:43:32.711333 1125718 out.go:177] * Starting "ha-328109-m02" control-plane node in "ha-328109" cluster
	I0318 12:43:32.712448 1125718 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 12:43:32.712474 1125718 cache.go:56] Caching tarball of preloaded images
	I0318 12:43:32.712584 1125718 preload.go:173] Found /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 12:43:32.712600 1125718 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0318 12:43:32.712676 1125718 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/config.json ...
	I0318 12:43:32.712846 1125718 start.go:360] acquireMachinesLock for ha-328109-m02: {Name:mk0b1a2e71faf079d0c16c4e1393bdff17be3dfd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 12:43:32.712887 1125718 start.go:364] duration metric: took 23.508µs to acquireMachinesLock for "ha-328109-m02"
	I0318 12:43:32.712907 1125718 start.go:93] Provisioning new machine with config: &{Name:ha-328109 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-328109 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.253 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 12:43:32.712972 1125718 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0318 12:43:32.714457 1125718 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 12:43:32.714536 1125718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:43:32.714572 1125718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:43:32.729074 1125718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36541
	I0318 12:43:32.729506 1125718 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:43:32.729973 1125718 main.go:141] libmachine: Using API Version  1
	I0318 12:43:32.729995 1125718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:43:32.730340 1125718 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:43:32.730540 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetMachineName
	I0318 12:43:32.730708 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .DriverName
	I0318 12:43:32.730861 1125718 start.go:159] libmachine.API.Create for "ha-328109" (driver="kvm2")
	I0318 12:43:32.730892 1125718 client.go:168] LocalClient.Create starting
	I0318 12:43:32.730921 1125718 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem
	I0318 12:43:32.730960 1125718 main.go:141] libmachine: Decoding PEM data...
	I0318 12:43:32.730979 1125718 main.go:141] libmachine: Parsing certificate...
	I0318 12:43:32.731046 1125718 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem
	I0318 12:43:32.731070 1125718 main.go:141] libmachine: Decoding PEM data...
	I0318 12:43:32.731088 1125718 main.go:141] libmachine: Parsing certificate...
	I0318 12:43:32.731116 1125718 main.go:141] libmachine: Running pre-create checks...
	I0318 12:43:32.731128 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .PreCreateCheck
	I0318 12:43:32.731316 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetConfigRaw
	I0318 12:43:32.731704 1125718 main.go:141] libmachine: Creating machine...
	I0318 12:43:32.731720 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .Create
	I0318 12:43:32.731875 1125718 main.go:141] libmachine: (ha-328109-m02) Creating KVM machine...
	I0318 12:43:32.733153 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | found existing default KVM network
	I0318 12:43:32.733356 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | found existing private KVM network mk-ha-328109
	I0318 12:43:32.733514 1125718 main.go:141] libmachine: (ha-328109-m02) Setting up store path in /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m02 ...
	I0318 12:43:32.733543 1125718 main.go:141] libmachine: (ha-328109-m02) Building disk image from file:///home/jenkins/minikube-integration/18429-1106816/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso
	I0318 12:43:32.733589 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | I0318 12:43:32.733486 1126085 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18429-1106816/.minikube
	I0318 12:43:32.733788 1125718 main.go:141] libmachine: (ha-328109-m02) Downloading /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18429-1106816/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0318 12:43:32.986625 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | I0318 12:43:32.986490 1126085 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m02/id_rsa...
	I0318 12:43:33.068219 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | I0318 12:43:33.068080 1126085 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m02/ha-328109-m02.rawdisk...
	I0318 12:43:33.068258 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | Writing magic tar header
	I0318 12:43:33.068272 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | Writing SSH key tar header
	I0318 12:43:33.068284 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | I0318 12:43:33.068215 1126085 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m02 ...
	I0318 12:43:33.068390 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m02
	I0318 12:43:33.068435 1125718 main.go:141] libmachine: (ha-328109-m02) Setting executable bit set on /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m02 (perms=drwx------)
	I0318 12:43:33.068449 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines
	I0318 12:43:33.068471 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18429-1106816/.minikube
	I0318 12:43:33.068480 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18429-1106816
	I0318 12:43:33.068490 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0318 12:43:33.068507 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | Checking permissions on dir: /home/jenkins
	I0318 12:43:33.068518 1125718 main.go:141] libmachine: (ha-328109-m02) Setting executable bit set on /home/jenkins/minikube-integration/18429-1106816/.minikube/machines (perms=drwxr-xr-x)
	I0318 12:43:33.068531 1125718 main.go:141] libmachine: (ha-328109-m02) Setting executable bit set on /home/jenkins/minikube-integration/18429-1106816/.minikube (perms=drwxr-xr-x)
	I0318 12:43:33.068545 1125718 main.go:141] libmachine: (ha-328109-m02) Setting executable bit set on /home/jenkins/minikube-integration/18429-1106816 (perms=drwxrwxr-x)
	I0318 12:43:33.068557 1125718 main.go:141] libmachine: (ha-328109-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0318 12:43:33.068568 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | Checking permissions on dir: /home
	I0318 12:43:33.068592 1125718 main.go:141] libmachine: (ha-328109-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0318 12:43:33.068608 1125718 main.go:141] libmachine: (ha-328109-m02) Creating domain...
	I0318 12:43:33.068621 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | Skipping /home - not owner
	I0318 12:43:33.069682 1125718 main.go:141] libmachine: (ha-328109-m02) define libvirt domain using xml: 
	I0318 12:43:33.069706 1125718 main.go:141] libmachine: (ha-328109-m02) <domain type='kvm'>
	I0318 12:43:33.069717 1125718 main.go:141] libmachine: (ha-328109-m02)   <name>ha-328109-m02</name>
	I0318 12:43:33.069730 1125718 main.go:141] libmachine: (ha-328109-m02)   <memory unit='MiB'>2200</memory>
	I0318 12:43:33.069742 1125718 main.go:141] libmachine: (ha-328109-m02)   <vcpu>2</vcpu>
	I0318 12:43:33.069753 1125718 main.go:141] libmachine: (ha-328109-m02)   <features>
	I0318 12:43:33.069761 1125718 main.go:141] libmachine: (ha-328109-m02)     <acpi/>
	I0318 12:43:33.069770 1125718 main.go:141] libmachine: (ha-328109-m02)     <apic/>
	I0318 12:43:33.069778 1125718 main.go:141] libmachine: (ha-328109-m02)     <pae/>
	I0318 12:43:33.069788 1125718 main.go:141] libmachine: (ha-328109-m02)     
	I0318 12:43:33.069796 1125718 main.go:141] libmachine: (ha-328109-m02)   </features>
	I0318 12:43:33.069810 1125718 main.go:141] libmachine: (ha-328109-m02)   <cpu mode='host-passthrough'>
	I0318 12:43:33.069836 1125718 main.go:141] libmachine: (ha-328109-m02)   
	I0318 12:43:33.069855 1125718 main.go:141] libmachine: (ha-328109-m02)   </cpu>
	I0318 12:43:33.069905 1125718 main.go:141] libmachine: (ha-328109-m02)   <os>
	I0318 12:43:33.069932 1125718 main.go:141] libmachine: (ha-328109-m02)     <type>hvm</type>
	I0318 12:43:33.069943 1125718 main.go:141] libmachine: (ha-328109-m02)     <boot dev='cdrom'/>
	I0318 12:43:33.069953 1125718 main.go:141] libmachine: (ha-328109-m02)     <boot dev='hd'/>
	I0318 12:43:33.069967 1125718 main.go:141] libmachine: (ha-328109-m02)     <bootmenu enable='no'/>
	I0318 12:43:33.069977 1125718 main.go:141] libmachine: (ha-328109-m02)   </os>
	I0318 12:43:33.069987 1125718 main.go:141] libmachine: (ha-328109-m02)   <devices>
	I0318 12:43:33.070017 1125718 main.go:141] libmachine: (ha-328109-m02)     <disk type='file' device='cdrom'>
	I0318 12:43:33.070033 1125718 main.go:141] libmachine: (ha-328109-m02)       <source file='/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m02/boot2docker.iso'/>
	I0318 12:43:33.070047 1125718 main.go:141] libmachine: (ha-328109-m02)       <target dev='hdc' bus='scsi'/>
	I0318 12:43:33.070058 1125718 main.go:141] libmachine: (ha-328109-m02)       <readonly/>
	I0318 12:43:33.070069 1125718 main.go:141] libmachine: (ha-328109-m02)     </disk>
	I0318 12:43:33.070080 1125718 main.go:141] libmachine: (ha-328109-m02)     <disk type='file' device='disk'>
	I0318 12:43:33.070093 1125718 main.go:141] libmachine: (ha-328109-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0318 12:43:33.070108 1125718 main.go:141] libmachine: (ha-328109-m02)       <source file='/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m02/ha-328109-m02.rawdisk'/>
	I0318 12:43:33.070119 1125718 main.go:141] libmachine: (ha-328109-m02)       <target dev='hda' bus='virtio'/>
	I0318 12:43:33.070139 1125718 main.go:141] libmachine: (ha-328109-m02)     </disk>
	I0318 12:43:33.070161 1125718 main.go:141] libmachine: (ha-328109-m02)     <interface type='network'>
	I0318 12:43:33.070171 1125718 main.go:141] libmachine: (ha-328109-m02)       <source network='mk-ha-328109'/>
	I0318 12:43:33.070182 1125718 main.go:141] libmachine: (ha-328109-m02)       <model type='virtio'/>
	I0318 12:43:33.070194 1125718 main.go:141] libmachine: (ha-328109-m02)     </interface>
	I0318 12:43:33.070205 1125718 main.go:141] libmachine: (ha-328109-m02)     <interface type='network'>
	I0318 12:43:33.070219 1125718 main.go:141] libmachine: (ha-328109-m02)       <source network='default'/>
	I0318 12:43:33.070226 1125718 main.go:141] libmachine: (ha-328109-m02)       <model type='virtio'/>
	I0318 12:43:33.070251 1125718 main.go:141] libmachine: (ha-328109-m02)     </interface>
	I0318 12:43:33.070274 1125718 main.go:141] libmachine: (ha-328109-m02)     <serial type='pty'>
	I0318 12:43:33.070287 1125718 main.go:141] libmachine: (ha-328109-m02)       <target port='0'/>
	I0318 12:43:33.070297 1125718 main.go:141] libmachine: (ha-328109-m02)     </serial>
	I0318 12:43:33.070306 1125718 main.go:141] libmachine: (ha-328109-m02)     <console type='pty'>
	I0318 12:43:33.070318 1125718 main.go:141] libmachine: (ha-328109-m02)       <target type='serial' port='0'/>
	I0318 12:43:33.070328 1125718 main.go:141] libmachine: (ha-328109-m02)     </console>
	I0318 12:43:33.070335 1125718 main.go:141] libmachine: (ha-328109-m02)     <rng model='virtio'>
	I0318 12:43:33.070348 1125718 main.go:141] libmachine: (ha-328109-m02)       <backend model='random'>/dev/random</backend>
	I0318 12:43:33.070362 1125718 main.go:141] libmachine: (ha-328109-m02)     </rng>
	I0318 12:43:33.070370 1125718 main.go:141] libmachine: (ha-328109-m02)     
	I0318 12:43:33.070379 1125718 main.go:141] libmachine: (ha-328109-m02)     
	I0318 12:43:33.070388 1125718 main.go:141] libmachine: (ha-328109-m02)   </devices>
	I0318 12:43:33.070397 1125718 main.go:141] libmachine: (ha-328109-m02) </domain>
	I0318 12:43:33.070409 1125718 main.go:141] libmachine: (ha-328109-m02) 
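
The XML block logged above is what the kvm2 driver hands to libvirt before booting the secondary control-plane node: a 2200 MiB / 2 vCPU guest with the boot2docker ISO attached as a cdrom, the raw disk as the primary drive, and NICs on both the private mk-ha-328109 network and the default libvirt network. As a rough illustration of the define-and-start step (not the driver's actual code; the connection URI and helper name are assumptions), the github.com/libvirt/libvirt-go bindings can be used like this:

    package main

    import (
    	"log"

    	libvirt "github.com/libvirt/libvirt-go" // binding assumed for this sketch
    )

    // defineAndStart persistently defines a domain from XML and boots it, which is
    // what the "define libvirt domain using xml" / "Creating domain..." lines above do.
    func defineAndStart(domainXML string) error {
    	conn, err := libvirt.NewConnect("qemu:///system") // URI is an assumption
    	if err != nil {
    		return err
    	}
    	defer conn.Close()

    	dom, err := conn.DomainDefineXML(domainXML)
    	if err != nil {
    		return err
    	}
    	defer dom.Free()

    	return dom.Create() // starts the guest; DHCP/IP discovery follows
    }

    func main() {
    	if err := defineAndStart("<domain type='kvm'>…</domain>"); err != nil {
    		log.Fatal(err)
    	}
    }
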
	I0318 12:43:33.077496 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:cb:19:9f in network default
	I0318 12:43:33.078120 1125718 main.go:141] libmachine: (ha-328109-m02) Ensuring networks are active...
	I0318 12:43:33.078147 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:33.078924 1125718 main.go:141] libmachine: (ha-328109-m02) Ensuring network default is active
	I0318 12:43:33.079244 1125718 main.go:141] libmachine: (ha-328109-m02) Ensuring network mk-ha-328109 is active
	I0318 12:43:33.079670 1125718 main.go:141] libmachine: (ha-328109-m02) Getting domain xml...
	I0318 12:43:33.080417 1125718 main.go:141] libmachine: (ha-328109-m02) Creating domain...
	I0318 12:43:34.270519 1125718 main.go:141] libmachine: (ha-328109-m02) Waiting to get IP...
	I0318 12:43:34.271515 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:34.271960 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | unable to find current IP address of domain ha-328109-m02 in network mk-ha-328109
	I0318 12:43:34.272042 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | I0318 12:43:34.271943 1126085 retry.go:31] will retry after 217.561939ms: waiting for machine to come up
	I0318 12:43:34.491422 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:34.491931 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | unable to find current IP address of domain ha-328109-m02 in network mk-ha-328109
	I0318 12:43:34.491961 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | I0318 12:43:34.491888 1126085 retry.go:31] will retry after 331.528679ms: waiting for machine to come up
	I0318 12:43:34.825355 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:34.825869 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | unable to find current IP address of domain ha-328109-m02 in network mk-ha-328109
	I0318 12:43:34.825902 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | I0318 12:43:34.825819 1126085 retry.go:31] will retry after 333.550695ms: waiting for machine to come up
	I0318 12:43:35.161311 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:35.161753 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | unable to find current IP address of domain ha-328109-m02 in network mk-ha-328109
	I0318 12:43:35.161780 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | I0318 12:43:35.161669 1126085 retry.go:31] will retry after 412.760783ms: waiting for machine to come up
	I0318 12:43:35.576353 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:35.576818 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | unable to find current IP address of domain ha-328109-m02 in network mk-ha-328109
	I0318 12:43:35.576860 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | I0318 12:43:35.576756 1126085 retry.go:31] will retry after 592.586387ms: waiting for machine to come up
	I0318 12:43:36.170720 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:36.171261 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | unable to find current IP address of domain ha-328109-m02 in network mk-ha-328109
	I0318 12:43:36.171288 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | I0318 12:43:36.171227 1126085 retry.go:31] will retry after 796.14891ms: waiting for machine to come up
	I0318 12:43:36.969073 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:36.969526 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | unable to find current IP address of domain ha-328109-m02 in network mk-ha-328109
	I0318 12:43:36.969558 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | I0318 12:43:36.969475 1126085 retry.go:31] will retry after 1.038014819s: waiting for machine to come up
	I0318 12:43:38.008945 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:38.009370 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | unable to find current IP address of domain ha-328109-m02 in network mk-ha-328109
	I0318 12:43:38.009403 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | I0318 12:43:38.009329 1126085 retry.go:31] will retry after 1.268175144s: waiting for machine to come up
	I0318 12:43:39.279858 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:39.280351 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | unable to find current IP address of domain ha-328109-m02 in network mk-ha-328109
	I0318 12:43:39.280385 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | I0318 12:43:39.280304 1126085 retry.go:31] will retry after 1.56218765s: waiting for machine to come up
	I0318 12:43:40.845119 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:40.845518 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | unable to find current IP address of domain ha-328109-m02 in network mk-ha-328109
	I0318 12:43:40.845543 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | I0318 12:43:40.845472 1126085 retry.go:31] will retry after 2.041106676s: waiting for machine to come up
	I0318 12:43:42.888092 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:42.888602 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | unable to find current IP address of domain ha-328109-m02 in network mk-ha-328109
	I0318 12:43:42.888637 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | I0318 12:43:42.888540 1126085 retry.go:31] will retry after 1.790770419s: waiting for machine to come up
	I0318 12:43:44.681508 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:44.682058 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | unable to find current IP address of domain ha-328109-m02 in network mk-ha-328109
	I0318 12:43:44.682090 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | I0318 12:43:44.682013 1126085 retry.go:31] will retry after 2.583742639s: waiting for machine to come up
	I0318 12:43:47.268831 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:47.269314 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | unable to find current IP address of domain ha-328109-m02 in network mk-ha-328109
	I0318 12:43:47.269346 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | I0318 12:43:47.269239 1126085 retry.go:31] will retry after 3.343018853s: waiting for machine to come up
	I0318 12:43:50.615998 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:50.616403 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | unable to find current IP address of domain ha-328109-m02 in network mk-ha-328109
	I0318 12:43:50.616428 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | I0318 12:43:50.616358 1126085 retry.go:31] will retry after 4.746728365s: waiting for machine to come up
	I0318 12:43:55.366283 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:55.366789 1125718 main.go:141] libmachine: (ha-328109-m02) Found IP for machine: 192.168.39.246
	I0318 12:43:55.366830 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has current primary IP address 192.168.39.246 and MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:55.366836 1125718 main.go:141] libmachine: (ha-328109-m02) Reserving static IP address...
	I0318 12:43:55.367161 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | unable to find host DHCP lease matching {name: "ha-328109-m02", mac: "52:54:00:8c:b0:42", ip: "192.168.39.246"} in network mk-ha-328109
	I0318 12:43:55.441786 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | Getting to WaitForSSH function...
	I0318 12:43:55.441829 1125718 main.go:141] libmachine: (ha-328109-m02) Reserved static IP address: 192.168.39.246
	I0318 12:43:55.441863 1125718 main.go:141] libmachine: (ha-328109-m02) Waiting for SSH to be available...
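
The "Waiting to get IP" phase above is a plain poll of the libvirt DHCP leases: each miss logs "will retry after …" with a delay that grows (217ms, 331ms, …, 4.7s) until the lease appears. A minimal sketch of that retry shape, assuming a hypothetical lookup callback:

    package sketch

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // waitForIP polls lookup until it reports an address or the timeout expires,
    // sleeping a growing, jittered interval between attempts, like the retry.go
    // lines in the log above.
    func waitForIP(lookup func() (string, bool), timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 200 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, ok := lookup(); ok {
    			return ip, nil
    		}
    		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay)))) // jitter
    		if delay < 5*time.Second {                                   // cap the growth
    			delay *= 2
    		}
    	}
    	return "", fmt.Errorf("timed out waiting for machine to come up")
    }
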
	I0318 12:43:55.444551 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:55.445016 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:8c:b0:42", ip: ""} in network mk-ha-328109
	I0318 12:43:55.445047 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | unable to find defined IP address of network mk-ha-328109 interface with MAC address 52:54:00:8c:b0:42
	I0318 12:43:55.445157 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | Using SSH client type: external
	I0318 12:43:55.445200 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m02/id_rsa (-rw-------)
	I0318 12:43:55.445235 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 12:43:55.445250 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | About to run SSH command:
	I0318 12:43:55.445277 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | exit 0
	I0318 12:43:55.448798 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | SSH cmd err, output: exit status 255: 
	I0318 12:43:55.448821 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0318 12:43:55.448828 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | command : exit 0
	I0318 12:43:55.448833 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | err     : exit status 255
	I0318 12:43:55.448845 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | output  : 
	I0318 12:43:58.449369 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | Getting to WaitForSSH function...
	I0318 12:43:58.452205 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:58.452685 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:b0:42", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:43:48 +0000 UTC Type:0 Mac:52:54:00:8c:b0:42 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-328109-m02 Clientid:01:52:54:00:8c:b0:42}
	I0318 12:43:58.452724 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:58.452851 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | Using SSH client type: external
	I0318 12:43:58.452880 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m02/id_rsa (-rw-------)
	I0318 12:43:58.452918 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.246 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 12:43:58.452930 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | About to run SSH command:
	I0318 12:43:58.452945 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | exit 0
	I0318 12:43:58.580414 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | SSH cmd err, output: <nil>: 
	I0318 12:43:58.580656 1125718 main.go:141] libmachine: (ha-328109-m02) KVM machine creation complete!
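
WaitForSSH above shells out to the system ssh client with the options shown in the log and simply runs `exit 0`; the first probe fails with exit status 255 because sshd inside the guest is not up yet, and the probe three seconds later succeeds. A rough sketch of that probe loop (the cadence and attempt count are assumptions):

    package sketch

    import (
    	"os/exec"
    	"time"
    )

    // probeSSH runs `exit 0` on the guest via the external ssh binary, mirroring
    // the option list visible in the WaitForSSH log lines above.
    func probeSSH(ip, keyPath string) error {
    	args := []string{
    		"-F", "/dev/null",
    		"-o", "ConnectionAttempts=3",
    		"-o", "ConnectTimeout=10",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "IdentitiesOnly=yes",
    		"-i", keyPath,
    		"-p", "22",
    		"docker@" + ip,
    		"exit 0",
    	}
    	return exec.Command("/usr/bin/ssh", args...).Run()
    }

    // waitForSSH retries the probe until it succeeds or the attempts run out.
    func waitForSSH(ip, keyPath string, attempts int) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = probeSSH(ip, keyPath); err == nil {
    			return nil
    		}
    		time.Sleep(3 * time.Second)
    	}
    	return err
    }
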
	I0318 12:43:58.581324 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetConfigRaw
	I0318 12:43:58.581918 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .DriverName
	I0318 12:43:58.582151 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .DriverName
	I0318 12:43:58.582359 1125718 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0318 12:43:58.582374 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetState
	I0318 12:43:58.583601 1125718 main.go:141] libmachine: Detecting operating system of created instance...
	I0318 12:43:58.583615 1125718 main.go:141] libmachine: Waiting for SSH to be available...
	I0318 12:43:58.583621 1125718 main.go:141] libmachine: Getting to WaitForSSH function...
	I0318 12:43:58.583626 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHHostname
	I0318 12:43:58.585891 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:58.586214 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:b0:42", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:43:48 +0000 UTC Type:0 Mac:52:54:00:8c:b0:42 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-328109-m02 Clientid:01:52:54:00:8c:b0:42}
	I0318 12:43:58.586237 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:58.586367 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHPort
	I0318 12:43:58.586545 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHKeyPath
	I0318 12:43:58.586714 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHKeyPath
	I0318 12:43:58.586866 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHUsername
	I0318 12:43:58.587053 1125718 main.go:141] libmachine: Using SSH client type: native
	I0318 12:43:58.587334 1125718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0318 12:43:58.587350 1125718 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0318 12:43:58.699683 1125718 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 12:43:58.699711 1125718 main.go:141] libmachine: Detecting the provisioner...
	I0318 12:43:58.699719 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHHostname
	I0318 12:43:58.702567 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:58.702974 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:b0:42", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:43:48 +0000 UTC Type:0 Mac:52:54:00:8c:b0:42 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-328109-m02 Clientid:01:52:54:00:8c:b0:42}
	I0318 12:43:58.703003 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:58.703170 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHPort
	I0318 12:43:58.703387 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHKeyPath
	I0318 12:43:58.703565 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHKeyPath
	I0318 12:43:58.703681 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHUsername
	I0318 12:43:58.703904 1125718 main.go:141] libmachine: Using SSH client type: native
	I0318 12:43:58.704084 1125718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0318 12:43:58.704096 1125718 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0318 12:43:58.818089 1125718 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0318 12:43:58.818186 1125718 main.go:141] libmachine: found compatible host: buildroot
	I0318 12:43:58.818196 1125718 main.go:141] libmachine: Provisioning with buildroot...
	I0318 12:43:58.818204 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetMachineName
	I0318 12:43:58.818569 1125718 buildroot.go:166] provisioning hostname "ha-328109-m02"
	I0318 12:43:58.818600 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetMachineName
	I0318 12:43:58.818843 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHHostname
	I0318 12:43:58.822672 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:58.823042 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:b0:42", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:43:48 +0000 UTC Type:0 Mac:52:54:00:8c:b0:42 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-328109-m02 Clientid:01:52:54:00:8c:b0:42}
	I0318 12:43:58.823073 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:58.823212 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHPort
	I0318 12:43:58.823436 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHKeyPath
	I0318 12:43:58.823604 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHKeyPath
	I0318 12:43:58.823736 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHUsername
	I0318 12:43:58.823940 1125718 main.go:141] libmachine: Using SSH client type: native
	I0318 12:43:58.824142 1125718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0318 12:43:58.824170 1125718 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-328109-m02 && echo "ha-328109-m02" | sudo tee /etc/hostname
	I0318 12:43:58.951811 1125718 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-328109-m02
	
	I0318 12:43:58.951850 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHHostname
	I0318 12:43:58.954600 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:58.955009 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:b0:42", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:43:48 +0000 UTC Type:0 Mac:52:54:00:8c:b0:42 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-328109-m02 Clientid:01:52:54:00:8c:b0:42}
	I0318 12:43:58.955043 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:58.955214 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHPort
	I0318 12:43:58.955446 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHKeyPath
	I0318 12:43:58.955594 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHKeyPath
	I0318 12:43:58.955701 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHUsername
	I0318 12:43:58.955835 1125718 main.go:141] libmachine: Using SSH client type: native
	I0318 12:43:58.956041 1125718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0318 12:43:58.956067 1125718 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-328109-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-328109-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-328109-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 12:43:59.078710 1125718 main.go:141] libmachine: SSH cmd err, output: <nil>: 
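
The two SSH commands above set the guest hostname and keep /etc/hosts consistent with it: if no line already ends in the hostname, an existing 127.0.1.1 entry is rewritten in place, otherwise one is appended. On the Go side those command strings are assembled roughly like this (the helper name is hypothetical):

    package sketch

    import "fmt"

    // hostnameCommands returns the two shell commands run over SSH in the log above.
    func hostnameCommands(hostname string) []string {
    	setHostname := fmt.Sprintf(
    		"sudo hostname %s && echo %q | sudo tee /etc/hostname", hostname, hostname)

    	// Idempotent /etc/hosts update: rewrite an existing 127.0.1.1 line or append one.
    	fixHosts := fmt.Sprintf(`
    		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
    			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
    			else
    				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
    			fi
    		fi`, hostname)

    	return []string{setHostname, fixHosts}
    }
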
	I0318 12:43:59.078758 1125718 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18429-1106816/.minikube CaCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18429-1106816/.minikube}
	I0318 12:43:59.078784 1125718 buildroot.go:174] setting up certificates
	I0318 12:43:59.078799 1125718 provision.go:84] configureAuth start
	I0318 12:43:59.078817 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetMachineName
	I0318 12:43:59.079120 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetIP
	I0318 12:43:59.082111 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:59.082579 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:b0:42", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:43:48 +0000 UTC Type:0 Mac:52:54:00:8c:b0:42 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-328109-m02 Clientid:01:52:54:00:8c:b0:42}
	I0318 12:43:59.082610 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:59.082758 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHHostname
	I0318 12:43:59.085173 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:59.085539 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:b0:42", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:43:48 +0000 UTC Type:0 Mac:52:54:00:8c:b0:42 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-328109-m02 Clientid:01:52:54:00:8c:b0:42}
	I0318 12:43:59.085568 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:59.085721 1125718 provision.go:143] copyHostCerts
	I0318 12:43:59.085758 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 12:43:59.085808 1125718 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem, removing ...
	I0318 12:43:59.085822 1125718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 12:43:59.085923 1125718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem (1078 bytes)
	I0318 12:43:59.086039 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 12:43:59.086066 1125718 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem, removing ...
	I0318 12:43:59.086075 1125718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 12:43:59.086115 1125718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem (1123 bytes)
	I0318 12:43:59.086194 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 12:43:59.086221 1125718 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem, removing ...
	I0318 12:43:59.086227 1125718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 12:43:59.086264 1125718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem (1679 bytes)
	I0318 12:43:59.086350 1125718 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem org=jenkins.ha-328109-m02 san=[127.0.0.1 192.168.39.246 ha-328109-m02 localhost minikube]
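
configureAuth above first copies the host CA material into .minikube and then issues a per-node server certificate whose SANs are exactly the list in the log (127.0.0.1, the node IP, the node name, localhost, minikube). A compact, hedged sketch of issuing such a cert with crypto/x509 (the key size, serial, and validity below are assumptions, not minikube's values):

    package sketch

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"math/big"
    	"net"
    	"time"
    )

    // newServerCert issues a server certificate signed by the given CA, carrying
    // the IP and DNS SANs that appear in the "generating server cert" line above.
    func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, org string) ([]byte, *rsa.PrivateKey, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{Organization: []string{org}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // validity is an assumption
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.246")},
    		DNSNames:     []string{"ha-328109-m02", "localhost", "minikube"},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
    	if err != nil {
    		return nil, nil, err
    	}
    	return der, key, nil
    }
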
	I0318 12:43:59.164641 1125718 provision.go:177] copyRemoteCerts
	I0318 12:43:59.164719 1125718 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 12:43:59.164752 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHHostname
	I0318 12:43:59.167335 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:59.167761 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:b0:42", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:43:48 +0000 UTC Type:0 Mac:52:54:00:8c:b0:42 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-328109-m02 Clientid:01:52:54:00:8c:b0:42}
	I0318 12:43:59.167800 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:59.167941 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHPort
	I0318 12:43:59.168138 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHKeyPath
	I0318 12:43:59.168266 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHUsername
	I0318 12:43:59.168392 1125718 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m02/id_rsa Username:docker}
	I0318 12:43:59.256598 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0318 12:43:59.256686 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 12:43:59.284362 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0318 12:43:59.284460 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0318 12:43:59.310409 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0318 12:43:59.310498 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 12:43:59.336202 1125718 provision.go:87] duration metric: took 257.380191ms to configureAuth
	I0318 12:43:59.336242 1125718 buildroot.go:189] setting minikube options for container-runtime
	I0318 12:43:59.336462 1125718 config.go:182] Loaded profile config "ha-328109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:43:59.336584 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHHostname
	I0318 12:43:59.339229 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:59.339572 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:b0:42", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:43:48 +0000 UTC Type:0 Mac:52:54:00:8c:b0:42 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-328109-m02 Clientid:01:52:54:00:8c:b0:42}
	I0318 12:43:59.339607 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:59.339859 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHPort
	I0318 12:43:59.340064 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHKeyPath
	I0318 12:43:59.340234 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHKeyPath
	I0318 12:43:59.340378 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHUsername
	I0318 12:43:59.340538 1125718 main.go:141] libmachine: Using SSH client type: native
	I0318 12:43:59.340707 1125718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0318 12:43:59.340727 1125718 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 12:43:59.627029 1125718 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
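
The `%!s(MISSING)` in the command above is only the logger mangling a format template; the command actually sent over SSH writes a sysconfig drop-in with CRIO_MINIKUBE_OPTIONS and restarts CRI-O, and the output confirms the file contents. Roughly how such a command string can be built (the exact template is an assumption; echo replaces the logged printf purely to keep quoting simple):

    package sketch

    import "fmt"

    // crioOptionsCommand builds a command equivalent to the one logged above: write
    // the extra CRI-O flags to /etc/sysconfig/crio.minikube and restart the service.
    func crioOptionsCommand(opts string) string {
    	return fmt.Sprintf(
    		"sudo mkdir -p /etc/sysconfig && echo \"CRIO_MINIKUBE_OPTIONS='%s'\" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio",
    		opts)
    }

    // e.g. crioOptionsCommand("--insecure-registry 10.96.0.0/12 ")
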
	
	I0318 12:43:59.627063 1125718 main.go:141] libmachine: Checking connection to Docker...
	I0318 12:43:59.627071 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetURL
	I0318 12:43:59.628413 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | Using libvirt version 6000000
	I0318 12:43:59.630858 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:59.631225 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:b0:42", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:43:48 +0000 UTC Type:0 Mac:52:54:00:8c:b0:42 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-328109-m02 Clientid:01:52:54:00:8c:b0:42}
	I0318 12:43:59.631268 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:59.631491 1125718 main.go:141] libmachine: Docker is up and running!
	I0318 12:43:59.631511 1125718 main.go:141] libmachine: Reticulating splines...
	I0318 12:43:59.631519 1125718 client.go:171] duration metric: took 26.900616699s to LocalClient.Create
	I0318 12:43:59.631542 1125718 start.go:167] duration metric: took 26.900683726s to libmachine.API.Create "ha-328109"
	I0318 12:43:59.631553 1125718 start.go:293] postStartSetup for "ha-328109-m02" (driver="kvm2")
	I0318 12:43:59.631563 1125718 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 12:43:59.631591 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .DriverName
	I0318 12:43:59.631837 1125718 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 12:43:59.631866 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHHostname
	I0318 12:43:59.634073 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:59.634465 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:b0:42", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:43:48 +0000 UTC Type:0 Mac:52:54:00:8c:b0:42 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-328109-m02 Clientid:01:52:54:00:8c:b0:42}
	I0318 12:43:59.634493 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:59.634672 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHPort
	I0318 12:43:59.634838 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHKeyPath
	I0318 12:43:59.635006 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHUsername
	I0318 12:43:59.635141 1125718 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m02/id_rsa Username:docker}
	I0318 12:43:59.719880 1125718 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 12:43:59.724734 1125718 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 12:43:59.724765 1125718 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/addons for local assets ...
	I0318 12:43:59.724836 1125718 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/files for local assets ...
	I0318 12:43:59.724941 1125718 filesync.go:149] local asset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> 11141362.pem in /etc/ssl/certs
	I0318 12:43:59.724955 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> /etc/ssl/certs/11141362.pem
	I0318 12:43:59.725063 1125718 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 12:43:59.735849 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 12:43:59.763701 1125718 start.go:296] duration metric: took 132.132457ms for postStartSetup
	I0318 12:43:59.763785 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetConfigRaw
	I0318 12:43:59.764500 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetIP
	I0318 12:43:59.766957 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:59.767368 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:b0:42", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:43:48 +0000 UTC Type:0 Mac:52:54:00:8c:b0:42 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-328109-m02 Clientid:01:52:54:00:8c:b0:42}
	I0318 12:43:59.767398 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:59.767661 1125718 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/config.json ...
	I0318 12:43:59.767873 1125718 start.go:128] duration metric: took 27.054886871s to createHost
	I0318 12:43:59.767902 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHHostname
	I0318 12:43:59.770002 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:59.770239 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:b0:42", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:43:48 +0000 UTC Type:0 Mac:52:54:00:8c:b0:42 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-328109-m02 Clientid:01:52:54:00:8c:b0:42}
	I0318 12:43:59.770265 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:59.770374 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHPort
	I0318 12:43:59.770568 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHKeyPath
	I0318 12:43:59.770733 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHKeyPath
	I0318 12:43:59.770854 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHUsername
	I0318 12:43:59.771011 1125718 main.go:141] libmachine: Using SSH client type: native
	I0318 12:43:59.771179 1125718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.246 22 <nil> <nil>}
	I0318 12:43:59.771190 1125718 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 12:43:59.881610 1125718 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710765839.855499588
	
	I0318 12:43:59.881635 1125718 fix.go:216] guest clock: 1710765839.855499588
	I0318 12:43:59.881643 1125718 fix.go:229] Guest: 2024-03-18 12:43:59.855499588 +0000 UTC Remote: 2024-03-18 12:43:59.767886325 +0000 UTC m=+86.924566388 (delta=87.613263ms)
	I0318 12:43:59.881660 1125718 fix.go:200] guest clock delta is within tolerance: 87.613263ms
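
The guest-clock check above runs `date +%s.%N` over SSH (the `%!s(MISSING).%!N(MISSING)` is again just the logger), parses the epoch-seconds.nanoseconds reply, and compares it with the host-side timestamp; here the 87.6ms delta is within tolerance, so no adjustment is needed. A small sketch of the parse-and-compare step (the tolerance value is an assumption):

    package sketch

    import (
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock turns `date +%s.%N` output such as "1710765839.855499588"
    // into a time.Time.
    func parseGuestClock(s string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		frac := parts[1] + strings.Repeat("0", 9) // pad, then keep nanosecond precision
    		if nsec, err = strconv.ParseInt(frac[:9], 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    // clockDelta reports the absolute guest/host skew and whether it is within
    // the given tolerance (no clock adjustment is needed when it is).
    func clockDelta(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta, delta <= tolerance
    }
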
	I0318 12:43:59.881665 1125718 start.go:83] releasing machines lock for "ha-328109-m02", held for 27.168768398s
	I0318 12:43:59.881687 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .DriverName
	I0318 12:43:59.881991 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetIP
	I0318 12:43:59.884387 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:59.884709 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:b0:42", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:43:48 +0000 UTC Type:0 Mac:52:54:00:8c:b0:42 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-328109-m02 Clientid:01:52:54:00:8c:b0:42}
	I0318 12:43:59.884738 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:59.887185 1125718 out.go:177] * Found network options:
	I0318 12:43:59.888664 1125718 out.go:177]   - NO_PROXY=192.168.39.253
	W0318 12:43:59.890067 1125718 proxy.go:119] fail to check proxy env: Error ip not in block
	I0318 12:43:59.890093 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .DriverName
	I0318 12:43:59.890590 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .DriverName
	I0318 12:43:59.890776 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .DriverName
	I0318 12:43:59.890894 1125718 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 12:43:59.890937 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHHostname
	W0318 12:43:59.891024 1125718 proxy.go:119] fail to check proxy env: Error ip not in block
	I0318 12:43:59.891121 1125718 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 12:43:59.891150 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHHostname
	I0318 12:43:59.893802 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:59.894029 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:59.894197 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:b0:42", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:43:48 +0000 UTC Type:0 Mac:52:54:00:8c:b0:42 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-328109-m02 Clientid:01:52:54:00:8c:b0:42}
	I0318 12:43:59.894227 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:59.894402 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:b0:42", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:43:48 +0000 UTC Type:0 Mac:52:54:00:8c:b0:42 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-328109-m02 Clientid:01:52:54:00:8c:b0:42}
	I0318 12:43:59.894417 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHPort
	I0318 12:43:59.894424 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:43:59.894565 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHPort
	I0318 12:43:59.894641 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHKeyPath
	I0318 12:43:59.894716 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHKeyPath
	I0318 12:43:59.894837 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHUsername
	I0318 12:43:59.894882 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHUsername
	I0318 12:43:59.894982 1125718 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m02/id_rsa Username:docker}
	I0318 12:43:59.895018 1125718 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m02/id_rsa Username:docker}
	I0318 12:44:00.137444 1125718 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 12:44:00.144976 1125718 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 12:44:00.145065 1125718 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 12:44:00.164076 1125718 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 12:44:00.164108 1125718 start.go:494] detecting cgroup driver to use...
	I0318 12:44:00.164200 1125718 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 12:44:00.182516 1125718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 12:44:00.197623 1125718 docker.go:217] disabling cri-docker service (if available) ...
	I0318 12:44:00.197696 1125718 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 12:44:00.211897 1125718 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 12:44:00.227180 1125718 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 12:44:00.345865 1125718 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 12:44:00.502719 1125718 docker.go:233] disabling docker service ...
	I0318 12:44:00.502809 1125718 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 12:44:00.519062 1125718 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 12:44:00.533347 1125718 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 12:44:00.696212 1125718 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 12:44:00.847684 1125718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 12:44:00.863668 1125718 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 12:44:00.884184 1125718 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 12:44:00.884265 1125718 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 12:44:00.896228 1125718 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 12:44:00.896307 1125718 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 12:44:00.908261 1125718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 12:44:00.920135 1125718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 12:44:00.931813 1125718 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 12:44:00.943845 1125718 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 12:44:00.954328 1125718 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 12:44:00.954392 1125718 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 12:44:00.968796 1125718 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 12:44:00.980362 1125718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:44:01.108291 1125718 ssh_runner.go:195] Run: sudo systemctl restart crio
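
Between 12:44:00.86 and 12:44:01.26 the runner switches the node's container runtime over to CRI-O: it points crictl at the CRI-O socket, sets the pause image and the cgroupfs cgroup manager in /etc/crio/crio.conf.d/02-crio.conf, removes the stale CNI config, loads br_netfilter (after the harmless sysctl probe failure above), enables IP forwarding, and restarts crio. A condensed sketch of that command list (each entry is run one-by-one over SSH in the real code; echo replaces the logged printf purely for quoting simplicity):

    package sketch

    import "fmt"

    // crioSetupCommands lists the remote commands visible in the log above, in order.
    func crioSetupCommands(pauseImage string) []string {
    	return []string{
    		`/bin/bash -c "sudo mkdir -p /etc && echo 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml"`,
    		fmt.Sprintf(`sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = \"%s\"|' /etc/crio/crio.conf.d/02-crio.conf"`, pauseImage),
    		`sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = \"cgroupfs\"|' /etc/crio/crio.conf.d/02-crio.conf"`,
    		`sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"`,
    		`sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = \"pod\"' /etc/crio/crio.conf.d/02-crio.conf"`,
    		`sh -c "sudo rm -rf /etc/cni/net.mk"`,
    		"sudo modprobe br_netfilter",
    		`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`,
    		"sudo systemctl daemon-reload",
    		"sudo systemctl restart crio",
    	}
    }
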
	I0318 12:44:01.258438 1125718 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 12:44:01.258528 1125718 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 12:44:01.264184 1125718 start.go:562] Will wait 60s for crictl version
	I0318 12:44:01.264242 1125718 ssh_runner.go:195] Run: which crictl
	I0318 12:44:01.268679 1125718 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 12:44:01.309083 1125718 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 12:44:01.309175 1125718 ssh_runner.go:195] Run: crio --version
	I0318 12:44:01.341688 1125718 ssh_runner.go:195] Run: crio --version
	I0318 12:44:01.380685 1125718 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 12:44:01.382079 1125718 out.go:177]   - env NO_PROXY=192.168.39.253
	I0318 12:44:01.383399 1125718 main.go:141] libmachine: (ha-328109-m02) Calling .GetIP
	I0318 12:44:01.386301 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:44:01.386676 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:b0:42", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:43:48 +0000 UTC Type:0 Mac:52:54:00:8c:b0:42 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-328109-m02 Clientid:01:52:54:00:8c:b0:42}
	I0318 12:44:01.386723 1125718 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:44:01.386967 1125718 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0318 12:44:01.391996 1125718 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 12:44:01.409426 1125718 mustload.go:65] Loading cluster: ha-328109
	I0318 12:44:01.409694 1125718 config.go:182] Loaded profile config "ha-328109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:44:01.410161 1125718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:44:01.410228 1125718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:44:01.425513 1125718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45969
	I0318 12:44:01.425961 1125718 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:44:01.426459 1125718 main.go:141] libmachine: Using API Version  1
	I0318 12:44:01.426481 1125718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:44:01.426843 1125718 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:44:01.427092 1125718 main.go:141] libmachine: (ha-328109) Calling .GetState
	I0318 12:44:01.428595 1125718 host.go:66] Checking if "ha-328109" exists ...
	I0318 12:44:01.428929 1125718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:44:01.428971 1125718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:44:01.443790 1125718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38301
	I0318 12:44:01.444217 1125718 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:44:01.444747 1125718 main.go:141] libmachine: Using API Version  1
	I0318 12:44:01.444767 1125718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:44:01.445079 1125718 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:44:01.445301 1125718 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:44:01.445442 1125718 certs.go:68] Setting up /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109 for IP: 192.168.39.246
	I0318 12:44:01.445455 1125718 certs.go:194] generating shared ca certs ...
	I0318 12:44:01.445471 1125718 certs.go:226] acquiring lock for ca certs: {Name:mka24dbce78b8549c3bcfe7842863cce2dcda207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:44:01.445601 1125718 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key
	I0318 12:44:01.445640 1125718 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key
	I0318 12:44:01.445657 1125718 certs.go:256] generating profile certs ...
	I0318 12:44:01.445745 1125718 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/client.key
	I0318 12:44:01.445770 1125718 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key.57b3cb16
	I0318 12:44:01.445785 1125718 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt.57b3cb16 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.253 192.168.39.246 192.168.39.254]
	I0318 12:44:01.606268 1125718 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt.57b3cb16 ...
	I0318 12:44:01.606317 1125718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt.57b3cb16: {Name:mk2a28886f0cf302e67691064ed3f588dbab180f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:44:01.606591 1125718 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key.57b3cb16 ...
	I0318 12:44:01.606622 1125718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key.57b3cb16: {Name:mkd5a53db774063ba21335a4cd03a90a402d3183 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:44:01.606756 1125718 certs.go:381] copying /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt.57b3cb16 -> /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt
	I0318 12:44:01.606948 1125718 certs.go:385] copying /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key.57b3cb16 -> /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key
	I0318 12:44:01.607103 1125718 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/proxy-client.key
	I0318 12:44:01.607121 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 12:44:01.607134 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0318 12:44:01.607149 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 12:44:01.607162 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 12:44:01.607175 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0318 12:44:01.607189 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0318 12:44:01.607207 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0318 12:44:01.607219 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0318 12:44:01.607268 1125718 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem (1338 bytes)
	W0318 12:44:01.607296 1125718 certs.go:480] ignoring /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136_empty.pem, impossibly tiny 0 bytes
	I0318 12:44:01.607307 1125718 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 12:44:01.607327 1125718 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem (1078 bytes)
	I0318 12:44:01.607350 1125718 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem (1123 bytes)
	I0318 12:44:01.607374 1125718 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem (1679 bytes)
	I0318 12:44:01.607416 1125718 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 12:44:01.607442 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:44:01.607456 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem -> /usr/share/ca-certificates/1114136.pem
	I0318 12:44:01.607470 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> /usr/share/ca-certificates/11141362.pem
	I0318 12:44:01.607503 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:44:01.610632 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:44:01.611078 1125718 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:44:01.611115 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:44:01.611239 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:44:01.611436 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:44:01.611610 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:44:01.611782 1125718 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109/id_rsa Username:docker}
	I0318 12:44:01.684688 1125718 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0318 12:44:01.690075 1125718 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0318 12:44:01.703204 1125718 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0318 12:44:01.707974 1125718 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0318 12:44:01.720905 1125718 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0318 12:44:01.726186 1125718 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0318 12:44:01.740107 1125718 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0318 12:44:01.744851 1125718 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0318 12:44:01.758148 1125718 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0318 12:44:01.762774 1125718 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0318 12:44:01.775671 1125718 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0318 12:44:01.786778 1125718 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0318 12:44:01.799266 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 12:44:01.827937 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 12:44:01.856117 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 12:44:01.883951 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 12:44:01.911446 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0318 12:44:01.939348 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 12:44:01.967444 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 12:44:01.994558 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 12:44:02.021422 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 12:44:02.048126 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem --> /usr/share/ca-certificates/1114136.pem (1338 bytes)
	I0318 12:44:02.076337 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /usr/share/ca-certificates/11141362.pem (1708 bytes)
	I0318 12:44:02.105324 1125718 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0318 12:44:02.124464 1125718 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0318 12:44:02.144320 1125718 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0318 12:44:02.165003 1125718 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0318 12:44:02.185268 1125718 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0318 12:44:02.204456 1125718 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0318 12:44:02.223428 1125718 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0318 12:44:02.244452 1125718 ssh_runner.go:195] Run: openssl version
	I0318 12:44:02.252358 1125718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 12:44:02.265498 1125718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:44:02.270971 1125718 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:44:02.271039 1125718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:44:02.277641 1125718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 12:44:02.289405 1125718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1114136.pem && ln -fs /usr/share/ca-certificates/1114136.pem /etc/ssl/certs/1114136.pem"
	I0318 12:44:02.301135 1125718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1114136.pem
	I0318 12:44:02.306218 1125718 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:26 /usr/share/ca-certificates/1114136.pem
	I0318 12:44:02.306278 1125718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1114136.pem
	I0318 12:44:02.312875 1125718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1114136.pem /etc/ssl/certs/51391683.0"
	I0318 12:44:02.325791 1125718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11141362.pem && ln -fs /usr/share/ca-certificates/11141362.pem /etc/ssl/certs/11141362.pem"
	I0318 12:44:02.337824 1125718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11141362.pem
	I0318 12:44:02.343152 1125718 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:26 /usr/share/ca-certificates/11141362.pem
	I0318 12:44:02.343221 1125718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11141362.pem
	I0318 12:44:02.349879 1125718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11141362.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 12:44:02.362348 1125718 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 12:44:02.367336 1125718 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 12:44:02.367487 1125718 kubeadm.go:928] updating node {m02 192.168.39.246 8443 v1.28.4 crio true true} ...
	I0318 12:44:02.367627 1125718 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-328109-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.246
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-328109 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 12:44:02.367700 1125718 kube-vip.go:111] generating kube-vip config ...
	I0318 12:44:02.367755 1125718 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0318 12:44:02.388716 1125718 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0318 12:44:02.388806 1125718 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0318 12:44:02.388861 1125718 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 12:44:02.399783 1125718 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0318 12:44:02.399848 1125718 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0318 12:44:02.410764 1125718 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/linux/amd64/v1.28.4/kubeadm
	I0318 12:44:02.410796 1125718 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0318 12:44:02.410825 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0318 12:44:02.410823 1125718 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/linux/amd64/v1.28.4/kubelet
	I0318 12:44:02.410912 1125718 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0318 12:44:02.416963 1125718 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0318 12:44:02.416999 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0318 12:44:03.510507 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0318 12:44:03.510618 1125718 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0318 12:44:03.517672 1125718 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0318 12:44:03.517711 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0318 12:44:04.164904 1125718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 12:44:04.181748 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0318 12:44:04.181880 1125718 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0318 12:44:04.187045 1125718 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0318 12:44:04.187095 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
	I0318 12:44:04.702348 1125718 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0318 12:44:04.713314 1125718 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0318 12:44:04.732471 1125718 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 12:44:04.751338 1125718 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0318 12:44:04.769620 1125718 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0318 12:44:04.774028 1125718 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 12:44:04.787446 1125718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:44:04.926444 1125718 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 12:44:04.946200 1125718 host.go:66] Checking if "ha-328109" exists ...
	I0318 12:44:04.946580 1125718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:44:04.946658 1125718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:44:04.962384 1125718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33939
	I0318 12:44:04.962863 1125718 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:44:04.963381 1125718 main.go:141] libmachine: Using API Version  1
	I0318 12:44:04.963405 1125718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:44:04.963710 1125718 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:44:04.963898 1125718 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:44:04.964061 1125718 start.go:316] joinCluster: &{Name:ha-328109 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-328109 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.253 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 12:44:04.964158 1125718 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0318 12:44:04.964186 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:44:04.967207 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:44:04.967685 1125718 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:44:04.967718 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:44:04.967824 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:44:04.968018 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:44:04.968200 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:44:04.968357 1125718 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109/id_rsa Username:docker}
	I0318 12:44:05.151602 1125718 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 12:44:05.151654 1125718 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zv8kcu.ksllqv02tca6xo0j --discovery-token-ca-cert-hash sha256:a1344f0f3711f58fc1cd7b9626e9cee7b8515c38b8838e5f1a0d7979c7202ddf --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-328109-m02 --control-plane --apiserver-advertise-address=192.168.39.246 --apiserver-bind-port=8443"
	I0318 12:44:39.138740 1125718 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zv8kcu.ksllqv02tca6xo0j --discovery-token-ca-cert-hash sha256:a1344f0f3711f58fc1cd7b9626e9cee7b8515c38b8838e5f1a0d7979c7202ddf --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-328109-m02 --control-plane --apiserver-advertise-address=192.168.39.246 --apiserver-bind-port=8443": (33.987054913s)
	I0318 12:44:39.138786 1125718 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0318 12:44:39.701427 1125718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-328109-m02 minikube.k8s.io/updated_at=2024_03_18T12_44_39_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a minikube.k8s.io/name=ha-328109 minikube.k8s.io/primary=false
	I0318 12:44:39.834529 1125718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-328109-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0318 12:44:39.989005 1125718 start.go:318] duration metric: took 35.024938427s to joinCluster
	I0318 12:44:39.989090 1125718 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 12:44:39.990643 1125718 out.go:177] * Verifying Kubernetes components...
	I0318 12:44:39.989430 1125718 config.go:182] Loaded profile config "ha-328109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:44:39.992082 1125718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:44:40.172689 1125718 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 12:44:40.191427 1125718 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 12:44:40.191790 1125718 kapi.go:59] client config for ha-328109: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/client.crt", KeyFile:"/home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/client.key", CAFile:"/home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c57de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0318 12:44:40.191918 1125718 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.253:8443
	I0318 12:44:40.192125 1125718 node_ready.go:35] waiting up to 6m0s for node "ha-328109-m02" to be "Ready" ...
	I0318 12:44:40.192223 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:40.192232 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:40.192240 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:40.192243 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:40.206292 1125718 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0318 12:44:40.692492 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:40.692529 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:40.692541 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:40.692545 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:40.698779 1125718 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 12:44:41.193389 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:41.193414 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:41.193422 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:41.193427 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:41.197225 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:41.692661 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:41.692692 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:41.692704 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:41.692710 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:41.696648 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:42.192751 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:42.192776 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:42.192784 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:42.192789 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:42.198091 1125718 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:44:42.198596 1125718 node_ready.go:53] node "ha-328109-m02" has status "Ready":"False"
	I0318 12:44:42.693055 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:42.693079 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:42.693087 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:42.693091 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:42.698215 1125718 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:44:43.192659 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:43.192683 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:43.192691 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:43.192696 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:43.197222 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:43.693379 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:43.693412 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:43.693424 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:43.693433 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:43.697459 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:44.192535 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:44.192568 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:44.192579 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:44.192583 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:44.196711 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:44.693164 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:44.693194 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:44.693208 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:44.693214 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:44.698004 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:44.698909 1125718 node_ready.go:53] node "ha-328109-m02" has status "Ready":"False"
	I0318 12:44:45.192741 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:45.192773 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:45.192785 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:45.192791 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:45.196128 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:45.692690 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:45.692720 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:45.692735 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:45.692741 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:45.699442 1125718 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 12:44:46.192344 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:46.192374 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:46.192395 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:46.192403 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:46.196573 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:46.692666 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:46.692690 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:46.692698 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:46.692702 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:46.696496 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:47.192551 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:47.192582 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:47.192593 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:47.192600 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:47.197378 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:47.198011 1125718 node_ready.go:53] node "ha-328109-m02" has status "Ready":"False"
	I0318 12:44:47.693331 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:47.693355 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:47.693363 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:47.693367 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:47.696989 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:47.697723 1125718 node_ready.go:49] node "ha-328109-m02" has status "Ready":"True"
	I0318 12:44:47.697751 1125718 node_ready.go:38] duration metric: took 7.505598296s for node "ha-328109-m02" to be "Ready" ...
	I0318 12:44:47.697763 1125718 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 12:44:47.697888 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods
	I0318 12:44:47.697902 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:47.697913 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:47.697920 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:47.702944 1125718 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:44:47.710789 1125718 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-c78nc" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:47.710880 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-c78nc
	I0318 12:44:47.710891 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:47.710898 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:47.710903 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:47.713878 1125718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:44:47.714652 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109
	I0318 12:44:47.714669 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:47.714676 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:47.714680 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:47.717722 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:47.718289 1125718 pod_ready.go:92] pod "coredns-5dd5756b68-c78nc" in "kube-system" namespace has status "Ready":"True"
	I0318 12:44:47.718309 1125718 pod_ready.go:81] duration metric: took 7.495849ms for pod "coredns-5dd5756b68-c78nc" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:47.718317 1125718 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-p5xgj" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:47.718372 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-p5xgj
	I0318 12:44:47.718383 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:47.718392 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:47.718397 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:47.721374 1125718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:44:47.722079 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109
	I0318 12:44:47.722096 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:47.722103 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:47.722106 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:47.725001 1125718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:44:47.725754 1125718 pod_ready.go:92] pod "coredns-5dd5756b68-p5xgj" in "kube-system" namespace has status "Ready":"True"
	I0318 12:44:47.725775 1125718 pod_ready.go:81] duration metric: took 7.449872ms for pod "coredns-5dd5756b68-p5xgj" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:47.725786 1125718 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-328109" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:47.725849 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/etcd-ha-328109
	I0318 12:44:47.725860 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:47.725869 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:47.725873 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:47.728740 1125718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:44:47.729338 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109
	I0318 12:44:47.729362 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:47.729372 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:47.729377 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:47.731600 1125718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:44:47.732193 1125718 pod_ready.go:92] pod "etcd-ha-328109" in "kube-system" namespace has status "Ready":"True"
	I0318 12:44:47.732216 1125718 pod_ready.go:81] duration metric: took 6.421921ms for pod "etcd-ha-328109" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:47.732226 1125718 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-328109-m02" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:47.732284 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/etcd-ha-328109-m02
	I0318 12:44:47.732294 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:47.732304 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:47.732310 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:47.735172 1125718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:44:47.736170 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:47.736184 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:47.736191 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:47.736194 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:47.738645 1125718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:44:48.232575 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/etcd-ha-328109-m02
	I0318 12:44:48.232600 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:48.232608 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:48.232612 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:48.236249 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:48.237186 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:48.237201 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:48.237208 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:48.237212 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:48.240192 1125718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:44:48.732775 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/etcd-ha-328109-m02
	I0318 12:44:48.732806 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:48.732817 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:48.732821 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:48.737420 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:48.738187 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:48.738204 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:48.738211 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:48.738215 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:48.742669 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:48.743335 1125718 pod_ready.go:92] pod "etcd-ha-328109-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 12:44:48.743361 1125718 pod_ready.go:81] duration metric: took 1.011124464s for pod "etcd-ha-328109-m02" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:48.743375 1125718 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-328109" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:48.743438 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-328109
	I0318 12:44:48.743449 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:48.743457 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:48.743460 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:48.746224 1125718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:44:48.894148 1125718 request.go:629] Waited for 147.344101ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/nodes/ha-328109
	I0318 12:44:48.894248 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109
	I0318 12:44:48.894255 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:48.894266 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:48.894277 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:48.897962 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:48.898694 1125718 pod_ready.go:92] pod "kube-apiserver-ha-328109" in "kube-system" namespace has status "Ready":"True"
	I0318 12:44:48.898715 1125718 pod_ready.go:81] duration metric: took 155.333585ms for pod "kube-apiserver-ha-328109" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:48.898724 1125718 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-328109-m02" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:49.094198 1125718 request.go:629] Waited for 195.388871ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-328109-m02
	I0318 12:44:49.094263 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-328109-m02
	I0318 12:44:49.094268 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:49.094275 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:49.094279 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:49.097625 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:49.293669 1125718 request.go:629] Waited for 195.236067ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:49.293765 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:49.293777 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:49.293786 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:49.293793 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:49.298217 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:49.299392 1125718 pod_ready.go:92] pod "kube-apiserver-ha-328109-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 12:44:49.299414 1125718 pod_ready.go:81] duration metric: took 400.680904ms for pod "kube-apiserver-ha-328109-m02" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:49.299426 1125718 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-328109" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:49.493454 1125718 request.go:629] Waited for 193.947293ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-328109
	I0318 12:44:49.493574 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-328109
	I0318 12:44:49.493590 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:49.493602 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:49.493608 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:49.497308 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:49.693548 1125718 request.go:629] Waited for 195.307312ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/nodes/ha-328109
	I0318 12:44:49.693644 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109
	I0318 12:44:49.693651 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:49.693661 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:49.693669 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:49.697135 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:49.697952 1125718 pod_ready.go:92] pod "kube-controller-manager-ha-328109" in "kube-system" namespace has status "Ready":"True"
	I0318 12:44:49.697973 1125718 pod_ready.go:81] duration metric: took 398.539609ms for pod "kube-controller-manager-ha-328109" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:49.697982 1125718 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-328109-m02" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:49.894099 1125718 request.go:629] Waited for 196.011338ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-328109-m02
	I0318 12:44:49.894162 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-328109-m02
	I0318 12:44:49.894168 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:49.894175 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:49.894180 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:49.898008 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:50.094307 1125718 request.go:629] Waited for 195.496656ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:50.094392 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:50.094402 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:50.094410 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:50.094417 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:50.097499 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:50.098204 1125718 pod_ready.go:92] pod "kube-controller-manager-ha-328109-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 12:44:50.098227 1125718 pod_ready.go:81] duration metric: took 400.237571ms for pod "kube-controller-manager-ha-328109-m02" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:50.098241 1125718 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7zgrx" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:50.294349 1125718 request.go:629] Waited for 196.013287ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7zgrx
	I0318 12:44:50.294441 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7zgrx
	I0318 12:44:50.294449 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:50.294463 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:50.294477 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:50.297723 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:50.493914 1125718 request.go:629] Waited for 195.4196ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:50.493994 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:50.494005 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:50.494021 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:50.494031 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:50.497664 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:50.498503 1125718 pod_ready.go:92] pod "kube-proxy-7zgrx" in "kube-system" namespace has status "Ready":"True"
	I0318 12:44:50.498521 1125718 pod_ready.go:81] duration metric: took 400.273288ms for pod "kube-proxy-7zgrx" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:50.498531 1125718 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dhz88" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:50.693698 1125718 request.go:629] Waited for 195.074788ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dhz88
	I0318 12:44:50.693758 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dhz88
	I0318 12:44:50.693764 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:50.693771 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:50.693777 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:50.698606 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:50.893836 1125718 request.go:629] Waited for 193.401828ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/nodes/ha-328109
	I0318 12:44:50.893896 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109
	I0318 12:44:50.893900 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:50.893908 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:50.893912 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:50.897629 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:50.898245 1125718 pod_ready.go:92] pod "kube-proxy-dhz88" in "kube-system" namespace has status "Ready":"True"
	I0318 12:44:50.898264 1125718 pod_ready.go:81] duration metric: took 399.727875ms for pod "kube-proxy-dhz88" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:50.898274 1125718 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-328109" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:51.093420 1125718 request.go:629] Waited for 195.052227ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-328109
	I0318 12:44:51.093485 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-328109
	I0318 12:44:51.093493 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:51.093505 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:51.093512 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:51.096646 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:51.293509 1125718 request.go:629] Waited for 196.299831ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/nodes/ha-328109
	I0318 12:44:51.293598 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109
	I0318 12:44:51.293607 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:51.293618 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:51.293630 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:51.297967 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:51.298750 1125718 pod_ready.go:92] pod "kube-scheduler-ha-328109" in "kube-system" namespace has status "Ready":"True"
	I0318 12:44:51.298772 1125718 pod_ready.go:81] duration metric: took 400.491192ms for pod "kube-scheduler-ha-328109" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:51.298781 1125718 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-328109-m02" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:51.493645 1125718 request.go:629] Waited for 194.786135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-328109-m02
	I0318 12:44:51.493711 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-328109-m02
	I0318 12:44:51.493718 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:51.493726 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:51.493731 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:51.497700 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:51.693421 1125718 request.go:629] Waited for 195.087932ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:51.693487 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:44:51.693492 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:51.693500 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:51.693504 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:51.697469 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:44:51.698151 1125718 pod_ready.go:92] pod "kube-scheduler-ha-328109-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 12:44:51.698187 1125718 pod_ready.go:81] duration metric: took 399.397805ms for pod "kube-scheduler-ha-328109-m02" in "kube-system" namespace to be "Ready" ...
	I0318 12:44:51.698202 1125718 pod_ready.go:38] duration metric: took 4.000391721s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
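
For context, the pod_ready.go wait shown above boils down to polling each kube-system pod until its Ready condition reports True. The following is a minimal client-go sketch of that check, not minikube's own code; the kubeconfig location and the hard-coded pod name are illustrative, taken from the log:

    // podready_sketch.go - illustrative only; minikube's pod_ready.go differs.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // RecommendedHomeFile is ~/.kube/config; the test uses the profile's kubeconfig instead.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll every 500ms, give up after 6 minutes - the same budget the log reports.
        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "kube-proxy-7zgrx", metav1.GetOptions{})
                if err != nil {
                    return false, nil // transient API errors: keep polling
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
        fmt.Println("pod ready:", err == nil)
    }
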
	I0318 12:44:51.698254 1125718 api_server.go:52] waiting for apiserver process to appear ...
	I0318 12:44:51.698314 1125718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 12:44:51.715057 1125718 api_server.go:72] duration metric: took 11.725914512s to wait for apiserver process to appear ...
	I0318 12:44:51.715080 1125718 api_server.go:88] waiting for apiserver healthz status ...
	I0318 12:44:51.715099 1125718 api_server.go:253] Checking apiserver healthz at https://192.168.39.253:8443/healthz ...
	I0318 12:44:51.722073 1125718 api_server.go:279] https://192.168.39.253:8443/healthz returned 200:
	ok
	I0318 12:44:51.722146 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/version
	I0318 12:44:51.722151 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:51.722159 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:51.722165 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:51.723736 1125718 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0318 12:44:51.723860 1125718 api_server.go:141] control plane version: v1.28.4
	I0318 12:44:51.723884 1125718 api_server.go:131] duration metric: took 8.796153ms to wait for apiserver health ...
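
The healthz and version probes above are plain HTTPS GETs against the control-plane endpoint. A rough Go equivalent is sketched below (TLS verification is disabled purely for the sketch; minikube itself trusts the cluster CA):

    // healthz_sketch.go - illustrative check of the apiserver /healthz endpoint.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.39.253:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
    }
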
	I0318 12:44:51.723895 1125718 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 12:44:51.894339 1125718 request.go:629] Waited for 170.357624ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods
	I0318 12:44:51.894406 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods
	I0318 12:44:51.894411 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:51.894419 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:51.894424 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:51.900782 1125718 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 12:44:51.905365 1125718 system_pods.go:59] 17 kube-system pods found
	I0318 12:44:51.905395 1125718 system_pods.go:61] "coredns-5dd5756b68-c78nc" [7c1159dc-6545-41a6-bb4a-75fdab519c9e] Running
	I0318 12:44:51.905400 1125718 system_pods.go:61] "coredns-5dd5756b68-p5xgj" [9a865f86-96cf-4687-9283-d2ebe5616d1a] Running
	I0318 12:44:51.905404 1125718 system_pods.go:61] "etcd-ha-328109" [46530523-a048-4fff-897d-1a59630b5533] Running
	I0318 12:44:51.905407 1125718 system_pods.go:61] "etcd-ha-328109-m02" [0ed8ba4d-7da4-4c6c-b545-5e8642214659] Running
	I0318 12:44:51.905410 1125718 system_pods.go:61] "kindnet-lc74t" [5fe4e41e-4ddd-4e39-b1e2-746a32489418] Running
	I0318 12:44:51.905413 1125718 system_pods.go:61] "kindnet-vnv5b" [fc2583b6-a5b3-4f53-bf54-6cc7611fc2a6] Running
	I0318 12:44:51.905417 1125718 system_pods.go:61] "kube-apiserver-ha-328109" [47b1b8fb-21f6-43d7-a607-4406dfec10b7] Running
	I0318 12:44:51.905420 1125718 system_pods.go:61] "kube-apiserver-ha-328109-m02" [fcd48f5d-2278-49f3-b4f0-0cad9ae74dc7] Running
	I0318 12:44:51.905423 1125718 system_pods.go:61] "kube-controller-manager-ha-328109" [ffef70fe-841f-41c7-a61b-bb205ce2c071] Running
	I0318 12:44:51.905426 1125718 system_pods.go:61] "kube-controller-manager-ha-328109-m02" [a5ecf731-7599-44e9-b20d-924bde2de123] Running
	I0318 12:44:51.905429 1125718 system_pods.go:61] "kube-proxy-7zgrx" [6244fa40-af4d-480b-9256-db89d78b1d74] Running
	I0318 12:44:51.905432 1125718 system_pods.go:61] "kube-proxy-dhz88" [afb0afad-2b88-4abb-9039-aaf9c64ad920] Running
	I0318 12:44:51.905434 1125718 system_pods.go:61] "kube-scheduler-ha-328109" [a32fb0b4-2621-47dd-bb05-abb2e4cf928e] Running
	I0318 12:44:51.905437 1125718 system_pods.go:61] "kube-scheduler-ha-328109-m02" [14246dc3-5f5f-4d43-954c-5959db738742] Running
	I0318 12:44:51.905439 1125718 system_pods.go:61] "kube-vip-ha-328109" [40c45da5-33e0-454b-8f4c-eca1d1ec3362] Running
	I0318 12:44:51.905441 1125718 system_pods.go:61] "kube-vip-ha-328109-m02" [0c0dc71f-79d7-48f0-8a4a-4480521e5705] Running
	I0318 12:44:51.905444 1125718 system_pods.go:61] "storage-provisioner" [90ce7ae6-4ac4-4c14-b2df-1a182f4d8086] Running
	I0318 12:44:51.905450 1125718 system_pods.go:74] duration metric: took 181.546965ms to wait for pod list to return data ...
	I0318 12:44:51.905457 1125718 default_sa.go:34] waiting for default service account to be created ...
	I0318 12:44:52.093964 1125718 request.go:629] Waited for 188.409787ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/namespaces/default/serviceaccounts
	I0318 12:44:52.094046 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/default/serviceaccounts
	I0318 12:44:52.094054 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:52.094065 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:52.094082 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:52.099899 1125718 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:44:52.100194 1125718 default_sa.go:45] found service account: "default"
	I0318 12:44:52.100223 1125718 default_sa.go:55] duration metric: took 194.758383ms for default service account to be created ...
	I0318 12:44:52.100236 1125718 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 12:44:52.293770 1125718 request.go:629] Waited for 193.416795ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods
	I0318 12:44:52.293850 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods
	I0318 12:44:52.293858 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:52.293869 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:52.293880 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:52.301716 1125718 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 12:44:52.307592 1125718 system_pods.go:86] 17 kube-system pods found
	I0318 12:44:52.307621 1125718 system_pods.go:89] "coredns-5dd5756b68-c78nc" [7c1159dc-6545-41a6-bb4a-75fdab519c9e] Running
	I0318 12:44:52.307626 1125718 system_pods.go:89] "coredns-5dd5756b68-p5xgj" [9a865f86-96cf-4687-9283-d2ebe5616d1a] Running
	I0318 12:44:52.307630 1125718 system_pods.go:89] "etcd-ha-328109" [46530523-a048-4fff-897d-1a59630b5533] Running
	I0318 12:44:52.307634 1125718 system_pods.go:89] "etcd-ha-328109-m02" [0ed8ba4d-7da4-4c6c-b545-5e8642214659] Running
	I0318 12:44:52.307638 1125718 system_pods.go:89] "kindnet-lc74t" [5fe4e41e-4ddd-4e39-b1e2-746a32489418] Running
	I0318 12:44:52.307642 1125718 system_pods.go:89] "kindnet-vnv5b" [fc2583b6-a5b3-4f53-bf54-6cc7611fc2a6] Running
	I0318 12:44:52.307646 1125718 system_pods.go:89] "kube-apiserver-ha-328109" [47b1b8fb-21f6-43d7-a607-4406dfec10b7] Running
	I0318 12:44:52.307650 1125718 system_pods.go:89] "kube-apiserver-ha-328109-m02" [fcd48f5d-2278-49f3-b4f0-0cad9ae74dc7] Running
	I0318 12:44:52.307655 1125718 system_pods.go:89] "kube-controller-manager-ha-328109" [ffef70fe-841f-41c7-a61b-bb205ce2c071] Running
	I0318 12:44:52.307659 1125718 system_pods.go:89] "kube-controller-manager-ha-328109-m02" [a5ecf731-7599-44e9-b20d-924bde2de123] Running
	I0318 12:44:52.307662 1125718 system_pods.go:89] "kube-proxy-7zgrx" [6244fa40-af4d-480b-9256-db89d78b1d74] Running
	I0318 12:44:52.307666 1125718 system_pods.go:89] "kube-proxy-dhz88" [afb0afad-2b88-4abb-9039-aaf9c64ad920] Running
	I0318 12:44:52.307673 1125718 system_pods.go:89] "kube-scheduler-ha-328109" [a32fb0b4-2621-47dd-bb05-abb2e4cf928e] Running
	I0318 12:44:52.307676 1125718 system_pods.go:89] "kube-scheduler-ha-328109-m02" [14246dc3-5f5f-4d43-954c-5959db738742] Running
	I0318 12:44:52.307682 1125718 system_pods.go:89] "kube-vip-ha-328109" [40c45da5-33e0-454b-8f4c-eca1d1ec3362] Running
	I0318 12:44:52.307685 1125718 system_pods.go:89] "kube-vip-ha-328109-m02" [0c0dc71f-79d7-48f0-8a4a-4480521e5705] Running
	I0318 12:44:52.307689 1125718 system_pods.go:89] "storage-provisioner" [90ce7ae6-4ac4-4c14-b2df-1a182f4d8086] Running
	I0318 12:44:52.307696 1125718 system_pods.go:126] duration metric: took 207.453689ms to wait for k8s-apps to be running ...
	I0318 12:44:52.307706 1125718 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 12:44:52.307754 1125718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 12:44:52.324436 1125718 system_svc.go:56] duration metric: took 16.716482ms WaitForService to wait for kubelet
	I0318 12:44:52.324477 1125718 kubeadm.go:576] duration metric: took 12.335337661s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
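
The kubelet check is a single command over minikube's SSH runner; `systemctl is-active --quiet <unit>` exits 0 only when the unit is active. A local sketch of the same probe using os/exec (hypothetical, not the SSH runner used above):

    // svc_sketch.go - illustrative systemd unit check.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Exit code 0 means the unit is active; any other exit code surfaces as an error.
        if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
            fmt.Println("kubelet not active:", err)
            return
        }
        fmt.Println("kubelet active")
    }
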
	I0318 12:44:52.324505 1125718 node_conditions.go:102] verifying NodePressure condition ...
	I0318 12:44:52.493932 1125718 request.go:629] Waited for 169.333092ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/nodes
	I0318 12:44:52.494026 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes
	I0318 12:44:52.494034 1125718 round_trippers.go:469] Request Headers:
	I0318 12:44:52.494043 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:44:52.494053 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:44:52.498708 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:44:52.499905 1125718 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 12:44:52.499939 1125718 node_conditions.go:123] node cpu capacity is 2
	I0318 12:44:52.499957 1125718 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 12:44:52.499963 1125718 node_conditions.go:123] node cpu capacity is 2
	I0318 12:44:52.499969 1125718 node_conditions.go:105] duration metric: took 175.457735ms to run NodePressure ...
	I0318 12:44:52.499989 1125718 start.go:240] waiting for startup goroutines ...
	I0318 12:44:52.500025 1125718 start.go:254] writing updated cluster config ...
	I0318 12:44:52.502267 1125718 out.go:177] 
	I0318 12:44:52.503771 1125718 config.go:182] Loaded profile config "ha-328109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:44:52.503869 1125718 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/config.json ...
	I0318 12:44:52.505439 1125718 out.go:177] * Starting "ha-328109-m03" control-plane node in "ha-328109" cluster
	I0318 12:44:52.506742 1125718 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 12:44:52.506765 1125718 cache.go:56] Caching tarball of preloaded images
	I0318 12:44:52.506870 1125718 preload.go:173] Found /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 12:44:52.506882 1125718 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0318 12:44:52.506968 1125718 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/config.json ...
	I0318 12:44:52.507127 1125718 start.go:360] acquireMachinesLock for ha-328109-m03: {Name:mk0b1a2e71faf079d0c16c4e1393bdff17be3dfd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 12:44:52.507168 1125718 start.go:364] duration metric: took 21.296µs to acquireMachinesLock for "ha-328109-m03"
	I0318 12:44:52.507184 1125718 start.go:93] Provisioning new machine with config: &{Name:ha-328109 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-328109 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.253 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 12:44:52.507271 1125718 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0318 12:44:52.508878 1125718 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 12:44:52.508973 1125718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:44:52.509008 1125718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:44:52.525328 1125718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40131
	I0318 12:44:52.525842 1125718 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:44:52.526480 1125718 main.go:141] libmachine: Using API Version  1
	I0318 12:44:52.526510 1125718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:44:52.526929 1125718 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:44:52.527151 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetMachineName
	I0318 12:44:52.527339 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .DriverName
	I0318 12:44:52.527518 1125718 start.go:159] libmachine.API.Create for "ha-328109" (driver="kvm2")
	I0318 12:44:52.527552 1125718 client.go:168] LocalClient.Create starting
	I0318 12:44:52.527592 1125718 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem
	I0318 12:44:52.527631 1125718 main.go:141] libmachine: Decoding PEM data...
	I0318 12:44:52.527653 1125718 main.go:141] libmachine: Parsing certificate...
	I0318 12:44:52.527725 1125718 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem
	I0318 12:44:52.527753 1125718 main.go:141] libmachine: Decoding PEM data...
	I0318 12:44:52.527772 1125718 main.go:141] libmachine: Parsing certificate...
	I0318 12:44:52.527802 1125718 main.go:141] libmachine: Running pre-create checks...
	I0318 12:44:52.527817 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .PreCreateCheck
	I0318 12:44:52.528040 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetConfigRaw
	I0318 12:44:52.528442 1125718 main.go:141] libmachine: Creating machine...
	I0318 12:44:52.528459 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .Create
	I0318 12:44:52.528614 1125718 main.go:141] libmachine: (ha-328109-m03) Creating KVM machine...
	I0318 12:44:52.529834 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | found existing default KVM network
	I0318 12:44:52.529984 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | found existing private KVM network mk-ha-328109
	I0318 12:44:52.530979 1125718 main.go:141] libmachine: (ha-328109-m03) Setting up store path in /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m03 ...
	I0318 12:44:52.531009 1125718 main.go:141] libmachine: (ha-328109-m03) Building disk image from file:///home/jenkins/minikube-integration/18429-1106816/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso
	I0318 12:44:52.531077 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | I0318 12:44:52.530943 1126406 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18429-1106816/.minikube
	I0318 12:44:52.531179 1125718 main.go:141] libmachine: (ha-328109-m03) Downloading /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18429-1106816/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0318 12:44:52.803112 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | I0318 12:44:52.802962 1126406 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m03/id_rsa...
	I0318 12:44:52.948668 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | I0318 12:44:52.948527 1126406 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m03/ha-328109-m03.rawdisk...
	I0318 12:44:52.948713 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | Writing magic tar header
	I0318 12:44:52.948731 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | Writing SSH key tar header
	I0318 12:44:52.948750 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | I0318 12:44:52.948711 1126406 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m03 ...
	I0318 12:44:52.948911 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m03
	I0318 12:44:52.948940 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines
	I0318 12:44:52.948951 1125718 main.go:141] libmachine: (ha-328109-m03) Setting executable bit set on /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m03 (perms=drwx------)
	I0318 12:44:52.948961 1125718 main.go:141] libmachine: (ha-328109-m03) Setting executable bit set on /home/jenkins/minikube-integration/18429-1106816/.minikube/machines (perms=drwxr-xr-x)
	I0318 12:44:52.948971 1125718 main.go:141] libmachine: (ha-328109-m03) Setting executable bit set on /home/jenkins/minikube-integration/18429-1106816/.minikube (perms=drwxr-xr-x)
	I0318 12:44:52.948999 1125718 main.go:141] libmachine: (ha-328109-m03) Setting executable bit set on /home/jenkins/minikube-integration/18429-1106816 (perms=drwxrwxr-x)
	I0318 12:44:52.949008 1125718 main.go:141] libmachine: (ha-328109-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0318 12:44:52.949020 1125718 main.go:141] libmachine: (ha-328109-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0318 12:44:52.949040 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18429-1106816/.minikube
	I0318 12:44:52.949052 1125718 main.go:141] libmachine: (ha-328109-m03) Creating domain...
	I0318 12:44:52.949071 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18429-1106816
	I0318 12:44:52.949083 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0318 12:44:52.949091 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | Checking permissions on dir: /home/jenkins
	I0318 12:44:52.949098 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | Checking permissions on dir: /home
	I0318 12:44:52.949106 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | Skipping /home - not owner
	I0318 12:44:52.950104 1125718 main.go:141] libmachine: (ha-328109-m03) define libvirt domain using xml: 
	I0318 12:44:52.950124 1125718 main.go:141] libmachine: (ha-328109-m03) <domain type='kvm'>
	I0318 12:44:52.950135 1125718 main.go:141] libmachine: (ha-328109-m03)   <name>ha-328109-m03</name>
	I0318 12:44:52.950143 1125718 main.go:141] libmachine: (ha-328109-m03)   <memory unit='MiB'>2200</memory>
	I0318 12:44:52.950164 1125718 main.go:141] libmachine: (ha-328109-m03)   <vcpu>2</vcpu>
	I0318 12:44:52.950179 1125718 main.go:141] libmachine: (ha-328109-m03)   <features>
	I0318 12:44:52.950185 1125718 main.go:141] libmachine: (ha-328109-m03)     <acpi/>
	I0318 12:44:52.950190 1125718 main.go:141] libmachine: (ha-328109-m03)     <apic/>
	I0318 12:44:52.950198 1125718 main.go:141] libmachine: (ha-328109-m03)     <pae/>
	I0318 12:44:52.950202 1125718 main.go:141] libmachine: (ha-328109-m03)     
	I0318 12:44:52.950210 1125718 main.go:141] libmachine: (ha-328109-m03)   </features>
	I0318 12:44:52.950216 1125718 main.go:141] libmachine: (ha-328109-m03)   <cpu mode='host-passthrough'>
	I0318 12:44:52.950223 1125718 main.go:141] libmachine: (ha-328109-m03)   
	I0318 12:44:52.950228 1125718 main.go:141] libmachine: (ha-328109-m03)   </cpu>
	I0318 12:44:52.950240 1125718 main.go:141] libmachine: (ha-328109-m03)   <os>
	I0318 12:44:52.950254 1125718 main.go:141] libmachine: (ha-328109-m03)     <type>hvm</type>
	I0318 12:44:52.950267 1125718 main.go:141] libmachine: (ha-328109-m03)     <boot dev='cdrom'/>
	I0318 12:44:52.950277 1125718 main.go:141] libmachine: (ha-328109-m03)     <boot dev='hd'/>
	I0318 12:44:52.950287 1125718 main.go:141] libmachine: (ha-328109-m03)     <bootmenu enable='no'/>
	I0318 12:44:52.950302 1125718 main.go:141] libmachine: (ha-328109-m03)   </os>
	I0318 12:44:52.950320 1125718 main.go:141] libmachine: (ha-328109-m03)   <devices>
	I0318 12:44:52.950335 1125718 main.go:141] libmachine: (ha-328109-m03)     <disk type='file' device='cdrom'>
	I0318 12:44:52.950361 1125718 main.go:141] libmachine: (ha-328109-m03)       <source file='/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m03/boot2docker.iso'/>
	I0318 12:44:52.950374 1125718 main.go:141] libmachine: (ha-328109-m03)       <target dev='hdc' bus='scsi'/>
	I0318 12:44:52.950385 1125718 main.go:141] libmachine: (ha-328109-m03)       <readonly/>
	I0318 12:44:52.950394 1125718 main.go:141] libmachine: (ha-328109-m03)     </disk>
	I0318 12:44:52.950404 1125718 main.go:141] libmachine: (ha-328109-m03)     <disk type='file' device='disk'>
	I0318 12:44:52.950421 1125718 main.go:141] libmachine: (ha-328109-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0318 12:44:52.950438 1125718 main.go:141] libmachine: (ha-328109-m03)       <source file='/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m03/ha-328109-m03.rawdisk'/>
	I0318 12:44:52.950450 1125718 main.go:141] libmachine: (ha-328109-m03)       <target dev='hda' bus='virtio'/>
	I0318 12:44:52.950459 1125718 main.go:141] libmachine: (ha-328109-m03)     </disk>
	I0318 12:44:52.950470 1125718 main.go:141] libmachine: (ha-328109-m03)     <interface type='network'>
	I0318 12:44:52.950480 1125718 main.go:141] libmachine: (ha-328109-m03)       <source network='mk-ha-328109'/>
	I0318 12:44:52.950490 1125718 main.go:141] libmachine: (ha-328109-m03)       <model type='virtio'/>
	I0318 12:44:52.950499 1125718 main.go:141] libmachine: (ha-328109-m03)     </interface>
	I0318 12:44:52.950511 1125718 main.go:141] libmachine: (ha-328109-m03)     <interface type='network'>
	I0318 12:44:52.950522 1125718 main.go:141] libmachine: (ha-328109-m03)       <source network='default'/>
	I0318 12:44:52.950537 1125718 main.go:141] libmachine: (ha-328109-m03)       <model type='virtio'/>
	I0318 12:44:52.950549 1125718 main.go:141] libmachine: (ha-328109-m03)     </interface>
	I0318 12:44:52.950560 1125718 main.go:141] libmachine: (ha-328109-m03)     <serial type='pty'>
	I0318 12:44:52.950571 1125718 main.go:141] libmachine: (ha-328109-m03)       <target port='0'/>
	I0318 12:44:52.950581 1125718 main.go:141] libmachine: (ha-328109-m03)     </serial>
	I0318 12:44:52.950593 1125718 main.go:141] libmachine: (ha-328109-m03)     <console type='pty'>
	I0318 12:44:52.950604 1125718 main.go:141] libmachine: (ha-328109-m03)       <target type='serial' port='0'/>
	I0318 12:44:52.950614 1125718 main.go:141] libmachine: (ha-328109-m03)     </console>
	I0318 12:44:52.950627 1125718 main.go:141] libmachine: (ha-328109-m03)     <rng model='virtio'>
	I0318 12:44:52.950640 1125718 main.go:141] libmachine: (ha-328109-m03)       <backend model='random'>/dev/random</backend>
	I0318 12:44:52.950649 1125718 main.go:141] libmachine: (ha-328109-m03)     </rng>
	I0318 12:44:52.950659 1125718 main.go:141] libmachine: (ha-328109-m03)     
	I0318 12:44:52.950667 1125718 main.go:141] libmachine: (ha-328109-m03)     
	I0318 12:44:52.950677 1125718 main.go:141] libmachine: (ha-328109-m03)   </devices>
	I0318 12:44:52.950688 1125718 main.go:141] libmachine: (ha-328109-m03) </domain>
	I0318 12:44:52.950700 1125718 main.go:141] libmachine: (ha-328109-m03) 
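
The `<domain>` XML dumped above is what the kvm2 driver hands to libvirt to define and boot the VM. A rough sketch of that step with the Go libvirt bindings follows; libvirt.org/go/libvirt is an assumption here (the driver's real implementation differs), and the XML filename is hypothetical:

    // define_domain_sketch.go - illustrative use of the Go libvirt bindings.
    package main

    import (
        "fmt"
        "os"

        "libvirt.org/go/libvirt"
    )

    func main() {
        // Hypothetical file holding the <domain> XML shown in the log above.
        xml, err := os.ReadFile("ha-328109-m03.xml")
        if err != nil {
            panic(err)
        }
        conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        dom, err := conn.DomainDefineXML(string(xml)) // persistently define the domain
        if err != nil {
            panic(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil { // boot it (equivalent of `virsh start`)
            panic(err)
        }
        name, _ := dom.GetName()
        fmt.Println("started domain", name)
    }
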
	I0318 12:44:52.957471 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:93:6f:6c in network default
	I0318 12:44:52.958076 1125718 main.go:141] libmachine: (ha-328109-m03) Ensuring networks are active...
	I0318 12:44:52.958099 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:44:52.958820 1125718 main.go:141] libmachine: (ha-328109-m03) Ensuring network default is active
	I0318 12:44:52.959163 1125718 main.go:141] libmachine: (ha-328109-m03) Ensuring network mk-ha-328109 is active
	I0318 12:44:52.959518 1125718 main.go:141] libmachine: (ha-328109-m03) Getting domain xml...
	I0318 12:44:52.960231 1125718 main.go:141] libmachine: (ha-328109-m03) Creating domain...
	I0318 12:44:54.207064 1125718 main.go:141] libmachine: (ha-328109-m03) Waiting to get IP...
	I0318 12:44:54.208551 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:44:54.209498 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | unable to find current IP address of domain ha-328109-m03 in network mk-ha-328109
	I0318 12:44:54.209537 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | I0318 12:44:54.209471 1126406 retry.go:31] will retry after 246.112418ms: waiting for machine to come up
	I0318 12:44:54.457148 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:44:54.457868 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | unable to find current IP address of domain ha-328109-m03 in network mk-ha-328109
	I0318 12:44:54.457935 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | I0318 12:44:54.457750 1126406 retry.go:31] will retry after 279.428831ms: waiting for machine to come up
	I0318 12:44:54.739458 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:44:54.739925 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | unable to find current IP address of domain ha-328109-m03 in network mk-ha-328109
	I0318 12:44:54.739957 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | I0318 12:44:54.739895 1126406 retry.go:31] will retry after 436.062724ms: waiting for machine to come up
	I0318 12:44:55.177575 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:44:55.178132 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | unable to find current IP address of domain ha-328109-m03 in network mk-ha-328109
	I0318 12:44:55.178163 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | I0318 12:44:55.178078 1126406 retry.go:31] will retry after 490.275413ms: waiting for machine to come up
	I0318 12:44:55.669861 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:44:55.670424 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | unable to find current IP address of domain ha-328109-m03 in network mk-ha-328109
	I0318 12:44:55.670460 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | I0318 12:44:55.670369 1126406 retry.go:31] will retry after 633.010114ms: waiting for machine to come up
	I0318 12:44:56.304966 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:44:56.305467 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | unable to find current IP address of domain ha-328109-m03 in network mk-ha-328109
	I0318 12:44:56.305492 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | I0318 12:44:56.305431 1126406 retry.go:31] will retry after 889.156096ms: waiting for machine to come up
	I0318 12:44:57.196816 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:44:57.197381 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | unable to find current IP address of domain ha-328109-m03 in network mk-ha-328109
	I0318 12:44:57.197415 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | I0318 12:44:57.197350 1126406 retry.go:31] will retry after 1.013553214s: waiting for machine to come up
	I0318 12:44:58.212914 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:44:58.213383 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | unable to find current IP address of domain ha-328109-m03 in network mk-ha-328109
	I0318 12:44:58.213413 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | I0318 12:44:58.213336 1126406 retry.go:31] will retry after 1.302275369s: waiting for machine to come up
	I0318 12:44:59.517671 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:44:59.518056 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | unable to find current IP address of domain ha-328109-m03 in network mk-ha-328109
	I0318 12:44:59.518089 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | I0318 12:44:59.518002 1126406 retry.go:31] will retry after 1.691239088s: waiting for machine to come up
	I0318 12:45:01.211342 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:01.211830 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | unable to find current IP address of domain ha-328109-m03 in network mk-ha-328109
	I0318 12:45:01.211855 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | I0318 12:45:01.211795 1126406 retry.go:31] will retry after 1.472197751s: waiting for machine to come up
	I0318 12:45:02.686158 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:02.686681 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | unable to find current IP address of domain ha-328109-m03 in network mk-ha-328109
	I0318 12:45:02.686712 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | I0318 12:45:02.686653 1126406 retry.go:31] will retry after 2.792712555s: waiting for machine to come up
	I0318 12:45:05.481952 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:05.482411 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | unable to find current IP address of domain ha-328109-m03 in network mk-ha-328109
	I0318 12:45:05.482466 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | I0318 12:45:05.482381 1126406 retry.go:31] will retry after 3.275189677s: waiting for machine to come up
	I0318 12:45:08.758986 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:08.759372 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | unable to find current IP address of domain ha-328109-m03 in network mk-ha-328109
	I0318 12:45:08.759404 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | I0318 12:45:08.759316 1126406 retry.go:31] will retry after 4.535450098s: waiting for machine to come up
	I0318 12:45:13.296855 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:13.297384 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | unable to find current IP address of domain ha-328109-m03 in network mk-ha-328109
	I0318 12:45:13.297410 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | I0318 12:45:13.297328 1126406 retry.go:31] will retry after 3.801826868s: waiting for machine to come up
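
The retry.go:31 lines above show an exponentially growing, jittered backoff while the driver waits for the new VM to obtain a DHCP lease. A self-contained sketch of that pattern (not minikube's pkg/util/retry; the intervals and deadline below are made up):

    // retry_sketch.go - illustrative backoff loop similar to the retry.go lines above.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryExpo keeps calling fn until it succeeds, sleeping an exponentially
    // growing, jittered interval between attempts, up to a total deadline.
    func retryExpo(fn func() error, initial, deadline time.Duration) error {
        start := time.Now()
        interval := initial
        for {
            err := fn()
            if err == nil {
                return nil
            }
            if time.Since(start) > deadline {
                return fmt.Errorf("timed out after %s: %w", deadline, err)
            }
            sleep := interval + time.Duration(rand.Int63n(int64(interval))) // add jitter
            fmt.Printf("will retry after %s: %v\n", sleep, err)
            time.Sleep(sleep)
            interval *= 2
        }
    }

    func main() {
        attempts := 0
        err := retryExpo(func() error {
            attempts++
            if attempts < 5 {
                return errors.New("waiting for machine to come up")
            }
            return nil
        }, 200*time.Millisecond, 30*time.Second)
        fmt.Println("done:", err, "after", attempts, "attempts")
    }
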
	I0318 12:45:17.101660 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:17.102181 1125718 main.go:141] libmachine: (ha-328109-m03) Found IP for machine: 192.168.39.241
	I0318 12:45:17.102212 1125718 main.go:141] libmachine: (ha-328109-m03) Reserving static IP address...
	I0318 12:45:17.102227 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has current primary IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:17.102652 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | unable to find host DHCP lease matching {name: "ha-328109-m03", mac: "52:54:00:13:6e:ac", ip: "192.168.39.241"} in network mk-ha-328109
	I0318 12:45:17.177177 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | Getting to WaitForSSH function...
	I0318 12:45:17.177210 1125718 main.go:141] libmachine: (ha-328109-m03) Reserved static IP address: 192.168.39.241
	I0318 12:45:17.177225 1125718 main.go:141] libmachine: (ha-328109-m03) Waiting for SSH to be available...
	I0318 12:45:17.180030 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:17.180526 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:minikube Clientid:01:52:54:00:13:6e:ac}
	I0318 12:45:17.180567 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:17.180681 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | Using SSH client type: external
	I0318 12:45:17.180719 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m03/id_rsa (-rw-------)
	I0318 12:45:17.180767 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.241 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 12:45:17.180791 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | About to run SSH command:
	I0318 12:45:17.180840 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | exit 0
	I0318 12:45:17.308873 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | SSH cmd err, output: <nil>: 
	I0318 12:45:17.309166 1125718 main.go:141] libmachine: (ha-328109-m03) KVM machine creation complete!
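
"Waiting for SSH" is resolved by repeatedly running `ssh ... exit 0` against the new machine until it succeeds, as the external-SSH debug lines show. A small sketch of that probe using os/exec, with the IP and key path taken from the log (the retry count and sleep are arbitrary):

    // waitforssh_sketch.go - illustrative SSH availability probe.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func sshReady(ip, keyPath string) bool {
        // The machine counts as reachable once a trivial remote command exits 0.
        cmd := exec.Command("ssh",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "ConnectTimeout=10",
            "-i", keyPath,
            "docker@"+ip, "exit 0")
        return cmd.Run() == nil
    }

    func main() {
        key := "/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m03/id_rsa"
        for i := 0; i < 30; i++ {
            if sshReady("192.168.39.241", key) {
                fmt.Println("ssh is available")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("gave up waiting for ssh")
    }
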
	I0318 12:45:17.309498 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetConfigRaw
	I0318 12:45:17.310106 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .DriverName
	I0318 12:45:17.310336 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .DriverName
	I0318 12:45:17.310540 1125718 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0318 12:45:17.310554 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetState
	I0318 12:45:17.311931 1125718 main.go:141] libmachine: Detecting operating system of created instance...
	I0318 12:45:17.311946 1125718 main.go:141] libmachine: Waiting for SSH to be available...
	I0318 12:45:17.311951 1125718 main.go:141] libmachine: Getting to WaitForSSH function...
	I0318 12:45:17.311957 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHHostname
	I0318 12:45:17.314381 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:17.314805 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-328109-m03 Clientid:01:52:54:00:13:6e:ac}
	I0318 12:45:17.314845 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:17.315020 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHPort
	I0318 12:45:17.315191 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHKeyPath
	I0318 12:45:17.315352 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHKeyPath
	I0318 12:45:17.315515 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHUsername
	I0318 12:45:17.315726 1125718 main.go:141] libmachine: Using SSH client type: native
	I0318 12:45:17.315998 1125718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0318 12:45:17.316011 1125718 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0318 12:45:17.423835 1125718 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 12:45:17.423881 1125718 main.go:141] libmachine: Detecting the provisioner...
	I0318 12:45:17.423892 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHHostname
	I0318 12:45:17.427406 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:17.427915 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-328109-m03 Clientid:01:52:54:00:13:6e:ac}
	I0318 12:45:17.427949 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:17.428139 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHPort
	I0318 12:45:17.428410 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHKeyPath
	I0318 12:45:17.428605 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHKeyPath
	I0318 12:45:17.428778 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHUsername
	I0318 12:45:17.429011 1125718 main.go:141] libmachine: Using SSH client type: native
	I0318 12:45:17.429256 1125718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0318 12:45:17.429276 1125718 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0318 12:45:17.541758 1125718 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0318 12:45:17.541850 1125718 main.go:141] libmachine: found compatible host: buildroot
	I0318 12:45:17.541864 1125718 main.go:141] libmachine: Provisioning with buildroot...
	I0318 12:45:17.541875 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetMachineName
	I0318 12:45:17.542163 1125718 buildroot.go:166] provisioning hostname "ha-328109-m03"
	I0318 12:45:17.542193 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetMachineName
	I0318 12:45:17.542411 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHHostname
	I0318 12:45:17.545194 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:17.545676 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-328109-m03 Clientid:01:52:54:00:13:6e:ac}
	I0318 12:45:17.545702 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:17.545843 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHPort
	I0318 12:45:17.546009 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHKeyPath
	I0318 12:45:17.546212 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHKeyPath
	I0318 12:45:17.546398 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHUsername
	I0318 12:45:17.546645 1125718 main.go:141] libmachine: Using SSH client type: native
	I0318 12:45:17.546862 1125718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0318 12:45:17.546880 1125718 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-328109-m03 && echo "ha-328109-m03" | sudo tee /etc/hostname
	I0318 12:45:17.672890 1125718 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-328109-m03
	
	I0318 12:45:17.672925 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHHostname
	I0318 12:45:17.675623 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:17.676056 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-328109-m03 Clientid:01:52:54:00:13:6e:ac}
	I0318 12:45:17.676081 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:17.676336 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHPort
	I0318 12:45:17.676540 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHKeyPath
	I0318 12:45:17.676738 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHKeyPath
	I0318 12:45:17.676879 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHUsername
	I0318 12:45:17.677040 1125718 main.go:141] libmachine: Using SSH client type: native
	I0318 12:45:17.677242 1125718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0318 12:45:17.677260 1125718 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-328109-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-328109-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-328109-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 12:45:17.801256 1125718 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 12:45:17.801294 1125718 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18429-1106816/.minikube CaCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18429-1106816/.minikube}
	I0318 12:45:17.801317 1125718 buildroot.go:174] setting up certificates
	I0318 12:45:17.801332 1125718 provision.go:84] configureAuth start
	I0318 12:45:17.801344 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetMachineName
	I0318 12:45:17.801667 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetIP
	I0318 12:45:17.804353 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:17.804704 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-328109-m03 Clientid:01:52:54:00:13:6e:ac}
	I0318 12:45:17.804738 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:17.804921 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHHostname
	I0318 12:45:17.807223 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:17.807552 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-328109-m03 Clientid:01:52:54:00:13:6e:ac}
	I0318 12:45:17.807582 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:17.807692 1125718 provision.go:143] copyHostCerts
	I0318 12:45:17.807730 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 12:45:17.807775 1125718 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem, removing ...
	I0318 12:45:17.807799 1125718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 12:45:17.807894 1125718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem (1078 bytes)
	I0318 12:45:17.808000 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 12:45:17.808026 1125718 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem, removing ...
	I0318 12:45:17.808033 1125718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 12:45:17.808077 1125718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem (1123 bytes)
	I0318 12:45:17.808158 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 12:45:17.808182 1125718 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem, removing ...
	I0318 12:45:17.808188 1125718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 12:45:17.808225 1125718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem (1679 bytes)
	I0318 12:45:17.808313 1125718 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem org=jenkins.ha-328109-m03 san=[127.0.0.1 192.168.39.241 ha-328109-m03 localhost minikube]
	I0318 12:45:17.968101 1125718 provision.go:177] copyRemoteCerts
	I0318 12:45:17.968179 1125718 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 12:45:17.968215 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHHostname
	I0318 12:45:17.970992 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:17.971328 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-328109-m03 Clientid:01:52:54:00:13:6e:ac}
	I0318 12:45:17.971365 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:17.971544 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHPort
	I0318 12:45:17.971748 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHKeyPath
	I0318 12:45:17.971875 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHUsername
	I0318 12:45:17.972027 1125718 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m03/id_rsa Username:docker}
	I0318 12:45:18.059601 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0318 12:45:18.059684 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 12:45:18.090751 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0318 12:45:18.090826 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0318 12:45:18.118403 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0318 12:45:18.118481 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 12:45:18.147188 1125718 provision.go:87] duration metric: took 345.837123ms to configureAuth
	I0318 12:45:18.147232 1125718 buildroot.go:189] setting minikube options for container-runtime
	I0318 12:45:18.147476 1125718 config.go:182] Loaded profile config "ha-328109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:45:18.147562 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHHostname
	I0318 12:45:18.150390 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:18.150771 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-328109-m03 Clientid:01:52:54:00:13:6e:ac}
	I0318 12:45:18.150810 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:18.150989 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHPort
	I0318 12:45:18.151216 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHKeyPath
	I0318 12:45:18.151402 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHKeyPath
	I0318 12:45:18.151589 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHUsername
	I0318 12:45:18.151753 1125718 main.go:141] libmachine: Using SSH client type: native
	I0318 12:45:18.151946 1125718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0318 12:45:18.151961 1125718 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 12:45:18.457910 1125718 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
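The SSH command above writes a one-line environment file for CRI-O and restarts the daemon so the service CIDR is treated as an insecure registry range. A rough, illustrative reproduction of that remote step (values taken from the log; minikube's exact quoting may differ):

    # Write the CRI-O drop-in created above, then restart CRI-O to pick it up.
    sudo mkdir -p /etc/sysconfig
    printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" \
      | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio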
	I0318 12:45:18.457945 1125718 main.go:141] libmachine: Checking connection to Docker...
	I0318 12:45:18.457956 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetURL
	I0318 12:45:18.459537 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | Using libvirt version 6000000
	I0318 12:45:18.462170 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:18.462545 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-328109-m03 Clientid:01:52:54:00:13:6e:ac}
	I0318 12:45:18.462574 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:18.462861 1125718 main.go:141] libmachine: Docker is up and running!
	I0318 12:45:18.462877 1125718 main.go:141] libmachine: Reticulating splines...
	I0318 12:45:18.462884 1125718 client.go:171] duration metric: took 25.935321178s to LocalClient.Create
	I0318 12:45:18.462909 1125718 start.go:167] duration metric: took 25.935392452s to libmachine.API.Create "ha-328109"
	I0318 12:45:18.462919 1125718 start.go:293] postStartSetup for "ha-328109-m03" (driver="kvm2")
	I0318 12:45:18.462930 1125718 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 12:45:18.462947 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .DriverName
	I0318 12:45:18.463202 1125718 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 12:45:18.463233 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHHostname
	I0318 12:45:18.465465 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:18.465803 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-328109-m03 Clientid:01:52:54:00:13:6e:ac}
	I0318 12:45:18.465829 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:18.465977 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHPort
	I0318 12:45:18.466171 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHKeyPath
	I0318 12:45:18.466322 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHUsername
	I0318 12:45:18.466492 1125718 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m03/id_rsa Username:docker}
	I0318 12:45:18.552562 1125718 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 12:45:18.557953 1125718 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 12:45:18.557984 1125718 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/addons for local assets ...
	I0318 12:45:18.558062 1125718 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/files for local assets ...
	I0318 12:45:18.558151 1125718 filesync.go:149] local asset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> 11141362.pem in /etc/ssl/certs
	I0318 12:45:18.558163 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> /etc/ssl/certs/11141362.pem
	I0318 12:45:18.558279 1125718 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 12:45:18.568566 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 12:45:18.599564 1125718 start.go:296] duration metric: took 136.628629ms for postStartSetup
	I0318 12:45:18.599636 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetConfigRaw
	I0318 12:45:18.600236 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetIP
	I0318 12:45:18.603196 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:18.603548 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-328109-m03 Clientid:01:52:54:00:13:6e:ac}
	I0318 12:45:18.603593 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:18.603857 1125718 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/config.json ...
	I0318 12:45:18.604103 1125718 start.go:128] duration metric: took 26.096819646s to createHost
	I0318 12:45:18.604129 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHHostname
	I0318 12:45:18.606491 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:18.606891 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-328109-m03 Clientid:01:52:54:00:13:6e:ac}
	I0318 12:45:18.606919 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:18.607116 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHPort
	I0318 12:45:18.607296 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHKeyPath
	I0318 12:45:18.607508 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHKeyPath
	I0318 12:45:18.607684 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHUsername
	I0318 12:45:18.607898 1125718 main.go:141] libmachine: Using SSH client type: native
	I0318 12:45:18.608081 1125718 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.241 22 <nil> <nil>}
	I0318 12:45:18.608095 1125718 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 12:45:18.722293 1125718 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710765918.693436013
	
	I0318 12:45:18.722318 1125718 fix.go:216] guest clock: 1710765918.693436013
	I0318 12:45:18.722326 1125718 fix.go:229] Guest: 2024-03-18 12:45:18.693436013 +0000 UTC Remote: 2024-03-18 12:45:18.604118512 +0000 UTC m=+165.760798563 (delta=89.317501ms)
	I0318 12:45:18.722343 1125718 fix.go:200] guest clock delta is within tolerance: 89.317501ms
	I0318 12:45:18.722348 1125718 start.go:83] releasing machines lock for "ha-328109-m03", held for 26.21517349s
	I0318 12:45:18.722373 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .DriverName
	I0318 12:45:18.722708 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetIP
	I0318 12:45:18.725969 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:18.726353 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-328109-m03 Clientid:01:52:54:00:13:6e:ac}
	I0318 12:45:18.726379 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:18.728674 1125718 out.go:177] * Found network options:
	I0318 12:45:18.730130 1125718 out.go:177]   - NO_PROXY=192.168.39.253,192.168.39.246
	W0318 12:45:18.731523 1125718 proxy.go:119] fail to check proxy env: Error ip not in block
	W0318 12:45:18.731550 1125718 proxy.go:119] fail to check proxy env: Error ip not in block
	I0318 12:45:18.731569 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .DriverName
	I0318 12:45:18.732113 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .DriverName
	I0318 12:45:18.732354 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .DriverName
	I0318 12:45:18.732468 1125718 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 12:45:18.732501 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHHostname
	W0318 12:45:18.732540 1125718 proxy.go:119] fail to check proxy env: Error ip not in block
	W0318 12:45:18.732564 1125718 proxy.go:119] fail to check proxy env: Error ip not in block
	I0318 12:45:18.732633 1125718 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 12:45:18.732655 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHHostname
	I0318 12:45:18.735374 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:18.735399 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:18.735796 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-328109-m03 Clientid:01:52:54:00:13:6e:ac}
	I0318 12:45:18.735826 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-328109-m03 Clientid:01:52:54:00:13:6e:ac}
	I0318 12:45:18.735847 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:18.735912 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:18.736013 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHPort
	I0318 12:45:18.736169 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHPort
	I0318 12:45:18.736245 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHKeyPath
	I0318 12:45:18.736374 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHKeyPath
	I0318 12:45:18.736394 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHUsername
	I0318 12:45:18.736548 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHUsername
	I0318 12:45:18.736564 1125718 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m03/id_rsa Username:docker}
	I0318 12:45:18.736656 1125718 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m03/id_rsa Username:docker}
	I0318 12:45:18.990045 1125718 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 12:45:18.997609 1125718 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 12:45:18.997696 1125718 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 12:45:19.016284 1125718 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 12:45:19.016317 1125718 start.go:494] detecting cgroup driver to use...
	I0318 12:45:19.016414 1125718 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 12:45:19.036959 1125718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 12:45:19.052702 1125718 docker.go:217] disabling cri-docker service (if available) ...
	I0318 12:45:19.052763 1125718 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 12:45:19.068812 1125718 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 12:45:19.083885 1125718 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 12:45:19.219762 1125718 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 12:45:19.375140 1125718 docker.go:233] disabling docker service ...
	I0318 12:45:19.375218 1125718 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 12:45:19.391700 1125718 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 12:45:19.408089 1125718 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 12:45:19.568781 1125718 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 12:45:19.698388 1125718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 12:45:19.715205 1125718 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 12:45:19.737848 1125718 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 12:45:19.737915 1125718 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 12:45:19.751205 1125718 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 12:45:19.751291 1125718 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 12:45:19.764038 1125718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 12:45:19.776823 1125718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 12:45:19.789620 1125718 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
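The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place: the pause image is pinned to registry.k8s.io/pause:3.9, the cgroup manager is switched to cgroupfs, and conmon is moved into the pod cgroup. A quick, illustrative way to confirm those edits before the later `systemctl restart crio` (not a step in the logged run):

    # Show the three settings the sed edits above are expected to leave behind.
    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf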
	I0318 12:45:19.802402 1125718 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 12:45:19.814327 1125718 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 12:45:19.814391 1125718 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 12:45:19.830755 1125718 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 12:45:19.842732 1125718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:45:19.990158 1125718 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 12:45:20.152548 1125718 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 12:45:20.152643 1125718 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 12:45:20.158364 1125718 start.go:562] Will wait 60s for crictl version
	I0318 12:45:20.158447 1125718 ssh_runner.go:195] Run: which crictl
	I0318 12:45:20.163229 1125718 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 12:45:20.206997 1125718 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 12:45:20.207092 1125718 ssh_runner.go:195] Run: crio --version
	I0318 12:45:20.237899 1125718 ssh_runner.go:195] Run: crio --version
	I0318 12:45:20.272643 1125718 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 12:45:20.273996 1125718 out.go:177]   - env NO_PROXY=192.168.39.253
	I0318 12:45:20.275201 1125718 out.go:177]   - env NO_PROXY=192.168.39.253,192.168.39.246
	I0318 12:45:20.276497 1125718 main.go:141] libmachine: (ha-328109-m03) Calling .GetIP
	I0318 12:45:20.279284 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:20.279647 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-328109-m03 Clientid:01:52:54:00:13:6e:ac}
	I0318 12:45:20.279682 1125718 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:45:20.279940 1125718 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0318 12:45:20.285192 1125718 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
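The one-liner above refreshes the host.minikube.internal entry idempotently: it filters any existing line for that hostname out of /etc/hosts, appends the current gateway IP, and copies the result back with sudo. The same pattern in a generic, illustrative form (the helper name here is made up for readability):

    # update_hosts_entry is a hypothetical helper mirroring the logged one-liner:
    # drop any existing entry for the hostname, append the new one, copy back via sudo.
    update_hosts_entry() {            # $1 = IP, $2 = hostname
      { grep -v $'\t'"$2"'$' /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > "/tmp/h.$$"
      sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"
    }
    update_hosts_entry 192.168.39.1 host.minikube.internal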
	I0318 12:45:20.300521 1125718 mustload.go:65] Loading cluster: ha-328109
	I0318 12:45:20.300759 1125718 config.go:182] Loaded profile config "ha-328109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:45:20.301216 1125718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:45:20.301264 1125718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:45:20.317712 1125718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39767
	I0318 12:45:20.318296 1125718 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:45:20.318799 1125718 main.go:141] libmachine: Using API Version  1
	I0318 12:45:20.318825 1125718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:45:20.319152 1125718 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:45:20.319346 1125718 main.go:141] libmachine: (ha-328109) Calling .GetState
	I0318 12:45:20.320937 1125718 host.go:66] Checking if "ha-328109" exists ...
	I0318 12:45:20.321271 1125718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:45:20.321318 1125718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:45:20.335906 1125718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34807
	I0318 12:45:20.336389 1125718 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:45:20.336872 1125718 main.go:141] libmachine: Using API Version  1
	I0318 12:45:20.336893 1125718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:45:20.337221 1125718 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:45:20.337425 1125718 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:45:20.337587 1125718 certs.go:68] Setting up /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109 for IP: 192.168.39.241
	I0318 12:45:20.337600 1125718 certs.go:194] generating shared ca certs ...
	I0318 12:45:20.337616 1125718 certs.go:226] acquiring lock for ca certs: {Name:mka24dbce78b8549c3bcfe7842863cce2dcda207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:45:20.337745 1125718 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key
	I0318 12:45:20.337792 1125718 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key
	I0318 12:45:20.337801 1125718 certs.go:256] generating profile certs ...
	I0318 12:45:20.337915 1125718 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/client.key
	I0318 12:45:20.337951 1125718 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key.1e9447cf
	I0318 12:45:20.337968 1125718 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt.1e9447cf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.253 192.168.39.246 192.168.39.241 192.168.39.254]
	I0318 12:45:20.529819 1125718 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt.1e9447cf ...
	I0318 12:45:20.529854 1125718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt.1e9447cf: {Name:mk0c3c37f6163a623e76fa06f4a7e365e62d341b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:45:20.530058 1125718 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key.1e9447cf ...
	I0318 12:45:20.530078 1125718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key.1e9447cf: {Name:mk6476b5a8deedc75938b726c0d94d4f542498da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:45:20.530178 1125718 certs.go:381] copying /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt.1e9447cf -> /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt
	I0318 12:45:20.530328 1125718 certs.go:385] copying /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key.1e9447cf -> /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key
	I0318 12:45:20.530512 1125718 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/proxy-client.key
	I0318 12:45:20.530533 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 12:45:20.530555 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0318 12:45:20.530573 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 12:45:20.530590 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 12:45:20.530607 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0318 12:45:20.530622 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0318 12:45:20.530639 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0318 12:45:20.530656 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0318 12:45:20.530720 1125718 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem (1338 bytes)
	W0318 12:45:20.530760 1125718 certs.go:480] ignoring /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136_empty.pem, impossibly tiny 0 bytes
	I0318 12:45:20.530774 1125718 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 12:45:20.530809 1125718 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem (1078 bytes)
	I0318 12:45:20.530838 1125718 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem (1123 bytes)
	I0318 12:45:20.530866 1125718 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem (1679 bytes)
	I0318 12:45:20.530919 1125718 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 12:45:20.530954 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem -> /usr/share/ca-certificates/1114136.pem
	I0318 12:45:20.530976 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> /usr/share/ca-certificates/11141362.pem
	I0318 12:45:20.530994 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:45:20.531037 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:45:20.534286 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:45:20.534750 1125718 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:45:20.534777 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:45:20.534944 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:45:20.535168 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:45:20.535341 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:45:20.535486 1125718 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109/id_rsa Username:docker}
	I0318 12:45:20.612719 1125718 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0318 12:45:20.619208 1125718 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0318 12:45:20.635267 1125718 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0318 12:45:20.640381 1125718 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0318 12:45:20.655052 1125718 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0318 12:45:20.659704 1125718 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0318 12:45:20.672182 1125718 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0318 12:45:20.676767 1125718 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0318 12:45:20.689124 1125718 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0318 12:45:20.693592 1125718 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0318 12:45:20.705537 1125718 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0318 12:45:20.710335 1125718 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0318 12:45:20.723667 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 12:45:20.752170 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 12:45:20.779440 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 12:45:20.807077 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 12:45:20.835454 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0318 12:45:20.865041 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 12:45:20.894868 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 12:45:20.921846 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 12:45:20.949792 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem --> /usr/share/ca-certificates/1114136.pem (1338 bytes)
	I0318 12:45:20.976855 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /usr/share/ca-certificates/11141362.pem (1708 bytes)
	I0318 12:45:21.004675 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 12:45:21.031367 1125718 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0318 12:45:21.050437 1125718 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0318 12:45:21.069848 1125718 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0318 12:45:21.089292 1125718 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0318 12:45:21.108785 1125718 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0318 12:45:21.129862 1125718 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0318 12:45:21.150726 1125718 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0318 12:45:21.171268 1125718 ssh_runner.go:195] Run: openssl version
	I0318 12:45:21.177884 1125718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11141362.pem && ln -fs /usr/share/ca-certificates/11141362.pem /etc/ssl/certs/11141362.pem"
	I0318 12:45:21.190013 1125718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11141362.pem
	I0318 12:45:21.195104 1125718 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:26 /usr/share/ca-certificates/11141362.pem
	I0318 12:45:21.195164 1125718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11141362.pem
	I0318 12:45:21.201374 1125718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11141362.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 12:45:21.214520 1125718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 12:45:21.227156 1125718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:45:21.232263 1125718 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:45:21.232344 1125718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:45:21.238733 1125718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 12:45:21.253325 1125718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1114136.pem && ln -fs /usr/share/ca-certificates/1114136.pem /etc/ssl/certs/1114136.pem"
	I0318 12:45:21.266067 1125718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1114136.pem
	I0318 12:45:21.270989 1125718 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:26 /usr/share/ca-certificates/1114136.pem
	I0318 12:45:21.271054 1125718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1114136.pem
	I0318 12:45:21.277455 1125718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1114136.pem /etc/ssl/certs/51391683.0"
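The openssl/ln sequence above follows the standard OpenSSL hashed-directory layout: each CA copied into /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash, which is where names like b5213941.0 and 3ec20f2e.0 come from. An illustrative one-liner for a single CA:

    # Create the subject-hash symlink for one CA, as the logged commands do per certificate.
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"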
	I0318 12:45:21.290385 1125718 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 12:45:21.295157 1125718 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 12:45:21.295210 1125718 kubeadm.go:928] updating node {m03 192.168.39.241 8443 v1.28.4 crio true true} ...
	I0318 12:45:21.295303 1125718 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-328109-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.241
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-328109 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 12:45:21.295347 1125718 kube-vip.go:111] generating kube-vip config ...
	I0318 12:45:21.295406 1125718 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0318 12:45:21.314331 1125718 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0318 12:45:21.314409 1125718 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
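The static-pod manifest above runs kube-vip on each control-plane node: it does leader election on the plndr-cp-lock lease, announces the virtual IP 192.168.39.254 over ARP on eth0, and load-balances the API server on port 8443. Two illustrative sanity checks on a node once the pod is up (not taken from the log):

    # The current kube-vip leader should hold the VIP on eth0, and the API port should be listening.
    ip -4 addr show dev eth0 | grep 192.168.39.254
    ss -ltn | grep ':8443'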
	I0318 12:45:21.314468 1125718 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 12:45:21.326579 1125718 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0318 12:45:21.326640 1125718 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0318 12:45:21.338387 1125718 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0318 12:45:21.338419 1125718 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256
	I0318 12:45:21.338431 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0318 12:45:21.338443 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0318 12:45:21.338387 1125718 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256
	I0318 12:45:21.338515 1125718 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0318 12:45:21.338525 1125718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 12:45:21.338517 1125718 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0318 12:45:21.349806 1125718 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0318 12:45:21.349837 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0318 12:45:21.366555 1125718 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0318 12:45:21.366598 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0318 12:45:21.374524 1125718 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0318 12:45:21.374679 1125718 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0318 12:45:21.444178 1125718 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0318 12:45:21.444229 1125718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
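Because the guest has no cached Kubernetes binaries, kubectl, kubeadm, and kubelet are copied over from the host cache, which is itself populated from dl.k8s.io against the published .sha256 checksums. A hedged sketch of the equivalent manual download with verification:

    # Download and checksum-verify one of the v1.28.4 binaries the same way the cache is filled.
    curl -LO "https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet"
    curl -LO "https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256"
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check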
	I0318 12:45:22.371248 1125718 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0318 12:45:22.383173 1125718 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0318 12:45:22.402507 1125718 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 12:45:22.425078 1125718 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0318 12:45:22.445650 1125718 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0318 12:45:22.450703 1125718 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 12:45:22.467786 1125718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:45:22.614349 1125718 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 12:45:22.638106 1125718 host.go:66] Checking if "ha-328109" exists ...
	I0318 12:45:22.638499 1125718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:45:22.638546 1125718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:45:22.657862 1125718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35309
	I0318 12:45:22.658327 1125718 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:45:22.658989 1125718 main.go:141] libmachine: Using API Version  1
	I0318 12:45:22.659017 1125718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:45:22.659440 1125718 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:45:22.659667 1125718 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:45:22.659850 1125718 start.go:316] joinCluster: &{Name:ha-328109 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-328109 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.253 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 12:45:22.660004 1125718 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0318 12:45:22.660033 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:45:22.663173 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:45:22.663690 1125718 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:45:22.663720 1125718 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:45:22.663838 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:45:22.663984 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:45:22.664180 1125718 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:45:22.664390 1125718 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109/id_rsa Username:docker}
	I0318 12:45:22.838804 1125718 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 12:45:22.838876 1125718 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gxp8r6.mdkqjq2zkbxrcymg --discovery-token-ca-cert-hash sha256:a1344f0f3711f58fc1cd7b9626e9cee7b8515c38b8838e5f1a0d7979c7202ddf --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-328109-m03 --control-plane --apiserver-advertise-address=192.168.39.241 --apiserver-bind-port=8443"
	I0318 12:45:50.731898 1125718 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token gxp8r6.mdkqjq2zkbxrcymg --discovery-token-ca-cert-hash sha256:a1344f0f3711f58fc1cd7b9626e9cee7b8515c38b8838e5f1a0d7979c7202ddf --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-328109-m03 --control-plane --apiserver-advertise-address=192.168.39.241 --apiserver-bind-port=8443": (27.892978911s)
	I0318 12:45:50.731948 1125718 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0318 12:45:51.211696 1125718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-328109-m03 minikube.k8s.io/updated_at=2024_03_18T12_45_51_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a minikube.k8s.io/name=ha-328109 minikube.k8s.io/primary=false
	I0318 12:45:51.347766 1125718 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-328109-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0318 12:45:51.629097 1125718 start.go:318] duration metric: took 28.9692463s to joinCluster
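The join above is the standard two-step flow: the existing control plane mints a join command with a non-expiring token, the new machine runs `kubeadm join ... --control-plane`, and minikube then labels the node and removes the control-plane NoSchedule taint so ordinary workloads can land on it. In outline (token and CA hash shown as placeholders):

    # On an existing control-plane node: print a reusable join command.
    kubeadm token create --print-join-command --ttl=0

    # On the new node (placeholders stand in for the token/hash emitted above):
    sudo kubeadm join control-plane.minikube.internal:8443 \
      --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
      --control-plane --apiserver-advertise-address=192.168.39.241 --apiserver-bind-port=8443

    # Afterwards, allow normal pods to schedule on the new control-plane node.
    kubectl taint nodes ha-328109-m03 node-role.kubernetes.io/control-plane:NoSchedule-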
	I0318 12:45:51.629188 1125718 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 12:45:51.630896 1125718 out.go:177] * Verifying Kubernetes components...
	I0318 12:45:51.629591 1125718 config.go:182] Loaded profile config "ha-328109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:45:51.632402 1125718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:45:51.867512 1125718 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 12:45:51.892575 1125718 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 12:45:51.892862 1125718 kapi.go:59] client config for ha-328109: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/client.crt", KeyFile:"/home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/client.key", CAFile:"/home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c57de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0318 12:45:51.892946 1125718 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.253:8443
	I0318 12:45:51.893360 1125718 node_ready.go:35] waiting up to 6m0s for node "ha-328109-m03" to be "Ready" ...
	I0318 12:45:51.893468 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:45:51.893480 1125718 round_trippers.go:469] Request Headers:
	I0318 12:45:51.893491 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:45:51.893501 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:45:51.898804 1125718 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:45:52.393567 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:45:52.393593 1125718 round_trippers.go:469] Request Headers:
	I0318 12:45:52.393603 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:45:52.393610 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:45:52.401730 1125718 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0318 12:45:52.894375 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:45:52.896002 1125718 round_trippers.go:469] Request Headers:
	I0318 12:45:52.896018 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:45:52.896025 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:45:52.900164 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:45:53.393753 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:45:53.393780 1125718 round_trippers.go:469] Request Headers:
	I0318 12:45:53.393792 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:45:53.393797 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:45:53.398510 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:45:53.893965 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:45:53.893987 1125718 round_trippers.go:469] Request Headers:
	I0318 12:45:53.893994 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:45:53.893998 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:45:53.898790 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:45:53.899477 1125718 node_ready.go:53] node "ha-328109-m03" has status "Ready":"False"
	I0318 12:45:54.393821 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:45:54.393847 1125718 round_trippers.go:469] Request Headers:
	I0318 12:45:54.393859 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:45:54.393863 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:45:54.398316 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:45:54.894017 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:45:54.894048 1125718 round_trippers.go:469] Request Headers:
	I0318 12:45:54.894060 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:45:54.894077 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:45:54.899430 1125718 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:45:55.394451 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:45:55.394483 1125718 round_trippers.go:469] Request Headers:
	I0318 12:45:55.394496 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:45:55.394503 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:45:55.398830 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:45:55.893821 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:45:55.893848 1125718 round_trippers.go:469] Request Headers:
	I0318 12:45:55.893857 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:45:55.893862 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:45:55.909745 1125718 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0318 12:45:55.911477 1125718 node_ready.go:53] node "ha-328109-m03" has status "Ready":"False"
	I0318 12:45:56.393669 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:45:56.393693 1125718 round_trippers.go:469] Request Headers:
	I0318 12:45:56.393704 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:45:56.393709 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:45:56.398173 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:45:56.894569 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:45:56.894591 1125718 round_trippers.go:469] Request Headers:
	I0318 12:45:56.894599 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:45:56.894602 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:45:56.900110 1125718 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0318 12:45:57.394317 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:45:57.394342 1125718 round_trippers.go:469] Request Headers:
	I0318 12:45:57.394351 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:45:57.394359 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:45:57.397886 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:45:57.894606 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:45:57.895060 1125718 round_trippers.go:469] Request Headers:
	I0318 12:45:57.895074 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:45:57.895079 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:45:57.898995 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:45:58.393654 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:45:58.393683 1125718 round_trippers.go:469] Request Headers:
	I0318 12:45:58.393696 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:45:58.393703 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:45:58.397159 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:45:58.397868 1125718 node_ready.go:53] node "ha-328109-m03" has status "Ready":"False"
	I0318 12:45:58.893883 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:45:58.893910 1125718 round_trippers.go:469] Request Headers:
	I0318 12:45:58.893922 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:45:58.893928 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:45:58.902293 1125718 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0318 12:45:59.394041 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:45:59.394062 1125718 round_trippers.go:469] Request Headers:
	I0318 12:45:59.394068 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:45:59.394071 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:45:59.398132 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:45:59.893979 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:45:59.894003 1125718 round_trippers.go:469] Request Headers:
	I0318 12:45:59.894014 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:45:59.894021 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:45:59.897460 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:00.394115 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:00.394138 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:00.394147 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:00.394151 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:00.398083 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:00.399038 1125718 node_ready.go:53] node "ha-328109-m03" has status "Ready":"False"
	I0318 12:46:00.894443 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:00.894467 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:00.894475 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:00.894479 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:00.899326 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:01.393631 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:01.393654 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:01.393663 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:01.393667 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:01.398456 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:01.893764 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:01.893787 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:01.893795 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:01.893799 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:01.897615 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:02.393625 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:02.393659 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:02.393671 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:02.393677 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:02.397586 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:02.893876 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:02.895638 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:02.895653 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:02.895658 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:02.899810 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:02.900842 1125718 node_ready.go:53] node "ha-328109-m03" has status "Ready":"False"
	I0318 12:46:03.393710 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:03.393732 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:03.393740 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:03.393746 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:03.397970 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:03.894001 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:03.894027 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:03.894035 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:03.894039 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:03.897793 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:04.393830 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:04.393853 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:04.393861 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:04.393865 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:04.398651 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:04.894253 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:04.894278 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:04.894289 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:04.894294 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:04.898997 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:05.393698 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:05.393720 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:05.393729 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:05.393733 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:05.397991 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:05.399145 1125718 node_ready.go:53] node "ha-328109-m03" has status "Ready":"False"
	I0318 12:46:05.894516 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:05.894602 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:05.894620 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:05.894628 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:05.899519 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:06.393596 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:06.393624 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:06.393632 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:06.393637 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:06.397271 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:06.894444 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:06.894481 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:06.894492 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:06.894498 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:06.897984 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:07.394041 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:07.394066 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:07.394078 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:07.394083 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:07.398311 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:07.399243 1125718 node_ready.go:53] node "ha-328109-m03" has status "Ready":"False"
	I0318 12:46:07.894009 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:07.895586 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:07.895603 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:07.895609 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:07.899914 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:08.394480 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:08.394513 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:08.394524 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:08.394530 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:08.398758 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:08.893705 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:08.893731 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:08.893739 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:08.893744 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:08.897546 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:09.393604 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:09.393628 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:09.393667 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:09.393675 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:09.397095 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:09.894001 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:09.894026 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:09.894034 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:09.894039 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:09.897285 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:09.897967 1125718 node_ready.go:53] node "ha-328109-m03" has status "Ready":"False"
	I0318 12:46:10.393902 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:10.393925 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:10.393933 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:10.393939 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:10.398063 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:10.894302 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:10.894330 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:10.894341 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:10.894348 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:10.899069 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:11.393653 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:11.393682 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:11.393692 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:11.393697 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:11.405400 1125718 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0318 12:46:11.894092 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:11.894120 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:11.894132 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:11.894139 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:11.898751 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:11.899585 1125718 node_ready.go:53] node "ha-328109-m03" has status "Ready":"False"
	I0318 12:46:12.394474 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:12.394505 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:12.394518 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:12.394522 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:12.398293 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:12.894092 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:12.895791 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:12.895807 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:12.895813 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:12.900385 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:13.394215 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:13.394242 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:13.394252 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:13.394257 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:13.398059 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:13.893948 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:13.893976 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:13.893988 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:13.893994 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:13.898230 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:14.394458 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:14.394486 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:14.394495 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:14.394499 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:14.398713 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:14.400120 1125718 node_ready.go:53] node "ha-328109-m03" has status "Ready":"False"
	I0318 12:46:14.894544 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:14.894573 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:14.894586 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:14.894596 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:14.898948 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:15.394456 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:15.394483 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:15.394491 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:15.394495 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:15.398152 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:15.894240 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:15.894265 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:15.894273 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:15.894279 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:15.898224 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:16.394267 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:16.394293 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:16.394305 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:16.394312 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:16.398395 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:16.893788 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:16.893811 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:16.893819 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:16.893823 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:16.897947 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:16.898594 1125718 node_ready.go:53] node "ha-328109-m03" has status "Ready":"False"
	I0318 12:46:17.393570 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:17.393596 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:17.393608 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:17.393614 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:17.398598 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:17.894271 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:17.895991 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:17.896007 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:17.896012 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:17.900447 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:18.394084 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:18.394116 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:18.394125 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:18.394131 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:18.398039 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:18.894059 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:18.894083 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:18.894091 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:18.894096 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:18.898700 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:18.899363 1125718 node_ready.go:53] node "ha-328109-m03" has status "Ready":"False"
	I0318 12:46:19.393728 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:19.393752 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:19.393761 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:19.393765 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:19.397222 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:19.894334 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:19.894357 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:19.894363 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:19.894368 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:19.898082 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:20.393856 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:20.393886 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:20.393897 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:20.393902 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:20.397943 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:20.894483 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:20.894507 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:20.894515 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:20.894520 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:20.898814 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:20.899704 1125718 node_ready.go:53] node "ha-328109-m03" has status "Ready":"False"
	I0318 12:46:21.394169 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:21.394202 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:21.394223 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:21.394230 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:21.397956 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:21.893651 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:21.893677 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:21.893694 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:21.893698 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:21.898640 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:22.394576 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:22.394601 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:22.394608 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:22.394613 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:22.398389 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:22.894094 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:22.896048 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:22.896066 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:22.896071 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:22.900319 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:22.901080 1125718 node_ready.go:53] node "ha-328109-m03" has status "Ready":"False"
	I0318 12:46:23.393859 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:23.393884 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:23.393891 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:23.393895 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:23.397675 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:23.893653 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:23.893677 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:23.893686 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:23.893691 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:23.897607 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:24.393577 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:24.393603 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:24.393613 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:24.393617 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:24.397739 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:24.894603 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:24.894630 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:24.894642 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:24.894648 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:24.902724 1125718 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0318 12:46:24.903590 1125718 node_ready.go:53] node "ha-328109-m03" has status "Ready":"False"
	I0318 12:46:25.393878 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:25.393900 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:25.393909 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:25.393915 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:25.397677 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:25.893587 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:25.893611 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:25.893620 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:25.893624 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:25.899888 1125718 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0318 12:46:26.394601 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:26.394631 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:26.394642 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:26.394646 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:26.398580 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:26.893547 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:26.893575 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:26.893588 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:26.893595 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:26.897036 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:27.394252 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:27.394276 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:27.394285 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:27.394290 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:27.397689 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:27.398510 1125718 node_ready.go:53] node "ha-328109-m03" has status "Ready":"False"
	I0318 12:46:27.894262 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:27.895985 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:27.896001 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:27.896006 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:27.900414 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:27.901027 1125718 node_ready.go:49] node "ha-328109-m03" has status "Ready":"True"
	I0318 12:46:27.901046 1125718 node_ready.go:38] duration metric: took 36.007666077s for node "ha-328109-m03" to be "Ready" ...
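
[editor's note] The node_ready loop above simply re-fetches the node roughly every 500ms and checks its Ready condition until it flips to True (about 36s here). A hedged sketch of that polling pattern with client-go is below; the function name is hypothetical and not minikube's helper.

    // Sketch: poll a node's NodeReady condition until true or timeout.
    package nodewait

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    func waitForNodeReady(cs kubernetes.Interface, name string, timeout time.Duration) error {
        return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // treat transient API errors as "not ready yet"
            }
            for _, c := range node.Status.Conditions {
                if c.Type == corev1.NodeReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }
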
	I0318 12:46:27.901056 1125718 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 12:46:27.901124 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods
	I0318 12:46:27.901136 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:27.901143 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:27.901146 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:27.912020 1125718 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0318 12:46:27.919703 1125718 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-c78nc" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:27.919796 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-c78nc
	I0318 12:46:27.919808 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:27.919815 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:27.919820 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:27.924016 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:27.924699 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109
	I0318 12:46:27.924718 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:27.924729 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:27.924737 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:27.928045 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:27.928631 1125718 pod_ready.go:92] pod "coredns-5dd5756b68-c78nc" in "kube-system" namespace has status "Ready":"True"
	I0318 12:46:27.928651 1125718 pod_ready.go:81] duration metric: took 8.921172ms for pod "coredns-5dd5756b68-c78nc" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:27.928665 1125718 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-p5xgj" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:27.928725 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-p5xgj
	I0318 12:46:27.928736 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:27.928747 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:27.928757 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:27.932967 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:27.933505 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109
	I0318 12:46:27.933518 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:27.933524 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:27.933528 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:27.936811 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:27.937297 1125718 pod_ready.go:92] pod "coredns-5dd5756b68-p5xgj" in "kube-system" namespace has status "Ready":"True"
	I0318 12:46:27.937316 1125718 pod_ready.go:81] duration metric: took 8.643983ms for pod "coredns-5dd5756b68-p5xgj" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:27.937329 1125718 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-328109" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:27.937387 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/etcd-ha-328109
	I0318 12:46:27.937398 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:27.937408 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:27.937415 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:27.940164 1125718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:46:27.940975 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109
	I0318 12:46:27.940991 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:27.940998 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:27.941002 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:27.943543 1125718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:46:27.944096 1125718 pod_ready.go:92] pod "etcd-ha-328109" in "kube-system" namespace has status "Ready":"True"
	I0318 12:46:27.944112 1125718 pod_ready.go:81] duration metric: took 6.777315ms for pod "etcd-ha-328109" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:27.944120 1125718 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-328109-m02" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:27.944174 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/etcd-ha-328109-m02
	I0318 12:46:27.944184 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:27.944190 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:27.944194 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:27.946826 1125718 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0318 12:46:27.947314 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:46:27.947331 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:27.947340 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:27.947346 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:27.951688 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:27.952268 1125718 pod_ready.go:92] pod "etcd-ha-328109-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 12:46:27.952283 1125718 pod_ready.go:81] duration metric: took 8.158107ms for pod "etcd-ha-328109-m02" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:27.952290 1125718 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-328109-m03" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:28.094895 1125718 request.go:629] Waited for 142.51346ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/etcd-ha-328109-m03
	I0318 12:46:28.094962 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/etcd-ha-328109-m03
	I0318 12:46:28.094967 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:28.094975 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:28.094979 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:28.098905 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:28.294764 1125718 request.go:629] Waited for 195.314043ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:28.294825 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:28.294831 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:28.294839 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:28.294843 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:28.299332 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:28.299969 1125718 pod_ready.go:92] pod "etcd-ha-328109-m03" in "kube-system" namespace has status "Ready":"True"
	I0318 12:46:28.299988 1125718 pod_ready.go:81] duration metric: took 347.692389ms for pod "etcd-ha-328109-m03" in "kube-system" namespace to be "Ready" ...
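
[editor's note] The "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's client-side rate limiter. Because the rest.Config dump near the top of this section leaves QPS and Burst at 0, client-go falls back to its defaults (historically 5 QPS with a burst of 10), so bursts of GETs like the pod/node checks here get queued briefly. A sketch of raising those limits is below; the values are examples, not what minikube configures.

    // Sketch: loosen client-go's client-side rate limiting.
    package main

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/flowcontrol"
    )

    func newFasterClient(kubeconfig string) (*kubernetes.Clientset, error) {
        config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            return nil, err
        }
        config.QPS = 50
        config.Burst = 100
        // Equivalently, supply an explicit token-bucket limiter.
        config.RateLimiter = flowcontrol.NewTokenBucketRateLimiter(config.QPS, config.Burst)
        return kubernetes.NewForConfig(config)
    }
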
	I0318 12:46:28.300005 1125718 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-328109" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:28.495202 1125718 request.go:629] Waited for 195.124331ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-328109
	I0318 12:46:28.495289 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-328109
	I0318 12:46:28.495301 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:28.495311 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:28.495321 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:28.499972 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:28.695142 1125718 request.go:629] Waited for 194.350739ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/nodes/ha-328109
	I0318 12:46:28.695234 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109
	I0318 12:46:28.695243 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:28.695251 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:28.695256 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:28.699376 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:28.700086 1125718 pod_ready.go:92] pod "kube-apiserver-ha-328109" in "kube-system" namespace has status "Ready":"True"
	I0318 12:46:28.700109 1125718 pod_ready.go:81] duration metric: took 400.092781ms for pod "kube-apiserver-ha-328109" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:28.700120 1125718 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-328109-m02" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:28.895212 1125718 request.go:629] Waited for 195.001042ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-328109-m02
	I0318 12:46:28.895298 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-328109-m02
	I0318 12:46:28.895315 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:28.895331 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:28.895337 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:28.899477 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:29.094775 1125718 request.go:629] Waited for 194.368478ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:46:29.094834 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:46:29.094839 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:29.094847 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:29.094851 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:29.098849 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:29.099342 1125718 pod_ready.go:92] pod "kube-apiserver-ha-328109-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 12:46:29.099363 1125718 pod_ready.go:81] duration metric: took 399.232111ms for pod "kube-apiserver-ha-328109-m02" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:29.099377 1125718 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-328109-m03" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:29.294434 1125718 request.go:629] Waited for 194.941557ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-328109-m03
	I0318 12:46:29.294512 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-328109-m03
	I0318 12:46:29.294520 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:29.294529 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:29.294534 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:29.298462 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:29.494811 1125718 request.go:629] Waited for 195.366462ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:29.494877 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:29.494884 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:29.494895 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:29.494901 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:29.498913 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:29.499834 1125718 pod_ready.go:92] pod "kube-apiserver-ha-328109-m03" in "kube-system" namespace has status "Ready":"True"
	I0318 12:46:29.499862 1125718 pod_ready.go:81] duration metric: took 400.476064ms for pod "kube-apiserver-ha-328109-m03" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:29.499875 1125718 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-328109" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:29.695031 1125718 request.go:629] Waited for 195.062315ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-328109
	I0318 12:46:29.695124 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-328109
	I0318 12:46:29.695135 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:29.695146 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:29.695154 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:29.699023 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:29.895315 1125718 request.go:629] Waited for 195.40424ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/nodes/ha-328109
	I0318 12:46:29.895382 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109
	I0318 12:46:29.895388 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:29.895396 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:29.895400 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:29.899461 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:29.900374 1125718 pod_ready.go:92] pod "kube-controller-manager-ha-328109" in "kube-system" namespace has status "Ready":"True"
	I0318 12:46:29.900399 1125718 pod_ready.go:81] duration metric: took 400.516458ms for pod "kube-controller-manager-ha-328109" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:29.900409 1125718 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-328109-m02" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:30.094771 1125718 request.go:629] Waited for 194.261987ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-328109-m02
	I0318 12:46:30.094857 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-328109-m02
	I0318 12:46:30.094868 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:30.094879 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:30.094888 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:30.099027 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:30.295235 1125718 request.go:629] Waited for 195.36728ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:46:30.295307 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:46:30.295316 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:30.295332 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:30.295341 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:30.299497 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:30.300286 1125718 pod_ready.go:92] pod "kube-controller-manager-ha-328109-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 12:46:30.300306 1125718 pod_ready.go:81] duration metric: took 399.891002ms for pod "kube-controller-manager-ha-328109-m02" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:30.300317 1125718 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-328109-m03" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:30.495111 1125718 request.go:629] Waited for 194.708476ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-328109-m03
	I0318 12:46:30.495179 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-328109-m03
	I0318 12:46:30.495184 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:30.495192 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:30.495196 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:30.499703 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:30.694677 1125718 request.go:629] Waited for 194.395787ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:30.694767 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:30.694777 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:30.694785 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:30.694792 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:30.698494 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:30.699143 1125718 pod_ready.go:92] pod "kube-controller-manager-ha-328109-m03" in "kube-system" namespace has status "Ready":"True"
	I0318 12:46:30.699161 1125718 pod_ready.go:81] duration metric: took 398.835754ms for pod "kube-controller-manager-ha-328109-m03" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:30.699172 1125718 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-7zgrx" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:30.894301 1125718 request.go:629] Waited for 195.051197ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7zgrx
	I0318 12:46:30.894396 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-proxy-7zgrx
	I0318 12:46:30.894404 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:30.894416 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:30.894429 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:30.898413 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:31.094361 1125718 request.go:629] Waited for 195.290418ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:46:31.094447 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:46:31.094458 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:31.094493 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:31.094506 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:31.098720 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:31.099202 1125718 pod_ready.go:92] pod "kube-proxy-7zgrx" in "kube-system" namespace has status "Ready":"True"
	I0318 12:46:31.099224 1125718 pod_ready.go:81] duration metric: took 400.046238ms for pod "kube-proxy-7zgrx" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:31.099234 1125718 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-dhz88" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:31.295307 1125718 request.go:629] Waited for 195.990215ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dhz88
	I0318 12:46:31.295389 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dhz88
	I0318 12:46:31.295397 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:31.295405 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:31.295412 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:31.299881 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:31.495320 1125718 request.go:629] Waited for 194.713319ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/nodes/ha-328109
	I0318 12:46:31.495409 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109
	I0318 12:46:31.495420 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:31.495432 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:31.495441 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:31.499776 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:31.500391 1125718 pod_ready.go:92] pod "kube-proxy-dhz88" in "kube-system" namespace has status "Ready":"True"
	I0318 12:46:31.500416 1125718 pod_ready.go:81] duration metric: took 401.173007ms for pod "kube-proxy-dhz88" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:31.500430 1125718 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zn8dk" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:31.694545 1125718 request.go:629] Waited for 194.035364ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zn8dk
	I0318 12:46:31.694641 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-proxy-zn8dk
	I0318 12:46:31.694653 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:31.694666 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:31.694684 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:31.698181 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:31.894615 1125718 request.go:629] Waited for 195.398032ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:31.894681 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:31.894686 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:31.894693 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:31.894699 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:31.898750 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:31.899327 1125718 pod_ready.go:92] pod "kube-proxy-zn8dk" in "kube-system" namespace has status "Ready":"True"
	I0318 12:46:31.899348 1125718 pod_ready.go:81] duration metric: took 398.910077ms for pod "kube-proxy-zn8dk" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:31.899357 1125718 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-328109" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:32.094508 1125718 request.go:629] Waited for 195.052309ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-328109
	I0318 12:46:32.094581 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-328109
	I0318 12:46:32.094587 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:32.094599 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:32.094609 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:32.099402 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:32.294478 1125718 request.go:629] Waited for 194.277594ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/nodes/ha-328109
	I0318 12:46:32.294569 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109
	I0318 12:46:32.294576 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:32.294584 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:32.294588 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:32.298282 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:32.298724 1125718 pod_ready.go:92] pod "kube-scheduler-ha-328109" in "kube-system" namespace has status "Ready":"True"
	I0318 12:46:32.298742 1125718 pod_ready.go:81] duration metric: took 399.374733ms for pod "kube-scheduler-ha-328109" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:32.298753 1125718 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-328109-m02" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:32.494784 1125718 request.go:629] Waited for 195.934465ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-328109-m02
	I0318 12:46:32.494886 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-328109-m02
	I0318 12:46:32.494897 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:32.494911 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:32.494923 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:32.498685 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:32.694556 1125718 request.go:629] Waited for 195.083041ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:46:32.694630 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m02
	I0318 12:46:32.694638 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:32.694650 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:32.694666 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:32.698773 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:32.699314 1125718 pod_ready.go:92] pod "kube-scheduler-ha-328109-m02" in "kube-system" namespace has status "Ready":"True"
	I0318 12:46:32.699335 1125718 pod_ready.go:81] duration metric: took 400.576206ms for pod "kube-scheduler-ha-328109-m02" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:32.699345 1125718 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-328109-m03" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:32.894300 1125718 request.go:629] Waited for 194.866034ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-328109-m03
	I0318 12:46:32.896441 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-328109-m03
	I0318 12:46:32.896457 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:32.896468 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:32.896477 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:32.900486 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:33.094997 1125718 request.go:629] Waited for 193.426779ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:33.095085 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes/ha-328109-m03
	I0318 12:46:33.095104 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:33.095119 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:33.095140 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:33.099461 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:33.100162 1125718 pod_ready.go:92] pod "kube-scheduler-ha-328109-m03" in "kube-system" namespace has status "Ready":"True"
	I0318 12:46:33.100187 1125718 pod_ready.go:81] duration metric: took 400.831673ms for pod "kube-scheduler-ha-328109-m03" in "kube-system" namespace to be "Ready" ...
	I0318 12:46:33.100204 1125718 pod_ready.go:38] duration metric: took 5.199137291s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 12:46:33.100234 1125718 api_server.go:52] waiting for apiserver process to appear ...
	I0318 12:46:33.100304 1125718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 12:46:33.123259 1125718 api_server.go:72] duration metric: took 41.494021932s to wait for apiserver process to appear ...
	I0318 12:46:33.123287 1125718 api_server.go:88] waiting for apiserver healthz status ...
	I0318 12:46:33.123313 1125718 api_server.go:253] Checking apiserver healthz at https://192.168.39.253:8443/healthz ...
	I0318 12:46:33.129660 1125718 api_server.go:279] https://192.168.39.253:8443/healthz returned 200:
	ok
	I0318 12:46:33.129740 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/version
	I0318 12:46:33.129750 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:33.129761 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:33.129769 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:33.137451 1125718 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 12:46:33.137552 1125718 api_server.go:141] control plane version: v1.28.4
	I0318 12:46:33.137575 1125718 api_server.go:131] duration metric: took 14.279559ms to wait for apiserver health ...
	I0318 12:46:33.137586 1125718 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 12:46:33.294978 1125718 request.go:629] Waited for 157.313775ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods
	I0318 12:46:33.295062 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods
	I0318 12:46:33.295068 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:33.295083 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:33.295094 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:33.302686 1125718 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0318 12:46:33.309075 1125718 system_pods.go:59] 24 kube-system pods found
	I0318 12:46:33.309104 1125718 system_pods.go:61] "coredns-5dd5756b68-c78nc" [7c1159dc-6545-41a6-bb4a-75fdab519c9e] Running
	I0318 12:46:33.309109 1125718 system_pods.go:61] "coredns-5dd5756b68-p5xgj" [9a865f86-96cf-4687-9283-d2ebe5616d1a] Running
	I0318 12:46:33.309113 1125718 system_pods.go:61] "etcd-ha-328109" [46530523-a048-4fff-897d-1a59630b5533] Running
	I0318 12:46:33.309116 1125718 system_pods.go:61] "etcd-ha-328109-m02" [0ed8ba4d-7da4-4c6c-b545-5e8642214659] Running
	I0318 12:46:33.309120 1125718 system_pods.go:61] "etcd-ha-328109-m03" [56631b93-b509-45de-9ee0-d1b9676f52fe] Running
	I0318 12:46:33.309123 1125718 system_pods.go:61] "kindnet-lc74t" [5fe4e41e-4ddd-4e39-b1e2-746a32489418] Running
	I0318 12:46:33.309125 1125718 system_pods.go:61] "kindnet-t2pkv" [d848dd56-4ea1-472a-b378-21e36c834f81] Running
	I0318 12:46:33.309128 1125718 system_pods.go:61] "kindnet-vnv5b" [fc2583b6-a5b3-4f53-bf54-6cc7611fc2a6] Running
	I0318 12:46:33.309135 1125718 system_pods.go:61] "kube-apiserver-ha-328109" [47b1b8fb-21f6-43d7-a607-4406dfec10b7] Running
	I0318 12:46:33.309139 1125718 system_pods.go:61] "kube-apiserver-ha-328109-m02" [fcd48f5d-2278-49f3-b4f0-0cad9ae74dc7] Running
	I0318 12:46:33.309144 1125718 system_pods.go:61] "kube-apiserver-ha-328109-m03" [ad5b3068-7d65-4897-a31e-b0cb094d2678] Running
	I0318 12:46:33.309149 1125718 system_pods.go:61] "kube-controller-manager-ha-328109" [ffef70fe-841f-41c7-a61b-bb205ce2c071] Running
	I0318 12:46:33.309156 1125718 system_pods.go:61] "kube-controller-manager-ha-328109-m02" [a5ecf731-7599-44e9-b20d-924bde2de123] Running
	I0318 12:46:33.309169 1125718 system_pods.go:61] "kube-controller-manager-ha-328109-m03" [338747b6-dae1-4cfa-9e28-1892c2d39b86] Running
	I0318 12:46:33.309174 1125718 system_pods.go:61] "kube-proxy-7zgrx" [6244fa40-af4d-480b-9256-db89d78b1d74] Running
	I0318 12:46:33.309178 1125718 system_pods.go:61] "kube-proxy-dhz88" [afb0afad-2b88-4abb-9039-aaf9c64ad920] Running
	I0318 12:46:33.309183 1125718 system_pods.go:61] "kube-proxy-zn8dk" [16d8de0d-3270-4989-b77d-c15f6206b4d4] Running
	I0318 12:46:33.309192 1125718 system_pods.go:61] "kube-scheduler-ha-328109" [a32fb0b4-2621-47dd-bb05-abb2e4cf928e] Running
	I0318 12:46:33.309197 1125718 system_pods.go:61] "kube-scheduler-ha-328109-m02" [14246dc3-5f5f-4d43-954c-5959db738742] Running
	I0318 12:46:33.309203 1125718 system_pods.go:61] "kube-scheduler-ha-328109-m03" [de782d6a-c138-4f4e-b52b-e06ca1eb0735] Running
	I0318 12:46:33.309206 1125718 system_pods.go:61] "kube-vip-ha-328109" [40c45da5-33e0-454b-8f4c-eca1d1ec3362] Running
	I0318 12:46:33.309209 1125718 system_pods.go:61] "kube-vip-ha-328109-m02" [0c0dc71f-79d7-48f0-8a4a-4480521e5705] Running
	I0318 12:46:33.309212 1125718 system_pods.go:61] "kube-vip-ha-328109-m03" [98e75a0b-1e8b-481e-8eea-34b26ed1d38c] Running
	I0318 12:46:33.309216 1125718 system_pods.go:61] "storage-provisioner" [90ce7ae6-4ac4-4c14-b2df-1a182f4d8086] Running
	I0318 12:46:33.309224 1125718 system_pods.go:74] duration metric: took 171.628679ms to wait for pod list to return data ...
	I0318 12:46:33.309234 1125718 default_sa.go:34] waiting for default service account to be created ...
	I0318 12:46:33.494702 1125718 request.go:629] Waited for 185.332478ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/namespaces/default/serviceaccounts
	I0318 12:46:33.494788 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/default/serviceaccounts
	I0318 12:46:33.494796 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:33.494806 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:33.494817 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:33.498593 1125718 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0318 12:46:33.498723 1125718 default_sa.go:45] found service account: "default"
	I0318 12:46:33.498741 1125718 default_sa.go:55] duration metric: took 189.497941ms for default service account to be created ...
	I0318 12:46:33.498750 1125718 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 12:46:33.695198 1125718 request.go:629] Waited for 196.376373ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods
	I0318 12:46:33.695267 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/namespaces/kube-system/pods
	I0318 12:46:33.695272 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:33.695280 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:33.695286 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:33.703736 1125718 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0318 12:46:33.710666 1125718 system_pods.go:86] 24 kube-system pods found
	I0318 12:46:33.710697 1125718 system_pods.go:89] "coredns-5dd5756b68-c78nc" [7c1159dc-6545-41a6-bb4a-75fdab519c9e] Running
	I0318 12:46:33.710706 1125718 system_pods.go:89] "coredns-5dd5756b68-p5xgj" [9a865f86-96cf-4687-9283-d2ebe5616d1a] Running
	I0318 12:46:33.710710 1125718 system_pods.go:89] "etcd-ha-328109" [46530523-a048-4fff-897d-1a59630b5533] Running
	I0318 12:46:33.710714 1125718 system_pods.go:89] "etcd-ha-328109-m02" [0ed8ba4d-7da4-4c6c-b545-5e8642214659] Running
	I0318 12:46:33.710718 1125718 system_pods.go:89] "etcd-ha-328109-m03" [56631b93-b509-45de-9ee0-d1b9676f52fe] Running
	I0318 12:46:33.710722 1125718 system_pods.go:89] "kindnet-lc74t" [5fe4e41e-4ddd-4e39-b1e2-746a32489418] Running
	I0318 12:46:33.710726 1125718 system_pods.go:89] "kindnet-t2pkv" [d848dd56-4ea1-472a-b378-21e36c834f81] Running
	I0318 12:46:33.710730 1125718 system_pods.go:89] "kindnet-vnv5b" [fc2583b6-a5b3-4f53-bf54-6cc7611fc2a6] Running
	I0318 12:46:33.710734 1125718 system_pods.go:89] "kube-apiserver-ha-328109" [47b1b8fb-21f6-43d7-a607-4406dfec10b7] Running
	I0318 12:46:33.710739 1125718 system_pods.go:89] "kube-apiserver-ha-328109-m02" [fcd48f5d-2278-49f3-b4f0-0cad9ae74dc7] Running
	I0318 12:46:33.710745 1125718 system_pods.go:89] "kube-apiserver-ha-328109-m03" [ad5b3068-7d65-4897-a31e-b0cb094d2678] Running
	I0318 12:46:33.710751 1125718 system_pods.go:89] "kube-controller-manager-ha-328109" [ffef70fe-841f-41c7-a61b-bb205ce2c071] Running
	I0318 12:46:33.710758 1125718 system_pods.go:89] "kube-controller-manager-ha-328109-m02" [a5ecf731-7599-44e9-b20d-924bde2de123] Running
	I0318 12:46:33.710772 1125718 system_pods.go:89] "kube-controller-manager-ha-328109-m03" [338747b6-dae1-4cfa-9e28-1892c2d39b86] Running
	I0318 12:46:33.710778 1125718 system_pods.go:89] "kube-proxy-7zgrx" [6244fa40-af4d-480b-9256-db89d78b1d74] Running
	I0318 12:46:33.710787 1125718 system_pods.go:89] "kube-proxy-dhz88" [afb0afad-2b88-4abb-9039-aaf9c64ad920] Running
	I0318 12:46:33.710791 1125718 system_pods.go:89] "kube-proxy-zn8dk" [16d8de0d-3270-4989-b77d-c15f6206b4d4] Running
	I0318 12:46:33.710795 1125718 system_pods.go:89] "kube-scheduler-ha-328109" [a32fb0b4-2621-47dd-bb05-abb2e4cf928e] Running
	I0318 12:46:33.710799 1125718 system_pods.go:89] "kube-scheduler-ha-328109-m02" [14246dc3-5f5f-4d43-954c-5959db738742] Running
	I0318 12:46:33.710803 1125718 system_pods.go:89] "kube-scheduler-ha-328109-m03" [de782d6a-c138-4f4e-b52b-e06ca1eb0735] Running
	I0318 12:46:33.710808 1125718 system_pods.go:89] "kube-vip-ha-328109" [40c45da5-33e0-454b-8f4c-eca1d1ec3362] Running
	I0318 12:46:33.710814 1125718 system_pods.go:89] "kube-vip-ha-328109-m02" [0c0dc71f-79d7-48f0-8a4a-4480521e5705] Running
	I0318 12:46:33.710818 1125718 system_pods.go:89] "kube-vip-ha-328109-m03" [98e75a0b-1e8b-481e-8eea-34b26ed1d38c] Running
	I0318 12:46:33.710821 1125718 system_pods.go:89] "storage-provisioner" [90ce7ae6-4ac4-4c14-b2df-1a182f4d8086] Running
	I0318 12:46:33.710828 1125718 system_pods.go:126] duration metric: took 212.070029ms to wait for k8s-apps to be running ...
	I0318 12:46:33.710838 1125718 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 12:46:33.710895 1125718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 12:46:33.735959 1125718 system_svc.go:56] duration metric: took 25.107366ms WaitForService to wait for kubelet
	I0318 12:46:33.735996 1125718 kubeadm.go:576] duration metric: took 42.106764853s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 12:46:33.736026 1125718 node_conditions.go:102] verifying NodePressure condition ...
	I0318 12:46:33.894369 1125718 request.go:629] Waited for 158.246653ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.253:8443/api/v1/nodes
	I0318 12:46:33.894428 1125718 round_trippers.go:463] GET https://192.168.39.253:8443/api/v1/nodes
	I0318 12:46:33.894433 1125718 round_trippers.go:469] Request Headers:
	I0318 12:46:33.894442 1125718 round_trippers.go:473]     Accept: application/json, */*
	I0318 12:46:33.894446 1125718 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0318 12:46:33.898524 1125718 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0318 12:46:33.900095 1125718 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 12:46:33.900119 1125718 node_conditions.go:123] node cpu capacity is 2
	I0318 12:46:33.900134 1125718 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 12:46:33.900140 1125718 node_conditions.go:123] node cpu capacity is 2
	I0318 12:46:33.900146 1125718 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 12:46:33.900155 1125718 node_conditions.go:123] node cpu capacity is 2
	I0318 12:46:33.900161 1125718 node_conditions.go:105] duration metric: took 164.129313ms to run NodePressure ...
	I0318 12:46:33.900183 1125718 start.go:240] waiting for startup goroutines ...
	I0318 12:46:33.900240 1125718 start.go:254] writing updated cluster config ...
	I0318 12:46:33.900608 1125718 ssh_runner.go:195] Run: rm -f paused
	I0318 12:46:33.955240 1125718 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 12:46:33.957847 1125718 out.go:177] * Done! kubectl is now configured to use "ha-328109" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Mar 18 12:51:04 ha-328109 crio[681]: time="2024-03-18 12:51:04.460331049Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710766264460308631,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b831b947-f24e-47fd-b4bf-db3740cd94ac name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 12:51:04 ha-328109 crio[681]: time="2024-03-18 12:51:04.460933827Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d576cbe4-0db0-4b71-b585-82be38680b49 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:51:04 ha-328109 crio[681]: time="2024-03-18 12:51:04.461020199Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d576cbe4-0db0-4b71-b585-82be38680b49 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:51:04 ha-328109 crio[681]: time="2024-03-18 12:51:04.461426075Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c5b3318798546b55a1e7fe3618fe7848b8cb4108312aa2f5354c7dbdc9103e72,PodSandboxId:10b35c5d18ac59942090a6917bedf01b1f31744cd5f0a3d39949835bf6108d5a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710765998607402140,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-fz4kl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5a0215bb-df62-44b9-9d60-d45778880b8b,},Annotations:map[string]string{io.kubernetes.container.hash: 25c17d37,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b630b0fc05d4dd89718593f42880e41e071014b4d0f87791cba4fbf8cbe8785,PodSandboxId:2f84d6cd36a0e19e1f074479696d192558cade4f5b4267d45bfa78281643ee69,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710765878638455375,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9ffc89cd42ea8da4e6070b43e0ace35,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:742842736e1b52735c8a18b3d61ed7ee1d6157f2ca03ec317995f36597c45ac6,PodSandboxId:d0fc4bb142f1e67adc1acb0fd05ed7615c6e71bf4d9c199240d1b14c7e506c6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710765818180996776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90ce7ae6-4ac4-4c14-b2df-1a182f4d8086,},Annotations:map[string]string{io.kubernetes.container.hash: ed6ee57,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82a8d2ac6a60c0d04e48a38416de7feb33d590cfcd74d28da2317aa1a5781135,PodSandboxId:b487ae421169c8afbdd3c57cd6781dfee8b050a5ec9476b5eb7d8d46c81511c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710765818122582469,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-p5xgj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a865f86-96cf-4687-9283-d2ebe5616d1a,},Annotations:map[string]string{io.kubernetes.container.hash: b948acd7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\
"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c5cd4a724230c91f476a1bb5326701801eff1b70dc4db0510f092d89ea1562,PodSandboxId:16503713d19863c7d11d4a566e3591316bf9bb87017c1247be871b73cd241150,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710765818091864321,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c78nc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c1159dc-6545-41a6-bb4a-75fdab519c9e,},Annotations
:map[string]string{io.kubernetes.container.hash: 5111e8b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f41509d172d09cc4eecee3746dfb8ee1d320fc1c3797ddb1d709f61a48d8c377,PodSandboxId:de3686de3774df02e905a63c9a2f6c340478fd958e65a20db5acf3d838e7c03d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710765816486
274205,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnv5b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc2583b6-a5b3-4f53-bf54-6cc7611fc2a6,},Annotations:map[string]string{io.kubernetes.container.hash: 9aa5dbe1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8d915a384e6a3c259a15968303b0ddc686a9ced49722152813fc101b3c78cc6,PodSandboxId:35275a602be1c60babb8ca88eca935f3264c955bbf0347e589ea368f3036d635,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710765812830557764,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dhz88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afb0afad-2b88-4abb-9039-aaf9c64ad920,},Annotations:map[string]string{io.kubernetes.container.hash: 34178776,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d088b41ebc7b89cfc02aea70859e94e5a45b788a9c73a939733131ae29c4462,PodSandboxId:2f84d6cd36a0e19e1f074479696d192558cade4f5b4267d45bfa78281643ee69,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710765794897477578,Labels:map[string]string{io.kub
ernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9ffc89cd42ea8da4e6070b43e0ace35,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55e393cf77a1b472d984125ae3bd870d3fed9dca4eeefc346bda04ae88654205,PodSandboxId:8231d33571b5e6a87638a5647fcc9e70ced44830421377dda3555afca480b302,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710765791394253878,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.
pod.name: etcd-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e31f7b77f2cd8547e7aa12e86f29a80,},Annotations:map[string]string{io.kubernetes.container.hash: a6edf2fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de552ed42d49524bbca97633e73d6ac4e5301a813a012290635def375a78dcd6,PodSandboxId:8cfa0459c6e2ae66756a8424cb981cdb5680680fc5907eba1b8d83cfdd1a7280,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710765791364211319,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-328
109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90e740be10e7ccb198e1e310b9749e68,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a10929bb9737267586a458e8f8aac60622ae3a299b6b542776e59e2b12e4ffef,PodSandboxId:0d6a5a565490ec7bc679e6f77a039f680f53470f17b0cc60629e1ea627d8141e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710765791361646729,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-
ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f004a20401b95f693a90cc8d0b7e8acc,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e2150d8010e2a1399f1df83c9dba81c77d606e55e0c21b18da231e82e01413a,PodSandboxId:a0c6f1dda955fa31cf1b04ce5ce4401c9c2bfef118b3bbaea519a53ffc2f3257,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710765791299905420,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-328109,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 0919befc6ed870de46dfd820b38f0ac2,},Annotations:map[string]string{io.kubernetes.container.hash: 110d18ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d576cbe4-0db0-4b71-b585-82be38680b49 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:51:04 ha-328109 crio[681]: time="2024-03-18 12:51:04.515375374Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5f50ba6d-9c6e-4d94-ba6f-85e38006b1ae name=/runtime.v1.RuntimeService/Version
	Mar 18 12:51:04 ha-328109 crio[681]: time="2024-03-18 12:51:04.515450747Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5f50ba6d-9c6e-4d94-ba6f-85e38006b1ae name=/runtime.v1.RuntimeService/Version
	Mar 18 12:51:04 ha-328109 crio[681]: time="2024-03-18 12:51:04.517633230Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b73d352f-b2e8-41d1-adf9-2ffdf6683c90 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 12:51:04 ha-328109 crio[681]: time="2024-03-18 12:51:04.518385848Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710766264518357766,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b73d352f-b2e8-41d1-adf9-2ffdf6683c90 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 12:51:04 ha-328109 crio[681]: time="2024-03-18 12:51:04.519262511Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b1dc7e1f-be71-47da-9f6b-4de6c2253408 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:51:04 ha-328109 crio[681]: time="2024-03-18 12:51:04.519370200Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b1dc7e1f-be71-47da-9f6b-4de6c2253408 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:51:04 ha-328109 crio[681]: time="2024-03-18 12:51:04.519611459Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c5b3318798546b55a1e7fe3618fe7848b8cb4108312aa2f5354c7dbdc9103e72,PodSandboxId:10b35c5d18ac59942090a6917bedf01b1f31744cd5f0a3d39949835bf6108d5a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710765998607402140,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-fz4kl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5a0215bb-df62-44b9-9d60-d45778880b8b,},Annotations:map[string]string{io.kubernetes.container.hash: 25c17d37,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b630b0fc05d4dd89718593f42880e41e071014b4d0f87791cba4fbf8cbe8785,PodSandboxId:2f84d6cd36a0e19e1f074479696d192558cade4f5b4267d45bfa78281643ee69,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710765878638455375,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9ffc89cd42ea8da4e6070b43e0ace35,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:742842736e1b52735c8a18b3d61ed7ee1d6157f2ca03ec317995f36597c45ac6,PodSandboxId:d0fc4bb142f1e67adc1acb0fd05ed7615c6e71bf4d9c199240d1b14c7e506c6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710765818180996776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90ce7ae6-4ac4-4c14-b2df-1a182f4d8086,},Annotations:map[string]string{io.kubernetes.container.hash: ed6ee57,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82a8d2ac6a60c0d04e48a38416de7feb33d590cfcd74d28da2317aa1a5781135,PodSandboxId:b487ae421169c8afbdd3c57cd6781dfee8b050a5ec9476b5eb7d8d46c81511c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710765818122582469,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-p5xgj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a865f86-96cf-4687-9283-d2ebe5616d1a,},Annotations:map[string]string{io.kubernetes.container.hash: b948acd7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\
"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c5cd4a724230c91f476a1bb5326701801eff1b70dc4db0510f092d89ea1562,PodSandboxId:16503713d19863c7d11d4a566e3591316bf9bb87017c1247be871b73cd241150,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710765818091864321,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c78nc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c1159dc-6545-41a6-bb4a-75fdab519c9e,},Annotations
:map[string]string{io.kubernetes.container.hash: 5111e8b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f41509d172d09cc4eecee3746dfb8ee1d320fc1c3797ddb1d709f61a48d8c377,PodSandboxId:de3686de3774df02e905a63c9a2f6c340478fd958e65a20db5acf3d838e7c03d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710765816486
274205,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnv5b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc2583b6-a5b3-4f53-bf54-6cc7611fc2a6,},Annotations:map[string]string{io.kubernetes.container.hash: 9aa5dbe1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8d915a384e6a3c259a15968303b0ddc686a9ced49722152813fc101b3c78cc6,PodSandboxId:35275a602be1c60babb8ca88eca935f3264c955bbf0347e589ea368f3036d635,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710765812830557764,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dhz88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afb0afad-2b88-4abb-9039-aaf9c64ad920,},Annotations:map[string]string{io.kubernetes.container.hash: 34178776,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d088b41ebc7b89cfc02aea70859e94e5a45b788a9c73a939733131ae29c4462,PodSandboxId:2f84d6cd36a0e19e1f074479696d192558cade4f5b4267d45bfa78281643ee69,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710765794897477578,Labels:map[string]string{io.kub
ernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9ffc89cd42ea8da4e6070b43e0ace35,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55e393cf77a1b472d984125ae3bd870d3fed9dca4eeefc346bda04ae88654205,PodSandboxId:8231d33571b5e6a87638a5647fcc9e70ced44830421377dda3555afca480b302,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710765791394253878,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.
pod.name: etcd-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e31f7b77f2cd8547e7aa12e86f29a80,},Annotations:map[string]string{io.kubernetes.container.hash: a6edf2fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de552ed42d49524bbca97633e73d6ac4e5301a813a012290635def375a78dcd6,PodSandboxId:8cfa0459c6e2ae66756a8424cb981cdb5680680fc5907eba1b8d83cfdd1a7280,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710765791364211319,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-328
109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90e740be10e7ccb198e1e310b9749e68,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a10929bb9737267586a458e8f8aac60622ae3a299b6b542776e59e2b12e4ffef,PodSandboxId:0d6a5a565490ec7bc679e6f77a039f680f53470f17b0cc60629e1ea627d8141e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710765791361646729,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-
ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f004a20401b95f693a90cc8d0b7e8acc,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e2150d8010e2a1399f1df83c9dba81c77d606e55e0c21b18da231e82e01413a,PodSandboxId:a0c6f1dda955fa31cf1b04ce5ce4401c9c2bfef118b3bbaea519a53ffc2f3257,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710765791299905420,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-328109,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 0919befc6ed870de46dfd820b38f0ac2,},Annotations:map[string]string{io.kubernetes.container.hash: 110d18ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b1dc7e1f-be71-47da-9f6b-4de6c2253408 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:51:04 ha-328109 crio[681]: time="2024-03-18 12:51:04.569221888Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f43b4c09-02d8-4cf7-a472-e0a479a6ecb8 name=/runtime.v1.RuntimeService/Version
	Mar 18 12:51:04 ha-328109 crio[681]: time="2024-03-18 12:51:04.569351910Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f43b4c09-02d8-4cf7-a472-e0a479a6ecb8 name=/runtime.v1.RuntimeService/Version
	Mar 18 12:51:04 ha-328109 crio[681]: time="2024-03-18 12:51:04.571548234Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b342c1df-8a70-4d5e-b804-0bf3e6b15255 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 12:51:04 ha-328109 crio[681]: time="2024-03-18 12:51:04.571968447Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710766264571945503,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b342c1df-8a70-4d5e-b804-0bf3e6b15255 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 12:51:04 ha-328109 crio[681]: time="2024-03-18 12:51:04.572722863Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=33b8764b-caf4-4621-a3cd-aefc0347dd04 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:51:04 ha-328109 crio[681]: time="2024-03-18 12:51:04.572804612Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=33b8764b-caf4-4621-a3cd-aefc0347dd04 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:51:04 ha-328109 crio[681]: time="2024-03-18 12:51:04.573133093Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c5b3318798546b55a1e7fe3618fe7848b8cb4108312aa2f5354c7dbdc9103e72,PodSandboxId:10b35c5d18ac59942090a6917bedf01b1f31744cd5f0a3d39949835bf6108d5a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710765998607402140,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-fz4kl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5a0215bb-df62-44b9-9d60-d45778880b8b,},Annotations:map[string]string{io.kubernetes.container.hash: 25c17d37,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b630b0fc05d4dd89718593f42880e41e071014b4d0f87791cba4fbf8cbe8785,PodSandboxId:2f84d6cd36a0e19e1f074479696d192558cade4f5b4267d45bfa78281643ee69,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710765878638455375,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9ffc89cd42ea8da4e6070b43e0ace35,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:742842736e1b52735c8a18b3d61ed7ee1d6157f2ca03ec317995f36597c45ac6,PodSandboxId:d0fc4bb142f1e67adc1acb0fd05ed7615c6e71bf4d9c199240d1b14c7e506c6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710765818180996776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90ce7ae6-4ac4-4c14-b2df-1a182f4d8086,},Annotations:map[string]string{io.kubernetes.container.hash: ed6ee57,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82a8d2ac6a60c0d04e48a38416de7feb33d590cfcd74d28da2317aa1a5781135,PodSandboxId:b487ae421169c8afbdd3c57cd6781dfee8b050a5ec9476b5eb7d8d46c81511c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710765818122582469,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-p5xgj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a865f86-96cf-4687-9283-d2ebe5616d1a,},Annotations:map[string]string{io.kubernetes.container.hash: b948acd7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\
"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c5cd4a724230c91f476a1bb5326701801eff1b70dc4db0510f092d89ea1562,PodSandboxId:16503713d19863c7d11d4a566e3591316bf9bb87017c1247be871b73cd241150,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710765818091864321,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c78nc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c1159dc-6545-41a6-bb4a-75fdab519c9e,},Annotations
:map[string]string{io.kubernetes.container.hash: 5111e8b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f41509d172d09cc4eecee3746dfb8ee1d320fc1c3797ddb1d709f61a48d8c377,PodSandboxId:de3686de3774df02e905a63c9a2f6c340478fd958e65a20db5acf3d838e7c03d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710765816486
274205,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnv5b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc2583b6-a5b3-4f53-bf54-6cc7611fc2a6,},Annotations:map[string]string{io.kubernetes.container.hash: 9aa5dbe1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8d915a384e6a3c259a15968303b0ddc686a9ced49722152813fc101b3c78cc6,PodSandboxId:35275a602be1c60babb8ca88eca935f3264c955bbf0347e589ea368f3036d635,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710765812830557764,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dhz88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afb0afad-2b88-4abb-9039-aaf9c64ad920,},Annotations:map[string]string{io.kubernetes.container.hash: 34178776,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d088b41ebc7b89cfc02aea70859e94e5a45b788a9c73a939733131ae29c4462,PodSandboxId:2f84d6cd36a0e19e1f074479696d192558cade4f5b4267d45bfa78281643ee69,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710765794897477578,Labels:map[string]string{io.kub
ernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9ffc89cd42ea8da4e6070b43e0ace35,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55e393cf77a1b472d984125ae3bd870d3fed9dca4eeefc346bda04ae88654205,PodSandboxId:8231d33571b5e6a87638a5647fcc9e70ced44830421377dda3555afca480b302,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710765791394253878,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.
pod.name: etcd-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e31f7b77f2cd8547e7aa12e86f29a80,},Annotations:map[string]string{io.kubernetes.container.hash: a6edf2fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de552ed42d49524bbca97633e73d6ac4e5301a813a012290635def375a78dcd6,PodSandboxId:8cfa0459c6e2ae66756a8424cb981cdb5680680fc5907eba1b8d83cfdd1a7280,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710765791364211319,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-328
109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90e740be10e7ccb198e1e310b9749e68,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a10929bb9737267586a458e8f8aac60622ae3a299b6b542776e59e2b12e4ffef,PodSandboxId:0d6a5a565490ec7bc679e6f77a039f680f53470f17b0cc60629e1ea627d8141e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710765791361646729,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-
ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f004a20401b95f693a90cc8d0b7e8acc,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e2150d8010e2a1399f1df83c9dba81c77d606e55e0c21b18da231e82e01413a,PodSandboxId:a0c6f1dda955fa31cf1b04ce5ce4401c9c2bfef118b3bbaea519a53ffc2f3257,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710765791299905420,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-328109,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 0919befc6ed870de46dfd820b38f0ac2,},Annotations:map[string]string{io.kubernetes.container.hash: 110d18ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=33b8764b-caf4-4621-a3cd-aefc0347dd04 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:51:04 ha-328109 crio[681]: time="2024-03-18 12:51:04.619809694Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a97dad8d-6aa7-452b-b5ca-66092ce1e36d name=/runtime.v1.RuntimeService/Version
	Mar 18 12:51:04 ha-328109 crio[681]: time="2024-03-18 12:51:04.619886728Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a97dad8d-6aa7-452b-b5ca-66092ce1e36d name=/runtime.v1.RuntimeService/Version
	Mar 18 12:51:04 ha-328109 crio[681]: time="2024-03-18 12:51:04.622027262Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6de42070-2540-445a-aeb9-9c4f10c5458c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 12:51:04 ha-328109 crio[681]: time="2024-03-18 12:51:04.622863649Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710766264622838905,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6de42070-2540-445a-aeb9-9c4f10c5458c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 12:51:04 ha-328109 crio[681]: time="2024-03-18 12:51:04.623835998Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=852abd08-f72b-4edc-9d3b-594e8102a776 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:51:04 ha-328109 crio[681]: time="2024-03-18 12:51:04.623921134Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=852abd08-f72b-4edc-9d3b-594e8102a776 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:51:04 ha-328109 crio[681]: time="2024-03-18 12:51:04.624234441Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c5b3318798546b55a1e7fe3618fe7848b8cb4108312aa2f5354c7dbdc9103e72,PodSandboxId:10b35c5d18ac59942090a6917bedf01b1f31744cd5f0a3d39949835bf6108d5a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710765998607402140,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-fz4kl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5a0215bb-df62-44b9-9d60-d45778880b8b,},Annotations:map[string]string{io.kubernetes.container.hash: 25c17d37,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0b630b0fc05d4dd89718593f42880e41e071014b4d0f87791cba4fbf8cbe8785,PodSandboxId:2f84d6cd36a0e19e1f074479696d192558cade4f5b4267d45bfa78281643ee69,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710765878638455375,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9ffc89cd42ea8da4e6070b43e0ace35,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:742842736e1b52735c8a18b3d61ed7ee1d6157f2ca03ec317995f36597c45ac6,PodSandboxId:d0fc4bb142f1e67adc1acb0fd05ed7615c6e71bf4d9c199240d1b14c7e506c6e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710765818180996776,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90ce7ae6-4ac4-4c14-b2df-1a182f4d8086,},Annotations:map[string]string{io.kubernetes.container.hash: ed6ee57,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-l
og,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82a8d2ac6a60c0d04e48a38416de7feb33d590cfcd74d28da2317aa1a5781135,PodSandboxId:b487ae421169c8afbdd3c57cd6781dfee8b050a5ec9476b5eb7d8d46c81511c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710765818122582469,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-p5xgj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a865f86-96cf-4687-9283-d2ebe5616d1a,},Annotations:map[string]string{io.kubernetes.container.hash: b948acd7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\
"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c5cd4a724230c91f476a1bb5326701801eff1b70dc4db0510f092d89ea1562,PodSandboxId:16503713d19863c7d11d4a566e3591316bf9bb87017c1247be871b73cd241150,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710765818091864321,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c78nc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c1159dc-6545-41a6-bb4a-75fdab519c9e,},Annotations
:map[string]string{io.kubernetes.container.hash: 5111e8b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f41509d172d09cc4eecee3746dfb8ee1d320fc1c3797ddb1d709f61a48d8c377,PodSandboxId:de3686de3774df02e905a63c9a2f6c340478fd958e65a20db5acf3d838e7c03d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710765816486
274205,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnv5b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc2583b6-a5b3-4f53-bf54-6cc7611fc2a6,},Annotations:map[string]string{io.kubernetes.container.hash: 9aa5dbe1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8d915a384e6a3c259a15968303b0ddc686a9ced49722152813fc101b3c78cc6,PodSandboxId:35275a602be1c60babb8ca88eca935f3264c955bbf0347e589ea368f3036d635,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710765812830557764,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dhz88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afb0afad-2b88-4abb-9039-aaf9c64ad920,},Annotations:map[string]string{io.kubernetes.container.hash: 34178776,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8d088b41ebc7b89cfc02aea70859e94e5a45b788a9c73a939733131ae29c4462,PodSandboxId:2f84d6cd36a0e19e1f074479696d192558cade4f5b4267d45bfa78281643ee69,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710765794897477578,Labels:map[string]string{io.kub
ernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9ffc89cd42ea8da4e6070b43e0ace35,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55e393cf77a1b472d984125ae3bd870d3fed9dca4eeefc346bda04ae88654205,PodSandboxId:8231d33571b5e6a87638a5647fcc9e70ced44830421377dda3555afca480b302,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710765791394253878,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.
pod.name: etcd-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e31f7b77f2cd8547e7aa12e86f29a80,},Annotations:map[string]string{io.kubernetes.container.hash: a6edf2fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de552ed42d49524bbca97633e73d6ac4e5301a813a012290635def375a78dcd6,PodSandboxId:8cfa0459c6e2ae66756a8424cb981cdb5680680fc5907eba1b8d83cfdd1a7280,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710765791364211319,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-328
109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90e740be10e7ccb198e1e310b9749e68,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a10929bb9737267586a458e8f8aac60622ae3a299b6b542776e59e2b12e4ffef,PodSandboxId:0d6a5a565490ec7bc679e6f77a039f680f53470f17b0cc60629e1ea627d8141e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710765791361646729,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-
ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f004a20401b95f693a90cc8d0b7e8acc,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e2150d8010e2a1399f1df83c9dba81c77d606e55e0c21b18da231e82e01413a,PodSandboxId:a0c6f1dda955fa31cf1b04ce5ce4401c9c2bfef118b3bbaea519a53ffc2f3257,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710765791299905420,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-328109,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 0919befc6ed870de46dfd820b38f0ac2,},Annotations:map[string]string{io.kubernetes.container.hash: 110d18ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=852abd08-f72b-4edc-9d3b-594e8102a776 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c5b3318798546       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   10b35c5d18ac5       busybox-5b5d89c9d6-fz4kl
	0b630b0fc05d4       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      6 minutes ago       Running             kube-vip                  1                   2f84d6cd36a0e       kube-vip-ha-328109
	742842736e1b5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Running             storage-provisioner       0                   d0fc4bb142f1e       storage-provisioner
	82a8d2ac6a60c       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      7 minutes ago       Running             coredns                   0                   b487ae421169c       coredns-5dd5756b68-p5xgj
	f2c5cd4a72423       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      7 minutes ago       Running             coredns                   0                   16503713d1986       coredns-5dd5756b68-c78nc
	f41509d172d09       docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988    7 minutes ago       Running             kindnet-cni               0                   de3686de3774d       kindnet-vnv5b
	f8d915a384e6a       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      7 minutes ago       Running             kube-proxy                0                   35275a602be1c       kube-proxy-dhz88
	8d088b41ebc7b       ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a     7 minutes ago       Exited              kube-vip                  0                   2f84d6cd36a0e       kube-vip-ha-328109
	55e393cf77a1b       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      7 minutes ago       Running             etcd                      0                   8231d33571b5e       etcd-ha-328109
	de552ed42d495       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      7 minutes ago       Running             kube-scheduler            0                   8cfa0459c6e2a       kube-scheduler-ha-328109
	a10929bb97372       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      7 minutes ago       Running             kube-controller-manager   0                   0d6a5a565490e       kube-controller-manager-ha-328109
	7e2150d8010e2       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      7 minutes ago       Running             kube-apiserver            0                   a0c6f1dda955f       kube-apiserver-ha-328109
	
	
	==> coredns [82a8d2ac6a60c0d04e48a38416de7feb33d590cfcd74d28da2317aa1a5781135] <==
	[INFO] 10.244.0.4:52673 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004169921s
	[INFO] 10.244.0.4:48925 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000253358s
	[INFO] 10.244.0.4:56631 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152161s
	[INFO] 10.244.0.4:45190 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103827s
	[INFO] 10.244.2.2:34185 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105521s
	[INFO] 10.244.2.2:44888 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000730863s
	[INFO] 10.244.1.2:40647 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000166359s
	[INFO] 10.244.1.2:57968 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001882507s
	[INFO] 10.244.1.2:55297 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000096788s
	[INFO] 10.244.1.2:36989 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000088322s
	[INFO] 10.244.1.2:37677 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000205894s
	[INFO] 10.244.1.2:32814 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000074605s
	[INFO] 10.244.1.2:44489 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000102528s
	[INFO] 10.244.0.4:53607 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000206955s
	[INFO] 10.244.2.2:47974 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000313502s
	[INFO] 10.244.1.2:49641 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000193514s
	[INFO] 10.244.1.2:52193 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000126417s
	[INFO] 10.244.1.2:55887 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000104434s
	[INFO] 10.244.0.4:43288 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014747s
	[INFO] 10.244.0.4:57574 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000192178s
	[INFO] 10.244.0.4:58440 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000128408s
	[INFO] 10.244.2.2:50297 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168343s
	[INFO] 10.244.2.2:37188 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000133774s
	[INFO] 10.244.1.2:33883 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000095091s
	[INFO] 10.244.1.2:45785 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000123693s
	
	
	==> coredns [f2c5cd4a724230c91f476a1bb5326701801eff1b70dc4db0510f092d89ea1562] <==
	[INFO] 10.244.2.2:51093 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001672821s
	[INFO] 10.244.1.2:49953 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000114538s
	[INFO] 10.244.0.4:45239 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117638s
	[INFO] 10.244.0.4:54630 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003946825s
	[INFO] 10.244.0.4:37807 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000185941s
	[INFO] 10.244.0.4:54881 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000227886s
	[INFO] 10.244.2.2:43048 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000261065s
	[INFO] 10.244.2.2:43023 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001686526s
	[INFO] 10.244.2.2:59097 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000204051s
	[INFO] 10.244.2.2:49621 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000262805s
	[INFO] 10.244.2.2:48119 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001371219s
	[INFO] 10.244.2.2:49912 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000148592s
	[INFO] 10.244.1.2:60652 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.0016374s
	[INFO] 10.244.0.4:55891 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000079534s
	[INFO] 10.244.0.4:53025 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000231262s
	[INFO] 10.244.0.4:39659 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000116818s
	[INFO] 10.244.2.2:48403 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125802s
	[INFO] 10.244.2.2:42106 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092079s
	[INFO] 10.244.2.2:41088 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000204572s
	[INFO] 10.244.1.2:60379 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000108875s
	[INFO] 10.244.0.4:42381 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00008263s
	[INFO] 10.244.2.2:47207 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000237181s
	[INFO] 10.244.2.2:44002 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000102925s
	[INFO] 10.244.1.2:54332 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126486s
	[INFO] 10.244.1.2:38590 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000245357s
	
	
	==> describe nodes <==
	Name:               ha-328109
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-328109
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	                    minikube.k8s.io/name=ha-328109
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T12_43_22_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 12:43:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-328109
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 12:50:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 12:46:56 +0000   Mon, 18 Mar 2024 12:43:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 12:46:56 +0000   Mon, 18 Mar 2024 12:43:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 12:46:56 +0000   Mon, 18 Mar 2024 12:43:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 12:46:56 +0000   Mon, 18 Mar 2024 12:43:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.253
	  Hostname:    ha-328109
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 a8b3a9b95f2141b891e3cee14aaad62e
	  System UUID:                a8b3a9b9-5f21-41b8-91e3-cee14aaad62e
	  Boot ID:                    906b8684-634a-4838-bb8e-d090694f9649
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-fz4kl             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 coredns-5dd5756b68-c78nc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m33s
	  kube-system                 coredns-5dd5756b68-p5xgj             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m33s
	  kube-system                 etcd-ha-328109                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m43s
	  kube-system                 kindnet-vnv5b                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m33s
	  kube-system                 kube-apiserver-ha-328109             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m43s
	  kube-system                 kube-controller-manager-ha-328109    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m43s
	  kube-system                 kube-proxy-dhz88                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m33s
	  kube-system                 kube-scheduler-ha-328109             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m45s
	  kube-system                 kube-vip-ha-328109                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m43s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m31s                  kube-proxy       
	  Normal  NodeHasSufficientPID     7m54s (x7 over 7m54s)  kubelet          Node ha-328109 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 7m54s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m54s (x8 over 7m54s)  kubelet          Node ha-328109 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m54s (x8 over 7m54s)  kubelet          Node ha-328109 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 7m43s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m43s                  kubelet          Node ha-328109 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m43s                  kubelet          Node ha-328109 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m43s                  kubelet          Node ha-328109 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m34s                  node-controller  Node ha-328109 event: Registered Node ha-328109 in Controller
	  Normal  NodeReady                7m27s                  kubelet          Node ha-328109 status is now: NodeReady
	  Normal  RegisteredNode           6m12s                  node-controller  Node ha-328109 event: Registered Node ha-328109 in Controller
	  Normal  RegisteredNode           4m58s                  node-controller  Node ha-328109 event: Registered Node ha-328109 in Controller
	
	
	Name:               ha-328109-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-328109-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	                    minikube.k8s.io/name=ha-328109
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T12_44_39_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 12:44:28 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-328109-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 12:47:43 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 18 Mar 2024 12:47:01 +0000   Mon, 18 Mar 2024 12:48:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 18 Mar 2024 12:47:01 +0000   Mon, 18 Mar 2024 12:48:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 18 Mar 2024 12:47:01 +0000   Mon, 18 Mar 2024 12:48:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 18 Mar 2024 12:47:01 +0000   Mon, 18 Mar 2024 12:48:25 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.246
	  Hostname:    ha-328109-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 148457ca2d4c4c78bdc5b74dba85e93e
	  System UUID:                148457ca-2d4c-4c78-bdc5-b74dba85e93e
	  Boot ID:                    8d0cadc9-1888-4de4-9f61-a20e3052d92f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-sx4mf                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 etcd-ha-328109-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m26s
	  kube-system                 kindnet-lc74t                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m37s
	  kube-system                 kube-apiserver-ha-328109-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m32s
	  kube-system                 kube-controller-manager-ha-328109-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 kube-proxy-7zgrx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m37s
	  kube-system                 kube-scheduler-ha-328109-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 kube-vip-ha-328109-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        6m24s  kube-proxy       
	  Normal  RegisteredNode  6m13s  node-controller  Node ha-328109-m02 event: Registered Node ha-328109-m02 in Controller
	  Normal  RegisteredNode  4m59s  node-controller  Node ha-328109-m02 event: Registered Node ha-328109-m02 in Controller
	  Normal  NodeNotReady    2m40s  node-controller  Node ha-328109-m02 status is now: NodeNotReady
	
	
	Name:               ha-328109-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-328109-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	                    minikube.k8s.io/name=ha-328109
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T12_45_51_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 12:45:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-328109-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 12:50:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 12:46:49 +0000   Mon, 18 Mar 2024 12:45:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 12:46:49 +0000   Mon, 18 Mar 2024 12:45:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 12:46:49 +0000   Mon, 18 Mar 2024 12:45:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 12:46:49 +0000   Mon, 18 Mar 2024 12:46:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.241
	  Hostname:    ha-328109-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 4fab87c426444aa8b3b6e0542502fa6e
	  System UUID:                4fab87c4-2644-4aa8-b3b6-e0542502fa6e
	  Boot ID:                    b800ccda-6ae3-43fc-9ff4-4f258fdf7181
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-gv6tf                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 etcd-ha-328109-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m16s
	  kube-system                 kindnet-t2pkv                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m17s
	  kube-system                 kube-apiserver-ha-328109-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-controller-manager-ha-328109-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m16s
	  kube-system                 kube-proxy-zn8dk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 kube-scheduler-ha-328109-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 kube-vip-ha-328109-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        5m     kube-proxy       
	  Normal  RegisteredNode  5m15s  node-controller  Node ha-328109-m03 event: Registered Node ha-328109-m03 in Controller
	  Normal  RegisteredNode  5m13s  node-controller  Node ha-328109-m03 event: Registered Node ha-328109-m03 in Controller
	  Normal  RegisteredNode  4m59s  node-controller  Node ha-328109-m03 event: Registered Node ha-328109-m03 in Controller
	
	
	Name:               ha-328109-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-328109-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	                    minikube.k8s.io/name=ha-328109
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T12_47_16_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 12:47:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-328109-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 12:51:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 12:47:46 +0000   Mon, 18 Mar 2024 12:47:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 12:47:46 +0000   Mon, 18 Mar 2024 12:47:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 12:47:46 +0000   Mon, 18 Mar 2024 12:47:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 12:47:46 +0000   Mon, 18 Mar 2024 12:47:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.48
	  Hostname:    ha-328109-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 ac08f798ce4148b48f36040f95b7eaf9
	  System UUID:                ac08f798-ce41-48b4-8f36-040f95b7eaf9
	  Boot ID:                    3d12ce0a-9b18-44af-8f5b-5098664adc80
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-ggcw6       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m49s
	  kube-system                 kube-proxy-4fxbn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m44s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m50s (x5 over 3m51s)  kubelet          Node ha-328109-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m50s (x5 over 3m51s)  kubelet          Node ha-328109-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m50s (x5 over 3m51s)  kubelet          Node ha-328109-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m49s                  node-controller  Node ha-328109-m04 event: Registered Node ha-328109-m04 in Controller
	  Normal  RegisteredNode           3m48s                  node-controller  Node ha-328109-m04 event: Registered Node ha-328109-m04 in Controller
	  Normal  RegisteredNode           3m45s                  node-controller  Node ha-328109-m04 event: Registered Node ha-328109-m04 in Controller
	  Normal  NodeReady                3m40s                  kubelet          Node ha-328109-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Mar18 12:42] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052749] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044694] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.610763] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.534660] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +4.648943] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.065651] systemd-fstab-generator[602]: Ignoring "noauto" option for root device
	[  +0.058901] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058875] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.159253] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.141446] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.251865] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[Mar18 12:43] systemd-fstab-generator[768]: Ignoring "noauto" option for root device
	[  +0.059542] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.985090] systemd-fstab-generator[943]: Ignoring "noauto" option for root device
	[  +1.363754] kauditd_printk_skb: 57 callbacks suppressed
	[  +7.738793] kauditd_printk_skb: 40 callbacks suppressed
	[  +1.856189] systemd-fstab-generator[1368]: Ignoring "noauto" option for root device
	[ +11.678244] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.089183] kauditd_printk_skb: 37 callbacks suppressed
	[Mar18 12:44] kauditd_printk_skb: 27 callbacks suppressed
	
	
	==> etcd [55e393cf77a1b472d984125ae3bd870d3fed9dca4eeefc346bda04ae88654205] <==
	{"level":"warn","ts":"2024-03-18T12:51:04.926167Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3773e8bb706c8f02","from":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:51:04.933837Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3773e8bb706c8f02","from":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:51:04.940996Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3773e8bb706c8f02","from":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:51:04.94487Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3773e8bb706c8f02","from":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:51:04.957415Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3773e8bb706c8f02","from":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:51:04.967517Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3773e8bb706c8f02","from":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:51:04.975669Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3773e8bb706c8f02","from":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:51:04.979221Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3773e8bb706c8f02","from":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:51:04.983225Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3773e8bb706c8f02","from":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:51:04.991028Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3773e8bb706c8f02","from":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:51:04.99818Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3773e8bb706c8f02","from":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:51:05.004577Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3773e8bb706c8f02","from":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:51:05.010179Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3773e8bb706c8f02","from":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:51:05.013981Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3773e8bb706c8f02","from":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:51:05.023003Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3773e8bb706c8f02","from":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:51:05.029816Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3773e8bb706c8f02","from":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:51:05.035558Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3773e8bb706c8f02","from":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:51:05.041525Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3773e8bb706c8f02","from":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:51:05.044724Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3773e8bb706c8f02","from":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:51:05.046334Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3773e8bb706c8f02","from":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:51:05.05757Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3773e8bb706c8f02","from":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:51:05.06519Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3773e8bb706c8f02","from":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:51:05.072688Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3773e8bb706c8f02","from":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:51:05.102584Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3773e8bb706c8f02","from":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-18T12:51:05.145623Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"3773e8bb706c8f02","from":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 12:51:05 up 8 min,  0 users,  load average: 0.59, 0.32, 0.19
	Linux ha-328109 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [f41509d172d09cc4eecee3746dfb8ee1d320fc1c3797ddb1d709f61a48d8c377] <==
	I0318 12:50:28.631780       1 main.go:250] Node ha-328109-m04 has CIDR [10.244.3.0/24] 
	I0318 12:50:38.639557       1 main.go:223] Handling node with IPs: map[192.168.39.253:{}]
	I0318 12:50:38.639614       1 main.go:227] handling current node
	I0318 12:50:38.639628       1 main.go:223] Handling node with IPs: map[192.168.39.246:{}]
	I0318 12:50:38.639637       1 main.go:250] Node ha-328109-m02 has CIDR [10.244.1.0/24] 
	I0318 12:50:38.639819       1 main.go:223] Handling node with IPs: map[192.168.39.241:{}]
	I0318 12:50:38.639860       1 main.go:250] Node ha-328109-m03 has CIDR [10.244.2.0/24] 
	I0318 12:50:38.640184       1 main.go:223] Handling node with IPs: map[192.168.39.48:{}]
	I0318 12:50:38.640227       1 main.go:250] Node ha-328109-m04 has CIDR [10.244.3.0/24] 
	I0318 12:50:48.654800       1 main.go:223] Handling node with IPs: map[192.168.39.253:{}]
	I0318 12:50:48.654860       1 main.go:227] handling current node
	I0318 12:50:48.654884       1 main.go:223] Handling node with IPs: map[192.168.39.246:{}]
	I0318 12:50:48.654894       1 main.go:250] Node ha-328109-m02 has CIDR [10.244.1.0/24] 
	I0318 12:50:48.655057       1 main.go:223] Handling node with IPs: map[192.168.39.241:{}]
	I0318 12:50:48.655068       1 main.go:250] Node ha-328109-m03 has CIDR [10.244.2.0/24] 
	I0318 12:50:48.655470       1 main.go:223] Handling node with IPs: map[192.168.39.48:{}]
	I0318 12:50:48.655514       1 main.go:250] Node ha-328109-m04 has CIDR [10.244.3.0/24] 
	I0318 12:50:58.666180       1 main.go:223] Handling node with IPs: map[192.168.39.253:{}]
	I0318 12:50:58.666242       1 main.go:227] handling current node
	I0318 12:50:58.666256       1 main.go:223] Handling node with IPs: map[192.168.39.246:{}]
	I0318 12:50:58.666265       1 main.go:250] Node ha-328109-m02 has CIDR [10.244.1.0/24] 
	I0318 12:50:58.666471       1 main.go:223] Handling node with IPs: map[192.168.39.241:{}]
	I0318 12:50:58.666512       1 main.go:250] Node ha-328109-m03 has CIDR [10.244.2.0/24] 
	I0318 12:50:58.666607       1 main.go:223] Handling node with IPs: map[192.168.39.48:{}]
	I0318 12:50:58.666615       1 main.go:250] Node ha-328109-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [7e2150d8010e2a1399f1df83c9dba81c77d606e55e0c21b18da231e82e01413a] <==
	I0318 12:44:38.132630       1 trace.go:236] Trace[1342520798]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:01966c69-0057-4e3f-82ee-de024a8d9bba,client:192.168.39.254,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-328109,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:PUT (18-Mar-2024 12:44:32.520) (total time: 5612ms):
	Trace[1342520798]: ["GuaranteedUpdate etcd3" audit-id:01966c69-0057-4e3f-82ee-de024a8d9bba,key:/leases/kube-node-lease/ha-328109,type:*coordination.Lease,resource:leases.coordination.k8s.io 5611ms (12:44:32.520)
	Trace[1342520798]:  ---"Txn call completed" 5611ms (12:44:38.132)]
	Trace[1342520798]: [5.612031088s] [5.612031088s] END
	I0318 12:44:38.132721       1 trace.go:236] Trace[624459832]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:b5c533b5-cc1b-497e-b2ba-30e994190195,client:192.168.39.254,protocol:HTTP/2.0,resource:events,scope:resource,url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (18-Mar-2024 12:44:34.414) (total time: 3717ms):
	Trace[624459832]: ["Create etcd3" audit-id:b5c533b5-cc1b-497e-b2ba-30e994190195,key:/events/kube-system/kube-apiserver-ha-328109.17bddc7fae054a4d,type:*core.Event,resource:events 3717ms (12:44:34.415)
	Trace[624459832]:  ---"Txn call succeeded" 3717ms (12:44:38.132)]
	Trace[624459832]: [3.717787534s] [3.717787534s] END
	I0318 12:44:38.132842       1 trace.go:236] Trace[962055501]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:a1297aa5-524a-4a72-85b9-9417e2477763,client:192.168.39.246,protocol:HTTP/2.0,resource:events,scope:resource,url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (18-Mar-2024 12:44:36.190) (total time: 1942ms):
	Trace[962055501]: ["Create etcd3" audit-id:a1297aa5-524a-4a72-85b9-9417e2477763,key:/events/kube-system/etcd-ha-328109-m02.17bddc7e7d5ff8bf,type:*core.Event,resource:events 1942ms (12:44:36.190)
	Trace[962055501]:  ---"Txn call succeeded" 1942ms (12:44:38.132)]
	Trace[962055501]: [1.942762555s] [1.942762555s] END
	I0318 12:44:38.135895       1 trace.go:236] Trace[1717943839]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:5543fdf2-ef41-4b60-8ac5-d880490b9c10,client:192.168.39.246,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (18-Mar-2024 12:44:33.356) (total time: 4779ms):
	Trace[1717943839]: ["Create etcd3" audit-id:5543fdf2-ef41-4b60-8ac5-d880490b9c10,key:/pods/kube-system/kube-apiserver-ha-328109-m02,type:*core.Pod,resource:pods 4778ms (12:44:33.357)
	Trace[1717943839]:  ---"Txn call succeeded" 4773ms (12:44:38.130)]
	Trace[1717943839]: [4.779493649s] [4.779493649s] END
	I0318 12:44:38.136224       1 trace.go:236] Trace[1951338847]: "List" accept:application/json, */*,audit-id:4c149973-5734-4872-8981-83a6b3baae31,client:192.168.39.253,protocol:HTTP/2.0,resource:nodes,scope:cluster,url:/api/v1/nodes,user-agent:kindnetd/v0.0.0 (linux/amd64) kubernetes/$Format,verb:LIST (18-Mar-2024 12:44:37.330) (total time: 805ms):
	Trace[1951338847]: ["List(recursive=true) etcd3" audit-id:4c149973-5734-4872-8981-83a6b3baae31,key:/minions,resourceVersion:,resourceVersionMatch:,limit:0,continue: 805ms (12:44:37.330)]
	Trace[1951338847]: [805.489973ms] [805.489973ms] END
	I0318 12:44:38.220782       1 trace.go:236] Trace[396697757]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:dbbe0d76-3db9-4e4d-9bf4-58fc51de9768,client:192.168.39.246,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (18-Mar-2024 12:44:37.116) (total time: 1103ms):
	Trace[396697757]: [1.103969292s] [1.103969292s] END
	I0318 12:44:38.229736       1 trace.go:236] Trace[1623968541]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.39.253,type:*v1.Endpoints,resource:apiServerIPInfo (18-Mar-2024 12:44:36.707) (total time: 1522ms):
	Trace[1623968541]: ---"initial value restored" 1424ms (12:44:38.131)
	Trace[1623968541]: ---"Transaction prepared" 58ms (12:44:38.190)
	Trace[1623968541]: [1.522252407s] [1.522252407s] END
	
	
	==> kube-controller-manager [a10929bb9737267586a458e8f8aac60622ae3a299b6b542776e59e2b12e4ffef] <==
	E0318 12:47:14.299000       1 certificate_controller.go:146] Sync csr-lxffk failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-lxffk": the object has been modified; please apply your changes to the latest version and try again
	I0318 12:47:15.811781       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-328109-m04\" does not exist"
	I0318 12:47:15.897387       1 range_allocator.go:380] "Set node PodCIDR" node="ha-328109-m04" podCIDRs=["10.244.3.0/24"]
	I0318 12:47:15.979277       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-ssq4l"
	I0318 12:47:15.979578       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-czqfw"
	I0318 12:47:16.153839       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-ssq4l"
	I0318 12:47:16.181270       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-czqfw"
	I0318 12:47:16.725872       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-pgzmj"
	I0318 12:47:16.828489       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-x4dsb"
	I0318 12:47:16.880172       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-pgzmj"
	I0318 12:47:20.660548       1 event.go:307] "Event occurred" object="ha-328109-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-328109-m04 event: Registered Node ha-328109-m04 in Controller"
	I0318 12:47:20.674516       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-328109-m04"
	I0318 12:47:25.743602       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-328109-m04"
	I0318 12:48:25.702748       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-328109-m04"
	I0318 12:48:25.705744       1 event.go:307] "Event occurred" object="ha-328109-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node ha-328109-m02 status is now: NodeNotReady"
	I0318 12:48:25.738448       1 event.go:307] "Event occurred" object="kube-system/kindnet-lc74t" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:48:25.769028       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-7zgrx" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:48:25.783574       1 event.go:307] "Event occurred" object="kube-system/kube-vip-ha-328109-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:48:25.815775       1 event.go:307] "Event occurred" object="kube-system/kube-apiserver-ha-328109-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:48:25.830749       1 event.go:307] "Event occurred" object="kube-system/kube-controller-manager-ha-328109-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:48:25.846414       1 event.go:307] "Event occurred" object="kube-system/etcd-ha-328109-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:48:25.863321       1 event.go:307] "Event occurred" object="kube-system/kube-scheduler-ha-328109-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:48:25.877211       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-sx4mf" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 12:48:25.894485       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="18.539547ms"
	I0318 12:48:25.895906       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="125.045µs"
	
	
	==> kube-proxy [f8d915a384e6a3c259a15968303b0ddc686a9ced49722152813fc101b3c78cc6] <==
	I0318 12:43:33.033214       1 server_others.go:69] "Using iptables proxy"
	I0318 12:43:33.058459       1 node.go:141] Successfully retrieved node IP: 192.168.39.253
	I0318 12:43:33.105382       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 12:43:33.105433       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 12:43:33.107947       1 server_others.go:152] "Using iptables Proxier"
	I0318 12:43:33.108833       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 12:43:33.109182       1 server.go:846] "Version info" version="v1.28.4"
	I0318 12:43:33.109219       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:43:33.110865       1 config.go:188] "Starting service config controller"
	I0318 12:43:33.111906       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 12:43:33.112178       1 config.go:97] "Starting endpoint slice config controller"
	I0318 12:43:33.112212       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 12:43:33.114910       1 config.go:315] "Starting node config controller"
	I0318 12:43:33.114956       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 12:43:33.212384       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 12:43:33.212446       1 shared_informer.go:318] Caches are synced for service config
	I0318 12:43:33.215649       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [de552ed42d49524bbca97633e73d6ac4e5301a813a012290635def375a78dcd6] <==
	I0318 12:47:16.005054       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-ssq4l" node="ha-328109-m04"
	E0318 12:47:16.015320       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-czqfw\": pod kindnet-czqfw is already assigned to node \"ha-328109-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-czqfw" node="ha-328109-m04"
	E0318 12:47:16.015461       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 5410e336-df2d-47f5-bed4-f8c92278a1a6(kube-system/kindnet-czqfw) wasn't assumed so cannot be forgotten"
	E0318 12:47:16.015499       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-czqfw\": pod kindnet-czqfw is already assigned to node \"ha-328109-m04\"" pod="kube-system/kindnet-czqfw"
	I0318 12:47:16.015529       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-czqfw" node="ha-328109-m04"
	E0318 12:47:16.034281       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-4fxbn\": pod kube-proxy-4fxbn is already assigned to node \"ha-328109-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-4fxbn" node="ha-328109-m04"
	E0318 12:47:16.034546       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod d1f2a6c1-8e3c-45ad-8839-d641a80a4d03(kube-system/kube-proxy-4fxbn) wasn't assumed so cannot be forgotten"
	E0318 12:47:16.034695       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-4fxbn\": pod kube-proxy-4fxbn is already assigned to node \"ha-328109-m04\"" pod="kube-system/kube-proxy-4fxbn"
	I0318 12:47:16.034757       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-4fxbn" node="ha-328109-m04"
	E0318 12:47:16.035526       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-m2qh7\": pod kindnet-m2qh7 is already assigned to node \"ha-328109-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-m2qh7" node="ha-328109-m04"
	E0318 12:47:16.035608       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 3abff82c-01b1-4ef2-b5e4-ef9ea8642d5b(kube-system/kindnet-m2qh7) wasn't assumed so cannot be forgotten"
	E0318 12:47:16.035636       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-m2qh7\": pod kindnet-m2qh7 is already assigned to node \"ha-328109-m04\"" pod="kube-system/kindnet-m2qh7"
	I0318 12:47:16.035687       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-m2qh7" node="ha-328109-m04"
	E0318 12:47:16.798722       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-x4dsb\": pod kindnet-x4dsb is being deleted, cannot be assigned to a host" plugin="DefaultBinder" pod="kube-system/kindnet-x4dsb" node="ha-328109-m04"
	E0318 12:47:16.798797       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 8988ac73-5185-4ef3-a282-6982e6f09c9d(kube-system/kindnet-x4dsb) wasn't assumed so cannot be forgotten"
	E0318 12:47:16.798824       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-x4dsb\": pod kindnet-x4dsb is being deleted, cannot be assigned to a host" pod="kube-system/kindnet-x4dsb"
	I0318 12:47:16.798841       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-x4dsb" node="ha-328109-m04"
	E0318 12:47:16.799239       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-pgzmj\": pod kindnet-pgzmj is already assigned to node \"ha-328109-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-pgzmj" node="ha-328109-m04"
	E0318 12:47:16.799295       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod ec2c8ced-e089-4868-920f-c77eaa97ccca(kube-system/kindnet-pgzmj) wasn't assumed so cannot be forgotten"
	E0318 12:47:16.799313       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-pgzmj\": pod kindnet-pgzmj is already assigned to node \"ha-328109-m04\"" pod="kube-system/kindnet-pgzmj"
	I0318 12:47:16.799327       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-pgzmj" node="ha-328109-m04"
	E0318 12:47:16.800987       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-ggcw6\": pod kindnet-ggcw6 is already assigned to node \"ha-328109-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-ggcw6" node="ha-328109-m04"
	E0318 12:47:16.805150       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod da0dab40-34a4-4213-9224-b1bef5273e51(kube-system/kindnet-ggcw6) wasn't assumed so cannot be forgotten"
	E0318 12:47:16.805941       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-ggcw6\": pod kindnet-ggcw6 is already assigned to node \"ha-328109-m04\"" pod="kube-system/kindnet-ggcw6"
	I0318 12:47:16.806190       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-ggcw6" node="ha-328109-m04"
	
	
	==> kubelet <==
	Mar 18 12:46:21 ha-328109 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 12:46:21 ha-328109 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 12:46:21 ha-328109 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 12:46:35 ha-328109 kubelet[1375]: I0318 12:46:35.061808    1375 topology_manager.go:215] "Topology Admit Handler" podUID="5a0215bb-df62-44b9-9d60-d45778880b8b" podNamespace="default" podName="busybox-5b5d89c9d6-fz4kl"
	Mar 18 12:46:35 ha-328109 kubelet[1375]: I0318 12:46:35.224542    1375 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4twgx\" (UniqueName: \"kubernetes.io/projected/5a0215bb-df62-44b9-9d60-d45778880b8b-kube-api-access-4twgx\") pod \"busybox-5b5d89c9d6-fz4kl\" (UID: \"5a0215bb-df62-44b9-9d60-d45778880b8b\") " pod="default/busybox-5b5d89c9d6-fz4kl"
	Mar 18 12:47:21 ha-328109 kubelet[1375]: E0318 12:47:21.240726    1375 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 12:47:21 ha-328109 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 12:47:21 ha-328109 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 12:47:21 ha-328109 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 12:47:21 ha-328109 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 12:48:21 ha-328109 kubelet[1375]: E0318 12:48:21.240761    1375 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 12:48:21 ha-328109 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 12:48:21 ha-328109 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 12:48:21 ha-328109 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 12:48:21 ha-328109 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 12:49:21 ha-328109 kubelet[1375]: E0318 12:49:21.241837    1375 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 12:49:21 ha-328109 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 12:49:21 ha-328109 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 12:49:21 ha-328109 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 12:49:21 ha-328109 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 12:50:21 ha-328109 kubelet[1375]: E0318 12:50:21.240220    1375 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 12:50:21 ha-328109 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 12:50:21 ha-328109 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 12:50:21 ha-328109 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 12:50:21 ha-328109 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-328109 -n ha-328109
helpers_test.go:261: (dbg) Run:  kubectl --context ha-328109 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (53.09s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (387.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-328109 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-328109 -v=7 --alsologtostderr
E0318 12:51:24.905062 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/functional-377562/client.crt: no such file or directory
E0318 12:51:52.590715 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/functional-377562/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-328109 -v=7 --alsologtostderr: exit status 82 (2m2.725751062s)

                                                
                                                
-- stdout --
	* Stopping node "ha-328109-m04"  ...
	* Stopping node "ha-328109-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 12:51:06.723203 1131058 out.go:291] Setting OutFile to fd 1 ...
	I0318 12:51:06.723445 1131058 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:51:06.723453 1131058 out.go:304] Setting ErrFile to fd 2...
	I0318 12:51:06.723458 1131058 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:51:06.723628 1131058 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
	I0318 12:51:06.723872 1131058 out.go:298] Setting JSON to false
	I0318 12:51:06.723962 1131058 mustload.go:65] Loading cluster: ha-328109
	I0318 12:51:06.724317 1131058 config.go:182] Loaded profile config "ha-328109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:51:06.724429 1131058 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/config.json ...
	I0318 12:51:06.724615 1131058 mustload.go:65] Loading cluster: ha-328109
	I0318 12:51:06.724743 1131058 config.go:182] Loaded profile config "ha-328109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:51:06.724792 1131058 stop.go:39] StopHost: ha-328109-m04
	I0318 12:51:06.725162 1131058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:51:06.725209 1131058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:51:06.741051 1131058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38481
	I0318 12:51:06.741630 1131058 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:51:06.742251 1131058 main.go:141] libmachine: Using API Version  1
	I0318 12:51:06.742272 1131058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:51:06.742612 1131058 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:51:06.745493 1131058 out.go:177] * Stopping node "ha-328109-m04"  ...
	I0318 12:51:06.747018 1131058 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0318 12:51:06.747069 1131058 main.go:141] libmachine: (ha-328109-m04) Calling .DriverName
	I0318 12:51:06.747275 1131058 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0318 12:51:06.747297 1131058 main.go:141] libmachine: (ha-328109-m04) Calling .GetSSHHostname
	I0318 12:51:06.750066 1131058 main.go:141] libmachine: (ha-328109-m04) DBG | domain ha-328109-m04 has defined MAC address 52:54:00:07:cc:71 in network mk-ha-328109
	I0318 12:51:06.750563 1131058 main.go:141] libmachine: (ha-328109-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cc:71", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:47:00 +0000 UTC Type:0 Mac:52:54:00:07:cc:71 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-328109-m04 Clientid:01:52:54:00:07:cc:71}
	I0318 12:51:06.750599 1131058 main.go:141] libmachine: (ha-328109-m04) DBG | domain ha-328109-m04 has defined IP address 192.168.39.48 and MAC address 52:54:00:07:cc:71 in network mk-ha-328109
	I0318 12:51:06.750745 1131058 main.go:141] libmachine: (ha-328109-m04) Calling .GetSSHPort
	I0318 12:51:06.750941 1131058 main.go:141] libmachine: (ha-328109-m04) Calling .GetSSHKeyPath
	I0318 12:51:06.751139 1131058 main.go:141] libmachine: (ha-328109-m04) Calling .GetSSHUsername
	I0318 12:51:06.751295 1131058 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m04/id_rsa Username:docker}
	I0318 12:51:06.837782 1131058 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0318 12:51:06.893349 1131058 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0318 12:51:06.949385 1131058 main.go:141] libmachine: Stopping "ha-328109-m04"...
	I0318 12:51:06.949416 1131058 main.go:141] libmachine: (ha-328109-m04) Calling .GetState
	I0318 12:51:06.950931 1131058 main.go:141] libmachine: (ha-328109-m04) Calling .Stop
	I0318 12:51:06.954350 1131058 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 0/120
	I0318 12:51:07.955807 1131058 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 1/120
	I0318 12:51:08.958218 1131058 main.go:141] libmachine: (ha-328109-m04) Calling .GetState
	I0318 12:51:08.959604 1131058 main.go:141] libmachine: Machine "ha-328109-m04" was stopped.
	I0318 12:51:08.959625 1131058 stop.go:75] duration metric: took 2.212609266s to stop
	I0318 12:51:08.959650 1131058 stop.go:39] StopHost: ha-328109-m03
	I0318 12:51:08.959979 1131058 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:51:08.960029 1131058 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:51:08.976108 1131058 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40681
	I0318 12:51:08.976749 1131058 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:51:08.977333 1131058 main.go:141] libmachine: Using API Version  1
	I0318 12:51:08.977365 1131058 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:51:08.977794 1131058 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:51:08.979949 1131058 out.go:177] * Stopping node "ha-328109-m03"  ...
	I0318 12:51:08.981275 1131058 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0318 12:51:08.981306 1131058 main.go:141] libmachine: (ha-328109-m03) Calling .DriverName
	I0318 12:51:08.981511 1131058 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0318 12:51:08.981535 1131058 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHHostname
	I0318 12:51:08.984600 1131058 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:51:08.985014 1131058 main.go:141] libmachine: (ha-328109-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:6e:ac", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:45:09 +0000 UTC Type:0 Mac:52:54:00:13:6e:ac Iaid: IPaddr:192.168.39.241 Prefix:24 Hostname:ha-328109-m03 Clientid:01:52:54:00:13:6e:ac}
	I0318 12:51:08.985060 1131058 main.go:141] libmachine: (ha-328109-m03) DBG | domain ha-328109-m03 has defined IP address 192.168.39.241 and MAC address 52:54:00:13:6e:ac in network mk-ha-328109
	I0318 12:51:08.985213 1131058 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHPort
	I0318 12:51:08.985386 1131058 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHKeyPath
	I0318 12:51:08.985557 1131058 main.go:141] libmachine: (ha-328109-m03) Calling .GetSSHUsername
	I0318 12:51:08.985687 1131058 sshutil.go:53] new ssh client: &{IP:192.168.39.241 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m03/id_rsa Username:docker}
	I0318 12:51:09.076482 1131058 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0318 12:51:09.133422 1131058 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0318 12:51:09.190002 1131058 main.go:141] libmachine: Stopping "ha-328109-m03"...
	I0318 12:51:09.190032 1131058 main.go:141] libmachine: (ha-328109-m03) Calling .GetState
	I0318 12:51:09.191713 1131058 main.go:141] libmachine: (ha-328109-m03) Calling .Stop
	I0318 12:51:09.195282 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 0/120
	I0318 12:51:10.196947 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 1/120
	I0318 12:51:11.198257 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 2/120
	I0318 12:51:12.199693 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 3/120
	I0318 12:51:13.201969 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 4/120
	I0318 12:51:14.203977 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 5/120
	I0318 12:51:15.205691 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 6/120
	I0318 12:51:16.207087 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 7/120
	I0318 12:51:17.209059 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 8/120
	I0318 12:51:18.210626 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 9/120
	I0318 12:51:19.212673 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 10/120
	I0318 12:51:20.213939 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 11/120
	I0318 12:51:21.215405 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 12/120
	I0318 12:51:22.216782 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 13/120
	I0318 12:51:23.218332 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 14/120
	I0318 12:51:24.220050 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 15/120
	I0318 12:51:25.221699 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 16/120
	I0318 12:51:26.222903 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 17/120
	I0318 12:51:27.224374 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 18/120
	I0318 12:51:28.225669 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 19/120
	I0318 12:51:29.227627 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 20/120
	I0318 12:51:30.229304 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 21/120
	I0318 12:51:31.230854 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 22/120
	I0318 12:51:32.232287 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 23/120
	I0318 12:51:33.233740 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 24/120
	I0318 12:51:34.235510 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 25/120
	I0318 12:51:35.237788 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 26/120
	I0318 12:51:36.239202 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 27/120
	I0318 12:51:37.240611 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 28/120
	I0318 12:51:38.242002 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 29/120
	I0318 12:51:39.243783 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 30/120
	I0318 12:51:40.245248 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 31/120
	I0318 12:51:41.246708 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 32/120
	I0318 12:51:42.248274 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 33/120
	I0318 12:51:43.249487 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 34/120
	I0318 12:51:44.250819 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 35/120
	I0318 12:51:45.252059 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 36/120
	I0318 12:51:46.253305 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 37/120
	I0318 12:51:47.254494 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 38/120
	I0318 12:51:48.255702 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 39/120
	I0318 12:51:49.257365 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 40/120
	I0318 12:51:50.259556 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 41/120
	I0318 12:51:51.260980 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 42/120
	I0318 12:51:52.263054 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 43/120
	I0318 12:51:53.264415 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 44/120
	I0318 12:51:54.266157 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 45/120
	I0318 12:51:55.268175 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 46/120
	I0318 12:51:56.269615 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 47/120
	I0318 12:51:57.271127 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 48/120
	I0318 12:51:58.273069 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 49/120
	I0318 12:51:59.274859 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 50/120
	I0318 12:52:00.276312 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 51/120
	I0318 12:52:01.277973 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 52/120
	I0318 12:52:02.279461 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 53/120
	I0318 12:52:03.280962 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 54/120
	I0318 12:52:04.282863 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 55/120
	I0318 12:52:05.284190 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 56/120
	I0318 12:52:06.285501 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 57/120
	I0318 12:52:07.286793 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 58/120
	I0318 12:52:08.288448 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 59/120
	I0318 12:52:09.290379 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 60/120
	I0318 12:52:10.291777 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 61/120
	I0318 12:52:11.293853 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 62/120
	I0318 12:52:12.295322 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 63/120
	I0318 12:52:13.296659 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 64/120
	I0318 12:52:14.298631 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 65/120
	I0318 12:52:15.300075 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 66/120
	I0318 12:52:16.301644 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 67/120
	I0318 12:52:17.303087 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 68/120
	I0318 12:52:18.304920 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 69/120
	I0318 12:52:19.306586 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 70/120
	I0318 12:52:20.307823 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 71/120
	I0318 12:52:21.309325 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 72/120
	I0318 12:52:22.310795 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 73/120
	I0318 12:52:23.312246 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 74/120
	I0318 12:52:24.314451 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 75/120
	I0318 12:52:25.315810 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 76/120
	I0318 12:52:26.317184 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 77/120
	I0318 12:52:27.318826 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 78/120
	I0318 12:52:28.320288 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 79/120
	I0318 12:52:29.322285 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 80/120
	I0318 12:52:30.323720 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 81/120
	I0318 12:52:31.325161 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 82/120
	I0318 12:52:32.327380 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 83/120
	I0318 12:52:33.328821 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 84/120
	I0318 12:52:34.330263 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 85/120
	I0318 12:52:35.331583 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 86/120
	I0318 12:52:36.332980 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 87/120
	I0318 12:52:37.334271 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 88/120
	I0318 12:52:38.335740 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 89/120
	I0318 12:52:39.337463 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 90/120
	I0318 12:52:40.339619 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 91/120
	I0318 12:52:41.341049 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 92/120
	I0318 12:52:42.342270 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 93/120
	I0318 12:52:43.343689 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 94/120
	I0318 12:52:44.345534 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 95/120
	I0318 12:52:45.346773 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 96/120
	I0318 12:52:46.348022 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 97/120
	I0318 12:52:47.349504 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 98/120
	I0318 12:52:48.350875 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 99/120
	I0318 12:52:49.353337 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 100/120
	I0318 12:52:50.354789 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 101/120
	I0318 12:52:51.356313 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 102/120
	I0318 12:52:52.357610 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 103/120
	I0318 12:52:53.359074 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 104/120
	I0318 12:52:54.361074 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 105/120
	I0318 12:52:55.362933 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 106/120
	I0318 12:52:56.364505 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 107/120
	I0318 12:52:57.366177 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 108/120
	I0318 12:52:58.367555 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 109/120
	I0318 12:52:59.369485 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 110/120
	I0318 12:53:00.370878 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 111/120
	I0318 12:53:01.372436 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 112/120
	I0318 12:53:02.373739 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 113/120
	I0318 12:53:03.375218 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 114/120
	I0318 12:53:04.376827 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 115/120
	I0318 12:53:05.378102 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 116/120
	I0318 12:53:06.379328 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 117/120
	I0318 12:53:07.381060 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 118/120
	I0318 12:53:08.382427 1131058 main.go:141] libmachine: (ha-328109-m03) Waiting for machine to stop 119/120
	I0318 12:53:09.382949 1131058 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0318 12:53:09.383021 1131058 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0318 12:53:09.385141 1131058 out.go:177] 
	W0318 12:53:09.386620 1131058 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0318 12:53:09.386636 1131058 out.go:239] * 
	* 
	W0318 12:53:09.391303 1131058 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 12:53:09.392964 1131058 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-328109 -v=7 --alsologtostderr" : exit status 82
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-328109 --wait=true -v=7 --alsologtostderr
E0318 12:54:30.300107 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/client.crt: no such file or directory
E0318 12:56:24.905760 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/functional-377562/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-328109 --wait=true -v=7 --alsologtostderr: (4m22.042336945s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-328109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-328109 -n ha-328109
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-328109 logs -n 25: (2.218815232s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-328109 cp ha-328109-m03:/home/docker/cp-test.txt                              | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109-m02:/home/docker/cp-test_ha-328109-m03_ha-328109-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-328109 ssh -n                                                                 | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-328109 ssh -n ha-328109-m02 sudo cat                                          | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | /home/docker/cp-test_ha-328109-m03_ha-328109-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-328109 cp ha-328109-m03:/home/docker/cp-test.txt                              | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109-m04:/home/docker/cp-test_ha-328109-m03_ha-328109-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-328109 ssh -n                                                                 | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-328109 ssh -n ha-328109-m04 sudo cat                                          | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | /home/docker/cp-test_ha-328109-m03_ha-328109-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-328109 cp testdata/cp-test.txt                                                | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-328109 ssh -n                                                                 | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-328109 cp ha-328109-m04:/home/docker/cp-test.txt                              | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1988805859/001/cp-test_ha-328109-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-328109 ssh -n                                                                 | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-328109 cp ha-328109-m04:/home/docker/cp-test.txt                              | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109:/home/docker/cp-test_ha-328109-m04_ha-328109.txt                       |           |         |         |                     |                     |
	| ssh     | ha-328109 ssh -n                                                                 | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-328109 ssh -n ha-328109 sudo cat                                              | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | /home/docker/cp-test_ha-328109-m04_ha-328109.txt                                 |           |         |         |                     |                     |
	| cp      | ha-328109 cp ha-328109-m04:/home/docker/cp-test.txt                              | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109-m02:/home/docker/cp-test_ha-328109-m04_ha-328109-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-328109 ssh -n                                                                 | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-328109 ssh -n ha-328109-m02 sudo cat                                          | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | /home/docker/cp-test_ha-328109-m04_ha-328109-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-328109 cp ha-328109-m04:/home/docker/cp-test.txt                              | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109-m03:/home/docker/cp-test_ha-328109-m04_ha-328109-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-328109 ssh -n                                                                 | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-328109 ssh -n ha-328109-m03 sudo cat                                          | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | /home/docker/cp-test_ha-328109-m04_ha-328109-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-328109 node stop m02 -v=7                                                     | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-328109 node start m02 -v=7                                                    | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:50 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-328109 -v=7                                                           | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:51 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-328109 -v=7                                                                | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:51 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-328109 --wait=true -v=7                                                    | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:53 UTC | 18 Mar 24 12:57 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-328109                                                                | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:57 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 12:53:09
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 12:53:09.455626 1131437 out.go:291] Setting OutFile to fd 1 ...
	I0318 12:53:09.455754 1131437 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:53:09.455763 1131437 out.go:304] Setting ErrFile to fd 2...
	I0318 12:53:09.455768 1131437 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:53:09.455932 1131437 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
	I0318 12:53:09.456505 1131437 out.go:298] Setting JSON to false
	I0318 12:53:09.457502 1131437 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":16536,"bootTime":1710749853,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 12:53:09.457562 1131437 start.go:139] virtualization: kvm guest
	I0318 12:53:09.460198 1131437 out.go:177] * [ha-328109] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 12:53:09.461706 1131437 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 12:53:09.463291 1131437 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 12:53:09.461730 1131437 notify.go:220] Checking for updates...
	I0318 12:53:09.466076 1131437 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 12:53:09.467454 1131437 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18429-1106816/.minikube
	I0318 12:53:09.468769 1131437 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 12:53:09.470027 1131437 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 12:53:09.471895 1131437 config.go:182] Loaded profile config "ha-328109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:53:09.472040 1131437 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 12:53:09.472450 1131437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:53:09.472524 1131437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:53:09.494619 1131437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34951
	I0318 12:53:09.495071 1131437 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:53:09.495708 1131437 main.go:141] libmachine: Using API Version  1
	I0318 12:53:09.495731 1131437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:53:09.496100 1131437 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:53:09.496317 1131437 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:53:09.531013 1131437 out.go:177] * Using the kvm2 driver based on existing profile
	I0318 12:53:09.532476 1131437 start.go:297] selected driver: kvm2
	I0318 12:53:09.532497 1131437 start.go:901] validating driver "kvm2" against &{Name:ha-328109 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.28.4 ClusterName:ha-328109 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.253 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.48 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 12:53:09.532677 1131437 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 12:53:09.533076 1131437 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 12:53:09.533173 1131437 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18429-1106816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 12:53:09.547948 1131437 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 12:53:09.548664 1131437 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 12:53:09.548749 1131437 cni.go:84] Creating CNI manager for ""
	I0318 12:53:09.548785 1131437 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0318 12:53:09.548856 1131437 start.go:340] cluster config:
	{Name:ha-328109 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-328109 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.253 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.48 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-til
ler:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 12:53:09.548976 1131437 iso.go:125] acquiring lock: {Name:mke5f9989ad60de6f54f25c411af7da9f3932a4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 12:53:09.550863 1131437 out.go:177] * Starting "ha-328109" primary control-plane node in "ha-328109" cluster
	I0318 12:53:09.552228 1131437 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 12:53:09.552275 1131437 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0318 12:53:09.552287 1131437 cache.go:56] Caching tarball of preloaded images
	I0318 12:53:09.552403 1131437 preload.go:173] Found /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 12:53:09.552416 1131437 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0318 12:53:09.552551 1131437 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/config.json ...
	I0318 12:53:09.552757 1131437 start.go:360] acquireMachinesLock for ha-328109: {Name:mk0b1a2e71faf079d0c16c4e1393bdff17be3dfd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 12:53:09.552797 1131437 start.go:364] duration metric: took 22.129µs to acquireMachinesLock for "ha-328109"
	I0318 12:53:09.552811 1131437 start.go:96] Skipping create...Using existing machine configuration
	I0318 12:53:09.552819 1131437 fix.go:54] fixHost starting: 
	I0318 12:53:09.553112 1131437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:53:09.553151 1131437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:53:09.566551 1131437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40667
	I0318 12:53:09.566979 1131437 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:53:09.567472 1131437 main.go:141] libmachine: Using API Version  1
	I0318 12:53:09.567496 1131437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:53:09.567825 1131437 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:53:09.568032 1131437 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:53:09.568191 1131437 main.go:141] libmachine: (ha-328109) Calling .GetState
	I0318 12:53:09.569595 1131437 fix.go:112] recreateIfNeeded on ha-328109: state=Running err=<nil>
	W0318 12:53:09.569613 1131437 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 12:53:09.571595 1131437 out.go:177] * Updating the running kvm2 "ha-328109" VM ...
	I0318 12:53:09.573014 1131437 machine.go:94] provisionDockerMachine start ...
	I0318 12:53:09.573036 1131437 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:53:09.573236 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:53:09.575790 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:53:09.576291 1131437 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:53:09.576345 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:53:09.576487 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:53:09.576703 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:53:09.576870 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:53:09.577034 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:53:09.577188 1131437 main.go:141] libmachine: Using SSH client type: native
	I0318 12:53:09.577387 1131437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0318 12:53:09.577397 1131437 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 12:53:09.686081 1131437 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-328109
	
	I0318 12:53:09.686108 1131437 main.go:141] libmachine: (ha-328109) Calling .GetMachineName
	I0318 12:53:09.686351 1131437 buildroot.go:166] provisioning hostname "ha-328109"
	I0318 12:53:09.686375 1131437 main.go:141] libmachine: (ha-328109) Calling .GetMachineName
	I0318 12:53:09.686561 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:53:09.689129 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:53:09.689549 1131437 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:53:09.689575 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:53:09.689755 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:53:09.689953 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:53:09.690117 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:53:09.690258 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:53:09.690451 1131437 main.go:141] libmachine: Using SSH client type: native
	I0318 12:53:09.690650 1131437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0318 12:53:09.690667 1131437 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-328109 && echo "ha-328109" | sudo tee /etc/hostname
	I0318 12:53:09.820224 1131437 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-328109
	
	I0318 12:53:09.820258 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:53:09.822679 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:53:09.823096 1131437 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:53:09.823126 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:53:09.823272 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:53:09.823454 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:53:09.823623 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:53:09.823758 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:53:09.823928 1131437 main.go:141] libmachine: Using SSH client type: native
	I0318 12:53:09.824094 1131437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0318 12:53:09.824109 1131437 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-328109' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-328109/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-328109' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 12:53:09.930605 1131437 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 12:53:09.930637 1131437 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18429-1106816/.minikube CaCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18429-1106816/.minikube}
	I0318 12:53:09.930658 1131437 buildroot.go:174] setting up certificates
	I0318 12:53:09.930667 1131437 provision.go:84] configureAuth start
	I0318 12:53:09.930677 1131437 main.go:141] libmachine: (ha-328109) Calling .GetMachineName
	I0318 12:53:09.930962 1131437 main.go:141] libmachine: (ha-328109) Calling .GetIP
	I0318 12:53:09.933844 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:53:09.934226 1131437 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:53:09.934255 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:53:09.934387 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:53:09.936594 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:53:09.936941 1131437 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:53:09.936974 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:53:09.937076 1131437 provision.go:143] copyHostCerts
	I0318 12:53:09.937118 1131437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 12:53:09.937154 1131437 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem, removing ...
	I0318 12:53:09.937165 1131437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 12:53:09.937231 1131437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem (1078 bytes)
	I0318 12:53:09.937300 1131437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 12:53:09.937324 1131437 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem, removing ...
	I0318 12:53:09.937336 1131437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 12:53:09.937363 1131437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem (1123 bytes)
	I0318 12:53:09.937412 1131437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 12:53:09.937428 1131437 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem, removing ...
	I0318 12:53:09.937434 1131437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 12:53:09.937454 1131437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem (1679 bytes)
	I0318 12:53:09.937497 1131437 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem org=jenkins.ha-328109 san=[127.0.0.1 192.168.39.253 ha-328109 localhost minikube]
	I0318 12:53:10.042323 1131437 provision.go:177] copyRemoteCerts
	I0318 12:53:10.042400 1131437 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 12:53:10.042426 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:53:10.044882 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:53:10.045334 1131437 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:53:10.045363 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:53:10.045600 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:53:10.045891 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:53:10.046093 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:53:10.046245 1131437 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109/id_rsa Username:docker}
	I0318 12:53:10.132669 1131437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0318 12:53:10.132763 1131437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 12:53:10.167965 1131437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0318 12:53:10.168038 1131437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0318 12:53:10.203920 1131437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0318 12:53:10.203985 1131437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 12:53:10.234682 1131437 provision.go:87] duration metric: took 304.0003ms to configureAuth
	I0318 12:53:10.234716 1131437 buildroot.go:189] setting minikube options for container-runtime
	I0318 12:53:10.234998 1131437 config.go:182] Loaded profile config "ha-328109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:53:10.235124 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:53:10.237631 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:53:10.238024 1131437 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:53:10.238049 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:53:10.238178 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:53:10.238361 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:53:10.238504 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:53:10.238625 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:53:10.238793 1131437 main.go:141] libmachine: Using SSH client type: native
	I0318 12:53:10.238966 1131437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0318 12:53:10.238987 1131437 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 12:54:41.248658 1131437 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 12:54:41.248696 1131437 machine.go:97] duration metric: took 1m31.675664373s to provisionDockerMachine
	I0318 12:54:41.248713 1131437 start.go:293] postStartSetup for "ha-328109" (driver="kvm2")
	I0318 12:54:41.248725 1131437 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 12:54:41.248744 1131437 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:54:41.249146 1131437 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 12:54:41.249190 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:54:41.252456 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:54:41.252926 1131437 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:54:41.252950 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:54:41.253085 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:54:41.253285 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:54:41.253473 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:54:41.253622 1131437 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109/id_rsa Username:docker}
	I0318 12:54:41.336717 1131437 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 12:54:41.341914 1131437 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 12:54:41.341957 1131437 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/addons for local assets ...
	I0318 12:54:41.342049 1131437 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/files for local assets ...
	I0318 12:54:41.342138 1131437 filesync.go:149] local asset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> 11141362.pem in /etc/ssl/certs
	I0318 12:54:41.342149 1131437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> /etc/ssl/certs/11141362.pem
	I0318 12:54:41.342292 1131437 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 12:54:41.353634 1131437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 12:54:41.380795 1131437 start.go:296] duration metric: took 132.063863ms for postStartSetup
	I0318 12:54:41.380853 1131437 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:54:41.381178 1131437 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0318 12:54:41.381206 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:54:41.383999 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:54:41.384377 1131437 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:54:41.384405 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:54:41.384537 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:54:41.384750 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:54:41.384947 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:54:41.385105 1131437 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109/id_rsa Username:docker}
	W0318 12:54:41.468533 1131437 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0318 12:54:41.468562 1131437 fix.go:56] duration metric: took 1m31.915743876s for fixHost
	I0318 12:54:41.468608 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:54:41.471380 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:54:41.471769 1131437 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:54:41.471799 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:54:41.472007 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:54:41.472222 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:54:41.472433 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:54:41.472552 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:54:41.472721 1131437 main.go:141] libmachine: Using SSH client type: native
	I0318 12:54:41.472961 1131437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0318 12:54:41.472976 1131437 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 12:54:41.577873 1131437 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710766481.545492902
	
	I0318 12:54:41.577902 1131437 fix.go:216] guest clock: 1710766481.545492902
	I0318 12:54:41.577912 1131437 fix.go:229] Guest: 2024-03-18 12:54:41.545492902 +0000 UTC Remote: 2024-03-18 12:54:41.468591753 +0000 UTC m=+92.063283113 (delta=76.901149ms)
	I0318 12:54:41.577934 1131437 fix.go:200] guest clock delta is within tolerance: 76.901149ms
	I0318 12:54:41.577939 1131437 start.go:83] releasing machines lock for "ha-328109", held for 1m32.025133507s
	I0318 12:54:41.577994 1131437 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:54:41.578319 1131437 main.go:141] libmachine: (ha-328109) Calling .GetIP
	I0318 12:54:41.580943 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:54:41.581338 1131437 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:54:41.581378 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:54:41.581503 1131437 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:54:41.582117 1131437 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:54:41.582302 1131437 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:54:41.582400 1131437 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 12:54:41.582448 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:54:41.582529 1131437 ssh_runner.go:195] Run: cat /version.json
	I0318 12:54:41.582547 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:54:41.584944 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:54:41.585287 1131437 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:54:41.585342 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:54:41.585411 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:54:41.585422 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:54:41.585579 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:54:41.585738 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:54:41.585884 1131437 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:54:41.585915 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:54:41.585916 1131437 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109/id_rsa Username:docker}
	I0318 12:54:41.586052 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:54:41.586204 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:54:41.586384 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:54:41.586562 1131437 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109/id_rsa Username:docker}
	I0318 12:54:41.662255 1131437 ssh_runner.go:195] Run: systemctl --version
	I0318 12:54:41.691308 1131437 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 12:54:41.863307 1131437 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 12:54:41.872558 1131437 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 12:54:41.872643 1131437 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 12:54:41.882670 1131437 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0318 12:54:41.882692 1131437 start.go:494] detecting cgroup driver to use...
	I0318 12:54:41.882750 1131437 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 12:54:41.899679 1131437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 12:54:41.914952 1131437 docker.go:217] disabling cri-docker service (if available) ...
	I0318 12:54:41.915030 1131437 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 12:54:41.930037 1131437 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 12:54:41.945579 1131437 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 12:54:42.101881 1131437 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 12:54:42.264933 1131437 docker.go:233] disabling docker service ...
	I0318 12:54:42.265001 1131437 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 12:54:42.283512 1131437 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 12:54:42.297842 1131437 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 12:54:42.458746 1131437 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 12:54:42.613087 1131437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 12:54:42.627693 1131437 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 12:54:42.650018 1131437 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 12:54:42.650087 1131437 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 12:54:42.661645 1131437 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 12:54:42.661714 1131437 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 12:54:42.673195 1131437 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 12:54:42.684966 1131437 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 12:54:42.696709 1131437 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 12:54:42.708955 1131437 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 12:54:42.719257 1131437 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 12:54:42.729549 1131437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:54:42.878150 1131437 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 12:54:51.483164 1131437 ssh_runner.go:235] Completed: sudo systemctl restart crio: (8.604954948s)
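The lines above show CRI-O being pointed at the registry.k8s.io/pause:3.9 pause image and the cgroupfs cgroup manager by sed-editing /etc/crio/crio.conf.d/02-crio.conf over SSH, followed by a daemon-reload and service restart. A minimal Go sketch of the same two rewrites on an in-memory string (a hypothetical excerpt of that conf file), purely illustrative and not minikube's own code path:

package main

import (
	"fmt"
	"regexp"
)

// Illustrative local equivalent of the sed edits in the log: replace the
// pause_image and cgroup_manager lines wherever they appear.
var (
	pauseImageRe    = regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroupManagerRe = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
)

func rewriteCrioConf(conf string) string {
	conf = pauseImageRe.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = cgroupManagerRe.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	return conf
}

func main() {
	// Hypothetical excerpt of /etc/crio/crio.conf.d/02-crio.conf.
	sample := "pause_image = \"registry.k8s.io/pause:3.5\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(rewriteCrioConf(sample))
}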
	I0318 12:54:51.483205 1131437 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 12:54:51.483263 1131437 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 12:54:51.489699 1131437 start.go:562] Will wait 60s for crictl version
	I0318 12:54:51.489748 1131437 ssh_runner.go:195] Run: which crictl
	I0318 12:54:51.494036 1131437 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 12:54:51.541634 1131437 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 12:54:51.541719 1131437 ssh_runner.go:195] Run: crio --version
	I0318 12:54:51.578631 1131437 ssh_runner.go:195] Run: crio --version
	I0318 12:54:51.613758 1131437 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 12:54:51.615160 1131437 main.go:141] libmachine: (ha-328109) Calling .GetIP
	I0318 12:54:51.617668 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:54:51.617992 1131437 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:54:51.618021 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:54:51.618174 1131437 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0318 12:54:51.623657 1131437 kubeadm.go:877] updating cluster {Name:ha-328109 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Cl
usterName:ha-328109 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.253 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.48 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker M
ountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 12:54:51.623802 1131437 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 12:54:51.623847 1131437 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 12:54:51.671851 1131437 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 12:54:51.671873 1131437 crio.go:415] Images already preloaded, skipping extraction
	I0318 12:54:51.671933 1131437 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 12:54:51.708378 1131437 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 12:54:51.708405 1131437 cache_images.go:84] Images are preloaded, skipping loading
	I0318 12:54:51.708414 1131437 kubeadm.go:928] updating node { 192.168.39.253 8443 v1.28.4 crio true true} ...
	I0318 12:54:51.708548 1131437 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-328109 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.253
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-328109 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 12:54:51.708634 1131437 ssh_runner.go:195] Run: crio config
	I0318 12:54:51.765085 1131437 cni.go:84] Creating CNI manager for ""
	I0318 12:54:51.765111 1131437 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0318 12:54:51.765123 1131437 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 12:54:51.765147 1131437 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.253 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-328109 NodeName:ha-328109 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.253"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.253 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 12:54:51.765352 1131437 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.253
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-328109"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.253
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.253"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
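The KubeletConfiguration in the generated kubeadm config above disables disk-pressure eviction (all evictionHard thresholds set to "0%") and turns failSwapOn off. A small Go sketch, assuming the rendered YAML is available in memory and using sigs.k8s.io/yaml to decode just those fields (not the code path minikube itself uses):

package main

import (
	"fmt"

	"sigs.k8s.io/yaml"
)

// Minimal subset of kubelet.config.k8s.io/v1beta1 KubeletConfiguration;
// field names match the JSON keys used in the document above.
type kubeletSubset struct {
	CgroupDriver                string            `json:"cgroupDriver"`
	ImageGCHighThresholdPercent int32             `json:"imageGCHighThresholdPercent"`
	EvictionHard                map[string]string `json:"evictionHard"`
	FailSwapOn                  bool              `json:"failSwapOn"`
}

func main() {
	// Hypothetical in-memory copy of the rendered section; a real check
	// would read the generated kubeadm.yaml instead.
	doc := []byte(`
cgroupDriver: cgroupfs
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
`)
	var cfg kubeletSubset
	if err := yaml.Unmarshal(doc, &cfg); err != nil {
		panic(err)
	}
	fmt.Println("cgroup driver:", cfg.CgroupDriver)
	fmt.Println("eviction thresholds:", cfg.EvictionHard)
}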
	
	I0318 12:54:51.765376 1131437 kube-vip.go:111] generating kube-vip config ...
	I0318 12:54:51.765438 1131437 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0318 12:54:51.779130 1131437 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0318 12:54:51.779287 1131437 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
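The kube-vip static pod above advertises the HA VIP 192.168.39.254 and configures leader election with a 5s lease, 3s renew deadline and 1s retry period. A hypothetical helper (not part of kube-vip or minikube) that sanity-checks that ordering for env values like the ones rendered above:

package main

import (
	"fmt"
	"strconv"
)

// checkVIPLeaderElection verifies leaseduration > renewdeadline > retryperiod > 0,
// the ordering leader election generally requires.
func checkVIPLeaderElection(lease, renew, retry string) error {
	l, err := strconv.Atoi(lease)
	if err != nil {
		return fmt.Errorf("vip_leaseduration: %w", err)
	}
	r, err := strconv.Atoi(renew)
	if err != nil {
		return fmt.Errorf("vip_renewdeadline: %w", err)
	}
	p, err := strconv.Atoi(retry)
	if err != nil {
		return fmt.Errorf("vip_retryperiod: %w", err)
	}
	if !(l > r && r > p && p > 0) {
		return fmt.Errorf("want leaseduration > renewdeadline > retryperiod > 0, got %d/%d/%d", l, r, p)
	}
	return nil
}

func main() {
	// Values from the generated config: 5 / 3 / 1 seconds.
	if err := checkVIPLeaderElection("5", "3", "1"); err != nil {
		fmt.Println("invalid kube-vip timings:", err)
		return
	}
	fmt.Println("kube-vip leader-election timings look consistent")
}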
	I0318 12:54:51.779387 1131437 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 12:54:51.790217 1131437 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 12:54:51.790288 1131437 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0318 12:54:51.800654 1131437 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0318 12:54:51.819725 1131437 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 12:54:51.838314 1131437 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0318 12:54:51.857126 1131437 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0318 12:54:51.877808 1131437 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0318 12:54:51.882286 1131437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:54:52.036553 1131437 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 12:54:52.052829 1131437 certs.go:68] Setting up /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109 for IP: 192.168.39.253
	I0318 12:54:52.052868 1131437 certs.go:194] generating shared ca certs ...
	I0318 12:54:52.052892 1131437 certs.go:226] acquiring lock for ca certs: {Name:mka24dbce78b8549c3bcfe7842863cce2dcda207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:54:52.053111 1131437 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key
	I0318 12:54:52.053161 1131437 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key
	I0318 12:54:52.053171 1131437 certs.go:256] generating profile certs ...
	I0318 12:54:52.053251 1131437 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/client.key
	I0318 12:54:52.053278 1131437 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key.8c23f119
	I0318 12:54:52.053298 1131437 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt.8c23f119 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.253 192.168.39.246 192.168.39.241 192.168.39.254]
	I0318 12:54:52.207972 1131437 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt.8c23f119 ...
	I0318 12:54:52.208012 1131437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt.8c23f119: {Name:mkbd66155d7290e4053cdbaf559cad07c945947d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:54:52.208206 1131437 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key.8c23f119 ...
	I0318 12:54:52.208219 1131437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key.8c23f119: {Name:mk22b0e81237fd60af1980ed17fc7999742b869d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:54:52.208286 1131437 certs.go:381] copying /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt.8c23f119 -> /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt
	I0318 12:54:52.208525 1131437 certs.go:385] copying /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key.8c23f119 -> /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key
	I0318 12:54:52.208675 1131437 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/proxy-client.key
	I0318 12:54:52.208692 1131437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 12:54:52.208704 1131437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0318 12:54:52.208723 1131437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 12:54:52.208733 1131437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 12:54:52.208748 1131437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0318 12:54:52.208760 1131437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0318 12:54:52.208774 1131437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0318 12:54:52.208784 1131437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0318 12:54:52.208840 1131437 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem (1338 bytes)
	W0318 12:54:52.208878 1131437 certs.go:480] ignoring /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136_empty.pem, impossibly tiny 0 bytes
	I0318 12:54:52.208888 1131437 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 12:54:52.208919 1131437 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem (1078 bytes)
	I0318 12:54:52.208942 1131437 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem (1123 bytes)
	I0318 12:54:52.208967 1131437 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem (1679 bytes)
	I0318 12:54:52.209001 1131437 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 12:54:52.209030 1131437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem -> /usr/share/ca-certificates/1114136.pem
	I0318 12:54:52.209044 1131437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> /usr/share/ca-certificates/11141362.pem
	I0318 12:54:52.209056 1131437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:54:52.209933 1131437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 12:54:52.287577 1131437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 12:54:52.329714 1131437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 12:54:52.356968 1131437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 12:54:52.388896 1131437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0318 12:54:52.416373 1131437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 12:54:52.444943 1131437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 12:54:52.476929 1131437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 12:54:52.511334 1131437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem --> /usr/share/ca-certificates/1114136.pem (1338 bytes)
	I0318 12:54:52.538242 1131437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /usr/share/ca-certificates/11141362.pem (1708 bytes)
	I0318 12:54:52.564846 1131437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 12:54:52.604622 1131437 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 12:54:52.639201 1131437 ssh_runner.go:195] Run: openssl version
	I0318 12:54:52.646185 1131437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11141362.pem && ln -fs /usr/share/ca-certificates/11141362.pem /etc/ssl/certs/11141362.pem"
	I0318 12:54:52.662103 1131437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11141362.pem
	I0318 12:54:52.667308 1131437 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:26 /usr/share/ca-certificates/11141362.pem
	I0318 12:54:52.667416 1131437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11141362.pem
	I0318 12:54:52.673670 1131437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11141362.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 12:54:52.687024 1131437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 12:54:52.699801 1131437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:54:52.705106 1131437 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:54:52.705156 1131437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:54:52.711744 1131437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 12:54:52.723263 1131437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1114136.pem && ln -fs /usr/share/ca-certificates/1114136.pem /etc/ssl/certs/1114136.pem"
	I0318 12:54:52.736043 1131437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1114136.pem
	I0318 12:54:52.740964 1131437 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:26 /usr/share/ca-certificates/1114136.pem
	I0318 12:54:52.741004 1131437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1114136.pem
	I0318 12:54:52.747018 1131437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1114136.pem /etc/ssl/certs/51391683.0"
	I0318 12:54:52.757419 1131437 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 12:54:52.762166 1131437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 12:54:52.768182 1131437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 12:54:52.774288 1131437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 12:54:52.780310 1131437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 12:54:52.790899 1131437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 12:54:52.797096 1131437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
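The run lines above check each control-plane certificate with `openssl x509 -noout -checkend 86400`, i.e. whether it expires within the next 24 hours. A self-contained Go sketch of the same check using crypto/x509; the file path in main is hypothetical (the log inspects certs under /var/lib/minikube/certs on the VM):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within the given window, roughly what `openssl x509 -checkend` tests.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Hypothetical local path for illustration.
	soon, err := expiresWithin("apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}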
	I0318 12:54:52.803146 1131437 kubeadm.go:391] StartCluster: {Name:ha-328109 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Clust
erName:ha-328109 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.253 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.48 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Moun
tIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 12:54:52.803310 1131437 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 12:54:52.803385 1131437 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 12:54:52.851910 1131437 cri.go:89] found id: "1a189929210d919728381e96a6d9318a1e316fa85c27ebef21233d0d47127af6"
	I0318 12:54:52.851939 1131437 cri.go:89] found id: "dde4e0e7b3b10dff8ab01deb49351214d58d396e82ba052fdb95f8eab71407ae"
	I0318 12:54:52.851944 1131437 cri.go:89] found id: "575d4d72c34ad10849775cbb98cf1577b733b01400518921cbab3e061da5b2cd"
	I0318 12:54:52.851955 1131437 cri.go:89] found id: "5ab2478c4da6ad7b5451bbe4902eef614054446a19eb7b3d8d3c785dbeb01621"
	I0318 12:54:52.851959 1131437 cri.go:89] found id: "0b630b0fc05d4dd89718593f42880e41e071014b4d0f87791cba4fbf8cbe8785"
	I0318 12:54:52.851964 1131437 cri.go:89] found id: "742842736e1b52735c8a18b3d61ed7ee1d6157f2ca03ec317995f36597c45ac6"
	I0318 12:54:52.851968 1131437 cri.go:89] found id: "82a8d2ac6a60c0d04e48a38416de7feb33d590cfcd74d28da2317aa1a5781135"
	I0318 12:54:52.851971 1131437 cri.go:89] found id: "f2c5cd4a724230c91f476a1bb5326701801eff1b70dc4db0510f092d89ea1562"
	I0318 12:54:52.851975 1131437 cri.go:89] found id: "f8d915a384e6a3c259a15968303b0ddc686a9ced49722152813fc101b3c78cc6"
	I0318 12:54:52.851982 1131437 cri.go:89] found id: "55e393cf77a1b472d984125ae3bd870d3fed9dca4eeefc346bda04ae88654205"
	I0318 12:54:52.851989 1131437 cri.go:89] found id: "de552ed42d49524bbca97633e73d6ac4e5301a813a012290635def375a78dcd6"
	I0318 12:54:52.851992 1131437 cri.go:89] found id: "a10929bb9737267586a458e8f8aac60622ae3a299b6b542776e59e2b12e4ffef"
	I0318 12:54:52.851996 1131437 cri.go:89] found id: "7e2150d8010e2a1399f1df83c9dba81c77d606e55e0c21b18da231e82e01413a"
	I0318 12:54:52.851999 1131437 cri.go:89] found id: ""
	I0318 12:54:52.852054 1131437 ssh_runner.go:195] Run: sudo runc list -f json
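The crictl invocation a few lines above enumerates kube-system container IDs by label before the cluster is restarted. A short Go sketch of that same enumeration, assuming crictl is on PATH and may be run via sudo; error handling is simplified and this is not minikube's own implementation:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers mirrors the command shown in the log and returns
// the IDs of all containers labelled with the kube-system namespace.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
}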
	
	
	==> CRI-O <==
	Mar 18 12:57:32 ha-328109 crio[3851]: time="2024-03-18 12:57:32.441770230Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:7e5b7e3fd47f4bbe14e7d94f794829b1574c8f826780e0c85d2bd0bd0088b1e0,Verbose:false,}" file="otel-collector/interceptors.go:62" id=7715545a-ba44-47a3-9c20-cfbcaf448ee2 name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 18 12:57:32 ha-328109 crio[3851]: time="2024-03-18 12:57:32.442013889Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:7e5b7e3fd47f4bbe14e7d94f794829b1574c8f826780e0c85d2bd0bd0088b1e0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1710766543306706289,StartedAt:1710766543375862021,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-controller-manager:v1.28.4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f004a20401b95f693a90cc8d0b7e8acc,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/f004a20401b95f693a90cc8d0b7e8acc/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/f004a20401b95f693a90cc8d0b7e8acc/containers/kube-controller-manager/f718c87d,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/controller-manager.conf,HostPath:/etc/kubernetes/controller-manager.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]
*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,HostPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-controller-manager-ha-328109_f004a20401b95f693a90cc8d0b7e8acc/kube-controller-manager/2.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:204,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*
HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=7715545a-ba44-47a3-9c20-cfbcaf448ee2 name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 18 12:57:32 ha-328109 crio[3851]: time="2024-03-18 12:57:32.443569301Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:deb997db5453b48f305b99734da3ba8a7fab972de98530d49064bdac432e8a08,Verbose:false,}" file="otel-collector/interceptors.go:62" id=82ffdc5f-5208-4b0a-aeec-3257ce566779 name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 18 12:57:32 ha-328109 crio[3851]: time="2024-03-18 12:57:32.446530806Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:deb997db5453b48f305b99734da3ba8a7fab972de98530d49064bdac432e8a08,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},State:CONTAINER_RUNNING,CreatedAt:1710766542247907983,StartedAt:1710766542304299788,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.28.4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0919befc6ed870de46dfd820b38f0ac2,},Annotations:map[string]string{io.kubernetes.container.hash: 110d18ba,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/0919befc6ed870de46dfd820b38f0ac2/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/0919befc6ed870de46dfd820b38f0ac2/containers/kube-apiserver/01a73c11,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib
/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-apiserver-ha-328109_0919befc6ed870de46dfd820b38f0ac2/kube-apiserver/3.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:256,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=82ffdc5f-5208-4b0a-aeec-3257ce566779 name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 18 12:57:32 ha-328109 crio[3851]: time="2024-03-18 12:57:32.447750705Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:262b1f5b882c6f972233f82ea6c8c37741119410b426deccb897e2c2ddef5bae,Verbose:false,}" file="otel-collector/interceptors.go:62" id=031fd333-390f-4163-9d7b-8dfa7bcd765b name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 18 12:57:32 ha-328109 crio[3851]: time="2024-03-18 12:57:32.449469462Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:262b1f5b882c6f972233f82ea6c8c37741119410b426deccb897e2c2ddef5bae,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1710766532615188131,StartedAt:1710766532643806440,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox:1.28,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-fz4kl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5a0215bb-df62-44b9-9d60-d45778880b8b,},Annotations:map[string]string{io.kubernetes.container.hash: 25c17d37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.term
inationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/5a0215bb-df62-44b9-9d60-d45778880b8b/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/5a0215bb-df62-44b9-9d60-d45778880b8b/containers/busybox/d6ec8cfb,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/5a0215bb-df62-44b9-9d60-d45778880b8b/volumes/kubernetes.io~projected/kube-api-access-4twgx,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/default_busybox-5b5d89c9d6-fz4kl_5a0215bb-df62-44b9-9d60-d45778880b8b/busybox/1.log,Resources:&Contain
erResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=031fd333-390f-4163-9d7b-8dfa7bcd765b name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 18 12:57:32 ha-328109 crio[3851]: time="2024-03-18 12:57:32.450223224Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:bde47888a6d380a6b21e060a28901be678b90bd9441d281802531d5d40ae7090,Verbose:false,}" file="otel-collector/interceptors.go:62" id=1f1c195b-88c2-44d8-8cde-d66788039ca6 name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 18 12:57:32 ha-328109 crio[3851]: time="2024-03-18 12:57:32.450367554Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:bde47888a6d380a6b21e060a28901be678b90bd9441d281802531d5d40ae7090,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},State:CONTAINER_RUNNING,CreatedAt:1710766500665711553,StartedAt:1710766500733960377,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip:v0.7.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9ffc89cd42ea8da4e6070b43e0ace35,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminat
ionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/c9ffc89cd42ea8da4e6070b43e0ace35/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/c9ffc89cd42ea8da4e6070b43e0ace35/containers/kube-vip/c01d4492,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/admin.conf,HostPath:/etc/kubernetes/admin.conf,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-vip-ha-328109_c9ffc89cd42ea8da4e6070b43e0ace35/kube-vip/3.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000
,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=1f1c195b-88c2-44d8-8cde-d66788039ca6 name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 18 12:57:32 ha-328109 crio[3851]: time="2024-03-18 12:57:32.451189122Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:a736d02ea6c00f9dba7ff099370fb79b5a0a10daa5881ea70f382f4ef3b8777c,Verbose:false,}" file="otel-collector/interceptors.go:62" id=5beeb8b2-0e68-4361-a632-7c834399b4e7 name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 18 12:57:32 ha-328109 crio[3851]: time="2024-03-18 12:57:32.451414177Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:a736d02ea6c00f9dba7ff099370fb79b5a0a10daa5881ea70f382f4ef3b8777c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1710766499710213357,StartedAt:1710766500008942087,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-proxy:v1.28.4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dhz88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afb0afad-2b88-4abb-9039-aaf9c64ad920,},Annotations:map[string]string{io.kubernetes.container.hash: 34178776,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.
terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/run/xtables.lock,HostPath:/run/xtables.lock,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/lib/modules,HostPath:/lib/modules,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/afb0afad-2b88-4abb-9039-aaf9c64ad920/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/afb0afad-2b88-4abb-9039-aaf9c64ad920/containers/kube-proxy/ca3e26b2,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/kube-proxy,HostPath:/var/lib/kub
elet/pods/afb0afad-2b88-4abb-9039-aaf9c64ad920/volumes/kubernetes.io~configmap/kube-proxy,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/afb0afad-2b88-4abb-9039-aaf9c64ad920/volumes/kubernetes.io~projected/kube-api-access-ctnc7,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-proxy-dhz88_afb0afad-2b88-4abb-9039-aaf9c64ad920/kube-proxy/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collecto
r/interceptors.go:74" id=5beeb8b2-0e68-4361-a632-7c834399b4e7 name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 18 12:57:32 ha-328109 crio[3851]: time="2024-03-18 12:57:32.451981913Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:6f6eaed81eb434344922005711291d3960965a5b6f4210f844c7981f0c1f817c,Verbose:false,}" file="otel-collector/interceptors.go:62" id=60589a3b-f339-4204-81d1-f62fa67ecc40 name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 18 12:57:32 ha-328109 crio[3851]: time="2024-03-18 12:57:32.452435472Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:6f6eaed81eb434344922005711291d3960965a5b6f4210f844c7981f0c1f817c,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1710766499677338658,StartedAt:1710766499831290201,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/coredns/coredns:v1.10.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c78nc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c1159dc-6545-41a6-bb4a-75fdab519c9e,},Annotations:map[string]string{io.kubernetes.container.hash: 5111e8b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"container
Port\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/coredns,HostPath:/var/lib/kubelet/pods/7c1159dc-6545-41a6-bb4a-75fdab519c9e/volumes/kubernetes.io~configmap/config-volume,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/7c1159dc-6545-41a6-bb4a-75fdab519c9e/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/7c1159dc-6545-41a6-bb4a-75fdab519c9e/containers/coredns/ef8a509d,Readonly:false,SelinuxRelabel:false,Propagation:PROPAG
ATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/7c1159dc-6545-41a6-bb4a-75fdab519c9e/volumes/kubernetes.io~projected/kube-api-access-bvlcr,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_coredns-5dd5756b68-c78nc_7c1159dc-6545-41a6-bb4a-75fdab519c9e/coredns/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:178257920,OomScoreAdj:967,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:178257920,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=60589a3b-f339-4204-81d1-f62fa67ecc40 name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 18 12:57:32 ha-328109 crio[3851]: time="2024-03-18 12:57:32.457514957Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:4506a10fb16676d5321e24ca5eda8224cf5e32e096f326a2342ce49837ab2985,Verbose:false,}" file="otel-collector/interceptors.go:62" id=28820f3a-8be4-408b-8a85-515a7ffae8b3 name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 18 12:57:32 ha-328109 crio[3851]: time="2024-03-18 12:57:32.457792644Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:4506a10fb16676d5321e24ca5eda8224cf5e32e096f326a2342ce49837ab2985,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1710766499554206425,StartedAt:1710766499836880322,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-scheduler:v1.28.4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90e740be10e7ccb198e1e310b9749e68,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kuber
netes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/90e740be10e7ccb198e1e310b9749e68/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/90e740be10e7ccb198e1e310b9749e68/containers/kube-scheduler/1131cf5b,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-scheduler-ha-328109_90e740be10e7ccb198e1e310b9749e68/kube-scheduler/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,
CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=28820f3a-8be4-408b-8a85-515a7ffae8b3 name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 18 12:57:32 ha-328109 crio[3851]: time="2024-03-18 12:57:32.461315652Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:fc22b73970f8bb427a03d4e37b8a268623ff9be743eec8bbda4c734eecadba72,Verbose:false,}" file="otel-collector/interceptors.go:62" id=04cd7a2b-fe58-42f2-a0cb-4dbdffdcf7b2 name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 18 12:57:32 ha-328109 crio[3851]: time="2024-03-18 12:57:32.461656118Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:fc22b73970f8bb427a03d4e37b8a268623ff9be743eec8bbda4c734eecadba72,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1710766499553365372,StartedAt:1710766499768201619,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/coredns/coredns:v1.10.1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-p5xgj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a865f86-96cf-4687-9283-d2ebe5616d1a,},Annotations:map[string]string{io.kubernetes.container.hash: b948acd7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"container
Port\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/coredns,HostPath:/var/lib/kubelet/pods/9a865f86-96cf-4687-9283-d2ebe5616d1a/volumes/kubernetes.io~configmap/config-volume,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/9a865f86-96cf-4687-9283-d2ebe5616d1a/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/9a865f86-96cf-4687-9283-d2ebe5616d1a/containers/coredns/8bb22d6e,Readonly:false,SelinuxRelabel:false,Propagation:PROPAG
ATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/9a865f86-96cf-4687-9283-d2ebe5616d1a/volumes/kubernetes.io~projected/kube-api-access-sxnbr,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_coredns-5dd5756b68-p5xgj_9a865f86-96cf-4687-9283-d2ebe5616d1a/coredns/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:178257920,OomScoreAdj:967,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:178257920,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=04cd7a2b-fe58-42f2-a0cb-4dbdffdcf7b2 name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 18 12:57:32 ha-328109 crio[3851]: time="2024-03-18 12:57:32.462386006Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:999a93802a1030f438fcc2bf9271cbe9c919c4d10f93ba808cc05297ef9001ec,Verbose:false,}" file="otel-collector/interceptors.go:62" id=c33224a4-fe2f-420d-b414-2a595ff923f7 name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 18 12:57:32 ha-328109 crio[3851]: time="2024-03-18 12:57:32.462603443Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:999a93802a1030f438fcc2bf9271cbe9c919c4d10f93ba808cc05297ef9001ec,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1710766499347889809,StartedAt:1710766499731164502,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/etcd:3.5.9-0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e31f7b77f2cd8547e7aa12e86f29a80,},Annotations:map[string]string{io.kubernetes.container.hash: a6edf2fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy
: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/9e31f7b77f2cd8547e7aa12e86f29a80/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/9e31f7b77f2cd8547e7aa12e86f29a80/containers/etcd/33f02bde,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/etcd,HostPath:/var/lib/minikube/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs/etcd,HostPath:/var/lib/minikube/certs/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_etcd-ha-328109_9e31f7
b77f2cd8547e7aa12e86f29a80/etcd/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=c33224a4-fe2f-420d-b414-2a595ff923f7 name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 18 12:57:32 ha-328109 crio[3851]: time="2024-03-18 12:57:32.535588623Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=42a4c5f5-944d-421f-905c-0563413e4ee5 name=/runtime.v1.RuntimeService/Version
	Mar 18 12:57:32 ha-328109 crio[3851]: time="2024-03-18 12:57:32.536183491Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=42a4c5f5-944d-421f-905c-0563413e4ee5 name=/runtime.v1.RuntimeService/Version
	Mar 18 12:57:32 ha-328109 crio[3851]: time="2024-03-18 12:57:32.538187371Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4a3205f8-f360-41e1-8d77-7d4176ad512c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 12:57:32 ha-328109 crio[3851]: time="2024-03-18 12:57:32.538640538Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710766652538619702,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4a3205f8-f360-41e1-8d77-7d4176ad512c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 12:57:32 ha-328109 crio[3851]: time="2024-03-18 12:57:32.539281058Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c16ad5ec-5f01-44ca-be08-33d05f419db6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:57:32 ha-328109 crio[3851]: time="2024-03-18 12:57:32.539336028Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c16ad5ec-5f01-44ca-be08-33d05f419db6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 12:57:32 ha-328109 crio[3851]: time="2024-03-18 12:57:32.539743956Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ea108d58a3b234c0a0aa9835ddc50f9701d8677f0f71cf4a0b341c8408bdc220,PodSandboxId:9528776ae09e3d86c54b708cae447fa7ecddd1f1e9936f355919903ccce52807,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710766578192740100,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90ce7ae6-4ac4-4c14-b2df-1a182f4d8086,},Annotations:map[string]string{io.kubernetes.container.hash: ed6ee57,io.kubernetes.container.restartCount: 4,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0852ff1b3a7566b792fe5630abc27f79ebadf267fb8c81bc86dd84a71da2c11d,PodSandboxId:36a87ec33cb1a5df3431b0f399ed41ef482fbd8d27ced950186cfa118964465d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710766558199627731,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnv5b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc2583b6-a5b3-4f53-bf54-6cc7611fc2a6,},Annotations:map[string]string{io.kubernetes.container.hash: 9aa5dbe1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e5b7e3fd47f4bbe14e7d94f794829b1574c8f826780e0c85d2bd0bd0088b1e0,PodSandboxId:37c1ac272d35c69afe5c8ab7f748f3a1d9310cc8eef1ce6398d6be4ab0f8b1cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710766543207774604,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f004a20401b95f693a90cc8d0b7e8acc,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deb997db5453b48f305b99734da3ba8a7fab972de98530d49064bdac432e8a08,PodSandboxId:ea022fd052d1562a0b5415522b6442f1fc4adb6a946b91b9a1b97d654f7c1245,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710766542189594357,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0919befc6ed870de46dfd820b38f0ac2,},Annotations:map[string]string{io.kubernetes.container.hash: 110d18ba,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e9ba73af3c0deb5fd36b57c17ac2ffa8d7cf075f05f25095bd3e2b9562928ad,PodSandboxId:9528776ae09e3d86c54b708cae447fa7ecddd1f1e9936f355919903ccce52807,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710766535201288625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90ce7ae6-4ac4-4c14-b2df-1a182f4d8086,},Annotations:map[string]string{io.kubernetes.container.hash: ed6ee57,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:262b1f5b882c6f972233f82ea6c8c37741119410b426deccb897e2c2ddef5bae,PodSandboxId:f57e4a0707ff2dfc3ab957755d401ec1a4e0bbbb7d59517a3f4fe4601d7d5ef8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710766532572756133,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-fz4kl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5a0215bb-df62-44b9-9d60-d45778880b8b,},Annotations:map[string]string{io.kubernetes.container.hash: 25c17d37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bde47888a6d380a6b21e060a28901be678b90bd9441d281802531d5d40ae7090,PodSandboxId:cf59423d7268db021441ccd23b8b2036b0ac62116b7b1a4d758ce0b602386af9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710766499797614520,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9ffc89cd42ea8da4e6070b43e0ace35,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:a736d02ea6c00f9dba7ff099370fb79b5a0a10daa5881ea70f382f4ef3b8777c,PodSandboxId:f90d0e204275b03bf497101219d5714c78a4b431332dfae63f2e69c096c794da,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710766499124671144,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dhz88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afb0afad-2b88-4abb-9039-aaf9c64ad920,},Annotations:map[string]string{io.kubernetes.container.hash: 34178776,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f6eaed81
eb434344922005711291d3960965a5b6f4210f844c7981f0c1f817c,PodSandboxId:5e6ea275123b34fed36cd49b3e2cde6def832872312bd46730db68ed6a2508ef,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710766499485293460,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c78nc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c1159dc-6545-41a6-bb4a-75fdab519c9e,},Annotations:map[string]string{io.kubernetes.container.hash: 5111e8b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4506a10fb16676d5321e24ca5eda8224cf5e32e096f326a2342ce49837ab2985,PodSandboxId:3cdd9461a9c1e7d693b563c139aba05dca5a597a66da89de1c27792c0daf86ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710766499349536355,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90e740be10e7ccb198e1e310b9749e68,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc22b73970f8bb427a03d4e37b8a268623ff9be743eec8bbda4c734eecadba72,PodSandboxId:df6e30eed668e970a9b759629e41489911069ac3b081f6040020882c07f9b027,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710766499416993376,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-p5xgj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a865f86-96cf-4687-9283-d2ebe5616d1a,},Annotations:map[string]string{io.kubernetes.container.hash: b948acd7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01d47c0073f4496a95a53f50e76c4c998777cfc82273e6437f07dcc8b326b896,PodSandboxId:ea022fd052d1562a0b5415522b6442f1fc4adb6a946b91b9a1b97d654f7c1245,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710766499063447609,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 0919befc6ed870de46dfd820b38f0ac2,},Annotations:map[string]string{io.kubernetes.container.hash: 110d18ba,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beca5c009540cfc74a33264d724cef9b109ee455809e11c09a8c296225794f65,PodSandboxId:37c1ac272d35c69afe5c8ab7f748f3a1d9310cc8eef1ce6398d6be4ab0f8b1cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710766499065238664,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: f004a20401b95f693a90cc8d0b7e8acc,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:999a93802a1030f438fcc2bf9271cbe9c919c4d10f93ba808cc05297ef9001ec,PodSandboxId:0309587cbe9bf9210f7ea34a08933b0df48ed944ce2c7b522048372147f3aa89,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710766499024633400,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e31f7b77f2cd8547e7aa12e86f29a80,},Anno
tations:map[string]string{io.kubernetes.container.hash: a6edf2fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cddb56f3c76f9dc0a6c993034b68480caf7493fa7a13c6edc72f2dc5289ba517,PodSandboxId:36a87ec33cb1a5df3431b0f399ed41ef482fbd8d27ced950186cfa118964465d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710766492596704153,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnv5b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc2583b6-a5b3-4f53-bf54-6cc7611fc2a6,},Annotations:map[string]string{io.kubern
etes.container.hash: 9aa5dbe1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:575d4d72c34ad10849775cbb98cf1577b733b01400518921cbab3e061da5b2cd,PodSandboxId:2f84d6cd36a0e19e1f074479696d192558cade4f5b4267d45bfa78281643ee69,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710766292205383311,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9ffc89cd42ea8da4e6070b43e0ace35,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kuberne
tes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5b3318798546b55a1e7fe3618fe7848b8cb4108312aa2f5354c7dbdc9103e72,PodSandboxId:10b35c5d18ac59942090a6917bedf01b1f31744cd5f0a3d39949835bf6108d5a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710765998607477170,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-fz4kl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5a0215bb-df62-44b9-9d60-d45778880b8b,},Annotations:map[string]string{io.kubernetes.container.hash: 25c17d37,io.kubernet
es.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82a8d2ac6a60c0d04e48a38416de7feb33d590cfcd74d28da2317aa1a5781135,PodSandboxId:b487ae421169c8afbdd3c57cd6781dfee8b050a5ec9476b5eb7d8d46c81511c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710765818122667843,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-p5xgj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a865f86-96cf-4687-9283-d2ebe5616d1a,},Annotations:map[string]string{io.kubernetes.container.hash: b948acd7,io.kubernetes.container.ports: [{\"name\":\
"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c5cd4a724230c91f476a1bb5326701801eff1b70dc4db0510f092d89ea1562,PodSandboxId:16503713d19863c7d11d4a566e3591316bf9bb87017c1247be871b73cd241150,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710765818092289225,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c78nc,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 7c1159dc-6545-41a6-bb4a-75fdab519c9e,},Annotations:map[string]string{io.kubernetes.container.hash: 5111e8b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8d915a384e6a3c259a15968303b0ddc686a9ced49722152813fc101b3c78cc6,PodSandboxId:35275a602be1c60babb8ca88eca935f3264c955bbf0347e589ea368f3036d635,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2
899304398e,State:CONTAINER_EXITED,CreatedAt:1710765812830565830,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dhz88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afb0afad-2b88-4abb-9039-aaf9c64ad920,},Annotations:map[string]string{io.kubernetes.container.hash: 34178776,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55e393cf77a1b472d984125ae3bd870d3fed9dca4eeefc346bda04ae88654205,PodSandboxId:8231d33571b5e6a87638a5647fcc9e70ced44830421377dda3555afca480b302,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,Create
dAt:1710765791394325942,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e31f7b77f2cd8547e7aa12e86f29a80,},Annotations:map[string]string{io.kubernetes.container.hash: a6edf2fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de552ed42d49524bbca97633e73d6ac4e5301a813a012290635def375a78dcd6,PodSandboxId:8cfa0459c6e2ae66756a8424cb981cdb5680680fc5907eba1b8d83cfdd1a7280,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710765791364285455,Labels:map[string]
string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90e740be10e7ccb198e1e310b9749e68,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c16ad5ec-5f01-44ca-be08-33d05f419db6 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	ea108d58a3b23       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   9528776ae09e3       storage-provisioner
	0852ff1b3a756       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               3                   36a87ec33cb1a       kindnet-vnv5b
	7e5b7e3fd47f4       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      About a minute ago   Running             kube-controller-manager   2                   37c1ac272d35c       kube-controller-manager-ha-328109
	deb997db5453b       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      About a minute ago   Running             kube-apiserver            3                   ea022fd052d15       kube-apiserver-ha-328109
	1e9ba73af3c0d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Exited              storage-provisioner       3                   9528776ae09e3       storage-provisioner
	262b1f5b882c6       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 minutes ago        Running             busybox                   1                   f57e4a0707ff2       busybox-5b5d89c9d6-fz4kl
	bde47888a6d38       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      2 minutes ago        Running             kube-vip                  3                   cf59423d7268d       kube-vip-ha-328109
	6f6eaed81eb43       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      2 minutes ago        Running             coredns                   1                   5e6ea275123b3       coredns-5dd5756b68-c78nc
	fc22b73970f8b       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      2 minutes ago        Running             coredns                   1                   df6e30eed668e       coredns-5dd5756b68-p5xgj
	4506a10fb1667       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      2 minutes ago        Running             kube-scheduler            1                   3cdd9461a9c1e       kube-scheduler-ha-328109
	a736d02ea6c00       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      2 minutes ago        Running             kube-proxy                1                   f90d0e204275b       kube-proxy-dhz88
	beca5c009540c       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      2 minutes ago        Exited              kube-controller-manager   1                   37c1ac272d35c       kube-controller-manager-ha-328109
	01d47c0073f44       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      2 minutes ago        Exited              kube-apiserver            2                   ea022fd052d15       kube-apiserver-ha-328109
	999a93802a103       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      2 minutes ago        Running             etcd                      1                   0309587cbe9bf       etcd-ha-328109
	cddb56f3c76f9       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      2 minutes ago        Exited              kindnet-cni               2                   36a87ec33cb1a       kindnet-vnv5b
	575d4d72c34ad       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      6 minutes ago        Exited              kube-vip                  2                   2f84d6cd36a0e       kube-vip-ha-328109
	c5b3318798546       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   10b35c5d18ac5       busybox-5b5d89c9d6-fz4kl
	82a8d2ac6a60c       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago       Exited              coredns                   0                   b487ae421169c       coredns-5dd5756b68-p5xgj
	f2c5cd4a72423       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago       Exited              coredns                   0                   16503713d1986       coredns-5dd5756b68-c78nc
	f8d915a384e6a       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      13 minutes ago       Exited              kube-proxy                0                   35275a602be1c       kube-proxy-dhz88
	55e393cf77a1b       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      14 minutes ago       Exited              etcd                      0                   8231d33571b5e       etcd-ha-328109
	de552ed42d495       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      14 minutes ago       Exited              kube-scheduler            0                   8cfa0459c6e2a       kube-scheduler-ha-328109
	
	
	==> coredns [6f6eaed81eb434344922005711291d3960965a5b6f4210f844c7981f0c1f817c] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55799 - 41388 "HINFO IN 6639687177075769404.563070019675184471. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.020507334s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:53548->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [82a8d2ac6a60c0d04e48a38416de7feb33d590cfcd74d28da2317aa1a5781135] <==
	[INFO] 10.244.0.4:56631 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152161s
	[INFO] 10.244.0.4:45190 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103827s
	[INFO] 10.244.2.2:34185 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105521s
	[INFO] 10.244.2.2:44888 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000730863s
	[INFO] 10.244.1.2:40647 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000166359s
	[INFO] 10.244.1.2:57968 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001882507s
	[INFO] 10.244.1.2:55297 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000096788s
	[INFO] 10.244.1.2:36989 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000088322s
	[INFO] 10.244.1.2:37677 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000205894s
	[INFO] 10.244.1.2:32814 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000074605s
	[INFO] 10.244.1.2:44489 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000102528s
	[INFO] 10.244.0.4:53607 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000206955s
	[INFO] 10.244.2.2:47974 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000313502s
	[INFO] 10.244.1.2:49641 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000193514s
	[INFO] 10.244.1.2:52193 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000126417s
	[INFO] 10.244.1.2:55887 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000104434s
	[INFO] 10.244.0.4:43288 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014747s
	[INFO] 10.244.0.4:57574 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000192178s
	[INFO] 10.244.0.4:58440 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000128408s
	[INFO] 10.244.2.2:50297 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168343s
	[INFO] 10.244.2.2:37188 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000133774s
	[INFO] 10.244.1.2:33883 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000095091s
	[INFO] 10.244.1.2:45785 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000123693s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f2c5cd4a724230c91f476a1bb5326701801eff1b70dc4db0510f092d89ea1562] <==
	[INFO] 10.244.0.4:54630 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003946825s
	[INFO] 10.244.0.4:37807 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000185941s
	[INFO] 10.244.0.4:54881 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000227886s
	[INFO] 10.244.2.2:43048 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000261065s
	[INFO] 10.244.2.2:43023 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001686526s
	[INFO] 10.244.2.2:59097 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000204051s
	[INFO] 10.244.2.2:49621 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000262805s
	[INFO] 10.244.2.2:48119 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001371219s
	[INFO] 10.244.2.2:49912 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000148592s
	[INFO] 10.244.1.2:60652 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.0016374s
	[INFO] 10.244.0.4:55891 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000079534s
	[INFO] 10.244.0.4:53025 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000231262s
	[INFO] 10.244.0.4:39659 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000116818s
	[INFO] 10.244.2.2:48403 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125802s
	[INFO] 10.244.2.2:42106 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092079s
	[INFO] 10.244.2.2:41088 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000204572s
	[INFO] 10.244.1.2:60379 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000108875s
	[INFO] 10.244.0.4:42381 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00008263s
	[INFO] 10.244.2.2:47207 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000237181s
	[INFO] 10.244.2.2:44002 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000102925s
	[INFO] 10.244.1.2:54332 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126486s
	[INFO] 10.244.1.2:38590 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000245357s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1935&timeout=7m20s&timeoutSeconds=440&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> coredns [fc22b73970f8bb427a03d4e37b8a268623ff9be743eec8bbda4c734eecadba72] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:36261 - 4647 "HINFO IN 2596574517611928040.3468582005389048924. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.066906463s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host
	
	
	==> describe nodes <==
	Name:               ha-328109
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-328109
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	                    minikube.k8s.io/name=ha-328109
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T12_43_22_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 12:43:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-328109
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 12:57:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 12:55:54 +0000   Mon, 18 Mar 2024 12:43:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 12:55:54 +0000   Mon, 18 Mar 2024 12:43:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 12:55:54 +0000   Mon, 18 Mar 2024 12:43:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 12:55:54 +0000   Mon, 18 Mar 2024 12:43:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.253
	  Hostname:    ha-328109
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 a8b3a9b95f2141b891e3cee14aaad62e
	  System UUID:                a8b3a9b9-5f21-41b8-91e3-cee14aaad62e
	  Boot ID:                    906b8684-634a-4838-bb8e-d090694f9649
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-fz4kl             0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-5dd5756b68-c78nc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-5dd5756b68-p5xgj             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-ha-328109                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-vnv5b                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-328109             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-328109    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-dhz88                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-328109             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-328109                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 111s                   kube-proxy       
	  Normal   Starting                 14m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-328109 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node ha-328109 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-328109 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     14m                    kubelet          Node ha-328109 status is now: NodeHasSufficientPID
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  14m                    kubelet          Node ha-328109 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m                    kubelet          Node ha-328109 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           14m                    node-controller  Node ha-328109 event: Registered Node ha-328109 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-328109 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-328109 event: Registered Node ha-328109 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-328109 event: Registered Node ha-328109 in Controller
	  Warning  ContainerGCFailed        3m12s (x2 over 4m12s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           96s                    node-controller  Node ha-328109 event: Registered Node ha-328109 in Controller
	  Normal   RegisteredNode           96s                    node-controller  Node ha-328109 event: Registered Node ha-328109 in Controller
	  Normal   RegisteredNode           30s                    node-controller  Node ha-328109 event: Registered Node ha-328109 in Controller
	
	
	Name:               ha-328109-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-328109-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	                    minikube.k8s.io/name=ha-328109
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T12_44_39_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 12:44:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-328109-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 12:57:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 12:56:28 +0000   Mon, 18 Mar 2024 12:55:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 12:56:28 +0000   Mon, 18 Mar 2024 12:55:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 12:56:28 +0000   Mon, 18 Mar 2024 12:55:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 12:56:28 +0000   Mon, 18 Mar 2024 12:55:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.246
	  Hostname:    ha-328109-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 148457ca2d4c4c78bdc5b74dba85e93e
	  System UUID:                148457ca-2d4c-4c78-bdc5-b74dba85e93e
	  Boot ID:                    a393a97b-91e0-431e-93b5-6e815ca4673f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-sx4mf                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-328109-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-lc74t                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-328109-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-328109-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-7zgrx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-328109-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-328109-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  Starting                 85s                    kube-proxy       
	  Normal  RegisteredNode           12m                    node-controller  Node ha-328109-m02 event: Registered Node ha-328109-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-328109-m02 event: Registered Node ha-328109-m02 in Controller
	  Normal  NodeNotReady             9m8s                   node-controller  Node ha-328109-m02 status is now: NodeNotReady
	  Normal  NodeHasNoDiskPressure    2m14s (x8 over 2m14s)  kubelet          Node ha-328109-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m14s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m14s (x8 over 2m14s)  kubelet          Node ha-328109-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m14s (x7 over 2m14s)  kubelet          Node ha-328109-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           96s                    node-controller  Node ha-328109-m02 event: Registered Node ha-328109-m02 in Controller
	  Normal  RegisteredNode           96s                    node-controller  Node ha-328109-m02 event: Registered Node ha-328109-m02 in Controller
	  Normal  RegisteredNode           30s                    node-controller  Node ha-328109-m02 event: Registered Node ha-328109-m02 in Controller
	
	
	Name:               ha-328109-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-328109-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	                    minikube.k8s.io/name=ha-328109
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T12_45_51_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 12:45:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-328109-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 12:57:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 12:57:00 +0000   Mon, 18 Mar 2024 12:45:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 12:57:00 +0000   Mon, 18 Mar 2024 12:45:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 12:57:00 +0000   Mon, 18 Mar 2024 12:45:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 12:57:00 +0000   Mon, 18 Mar 2024 12:46:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.241
	  Hostname:    ha-328109-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 4fab87c426444aa8b3b6e0542502fa6e
	  System UUID:                4fab87c4-2644-4aa8-b3b6-e0542502fa6e
	  Boot ID:                    10be1dd9-4004-4954-a4c9-2df11a2b559e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-gv6tf                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-328109-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-t2pkv                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-328109-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-328109-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-zn8dk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-328109-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-328109-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 39s                kube-proxy       
	  Normal   RegisteredNode           11m                node-controller  Node ha-328109-m03 event: Registered Node ha-328109-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-328109-m03 event: Registered Node ha-328109-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-328109-m03 event: Registered Node ha-328109-m03 in Controller
	  Normal   RegisteredNode           96s                node-controller  Node ha-328109-m03 event: Registered Node ha-328109-m03 in Controller
	  Normal   RegisteredNode           96s                node-controller  Node ha-328109-m03 event: Registered Node ha-328109-m03 in Controller
	  Normal   Starting                 63s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  63s (x2 over 63s)  kubelet          Node ha-328109-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s (x2 over 63s)  kubelet          Node ha-328109-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s (x2 over 63s)  kubelet          Node ha-328109-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  63s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 63s                kubelet          Node ha-328109-m03 has been rebooted, boot id: 10be1dd9-4004-4954-a4c9-2df11a2b559e
	  Normal   RegisteredNode           30s                node-controller  Node ha-328109-m03 event: Registered Node ha-328109-m03 in Controller
	
	
	Name:               ha-328109-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-328109-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	                    minikube.k8s.io/name=ha-328109
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T12_47_16_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 12:47:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-328109-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 12:57:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 12:57:24 +0000   Mon, 18 Mar 2024 12:57:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 12:57:24 +0000   Mon, 18 Mar 2024 12:57:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 12:57:24 +0000   Mon, 18 Mar 2024 12:57:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 12:57:24 +0000   Mon, 18 Mar 2024 12:57:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.48
	  Hostname:    ha-328109-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 ac08f798ce4148b48f36040f95b7eaf9
	  System UUID:                ac08f798-ce41-48b4-8f36-040f95b7eaf9
	  Boot ID:                    d839d937-93b3-471f-bd80-ed3e21d7b7e5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-ggcw6       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-4fxbn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 5s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x5 over 10m)  kubelet          Node ha-328109-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x5 over 10m)  kubelet          Node ha-328109-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x5 over 10m)  kubelet          Node ha-328109-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node ha-328109-m04 event: Registered Node ha-328109-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-328109-m04 event: Registered Node ha-328109-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-328109-m04 event: Registered Node ha-328109-m04 in Controller
	  Normal   NodeReady                10m                kubelet          Node ha-328109-m04 status is now: NodeReady
	  Normal   RegisteredNode           96s                node-controller  Node ha-328109-m04 event: Registered Node ha-328109-m04 in Controller
	  Normal   RegisteredNode           96s                node-controller  Node ha-328109-m04 event: Registered Node ha-328109-m04 in Controller
	  Normal   NodeNotReady             56s                node-controller  Node ha-328109-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           30s                node-controller  Node ha-328109-m04 event: Registered Node ha-328109-m04 in Controller
	  Normal   Starting                 10s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  10s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeNotReady             10s                kubelet          Node ha-328109-m04 status is now: NodeNotReady
	  Warning  Rebooted                 9s (x3 over 10s)   kubelet          Node ha-328109-m04 has been rebooted, boot id: d839d937-93b3-471f-bd80-ed3e21d7b7e5
	  Normal   NodeHasSufficientMemory  9s (x4 over 10s)   kubelet          Node ha-328109-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s (x4 over 10s)   kubelet          Node ha-328109-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s (x4 over 10s)   kubelet          Node ha-328109-m04 status is now: NodeHasSufficientPID
	  Normal   NodeReady                9s (x2 over 9s)    kubelet          Node ha-328109-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.058901] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058875] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.159253] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.141446] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.251865] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[Mar18 12:43] systemd-fstab-generator[768]: Ignoring "noauto" option for root device
	[  +0.059542] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.985090] systemd-fstab-generator[943]: Ignoring "noauto" option for root device
	[  +1.363754] kauditd_printk_skb: 57 callbacks suppressed
	[  +7.738793] kauditd_printk_skb: 40 callbacks suppressed
	[  +1.856189] systemd-fstab-generator[1368]: Ignoring "noauto" option for root device
	[ +11.678244] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.089183] kauditd_printk_skb: 37 callbacks suppressed
	[Mar18 12:44] kauditd_printk_skb: 27 callbacks suppressed
	[Mar18 12:54] systemd-fstab-generator[3774]: Ignoring "noauto" option for root device
	[  +0.159104] systemd-fstab-generator[3786]: Ignoring "noauto" option for root device
	[  +0.192027] systemd-fstab-generator[3800]: Ignoring "noauto" option for root device
	[  +0.165625] systemd-fstab-generator[3812]: Ignoring "noauto" option for root device
	[  +0.263127] systemd-fstab-generator[3836]: Ignoring "noauto" option for root device
	[  +9.153822] systemd-fstab-generator[3940]: Ignoring "noauto" option for root device
	[  +0.088827] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.629321] kauditd_printk_skb: 22 callbacks suppressed
	[Mar18 12:55] kauditd_printk_skb: 83 callbacks suppressed
	[ +29.962164] kauditd_printk_skb: 5 callbacks suppressed
	[Mar18 12:56] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [55e393cf77a1b472d984125ae3bd870d3fed9dca4eeefc346bda04ae88654205] <==
	{"level":"info","ts":"2024-03-18T12:53:10.396885Z","caller":"traceutil/trace.go:171","msg":"trace[1217743190] range","detail":"{range_begin:/registry/namespaces/; range_end:/registry/namespaces0; }","duration":"8.528939872s","start":"2024-03-18T12:53:01.86794Z","end":"2024-03-18T12:53:10.39688Z","steps":["trace[1217743190] 'agreement among raft nodes before linearized reading'  (duration: 8.528897642s)"],"step_count":1}
	WARNING: 2024/03/18 12:53:10 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-03-18T12:53:10.376262Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.53533765s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets/\" range_end:\"/registry/secrets0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-03-18T12:53:10.407696Z","caller":"traceutil/trace.go:171","msg":"trace[763707145] range","detail":"{range_begin:/registry/secrets/; range_end:/registry/secrets0; }","duration":"8.566680294s","start":"2024-03-18T12:53:01.840918Z","end":"2024-03-18T12:53:10.407599Z","steps":["trace[763707145] 'agreement among raft nodes before linearized reading'  (duration: 8.535336727s)"],"step_count":1}
	WARNING: 2024/03/18 12:53:10 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-03-18T12:53:10.435057Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.253:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-18T12:53:10.435273Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.253:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-18T12:53:10.435548Z","caller":"etcdserver/server.go:1456","msg":"skipped leadership transfer; local server is not leader","local-member-id":"3773e8bb706c8f02","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-03-18T12:53:10.435817Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"4b4182f4aee369f6"}
	{"level":"info","ts":"2024-03-18T12:53:10.435961Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"4b4182f4aee369f6"}
	{"level":"info","ts":"2024-03-18T12:53:10.436184Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"4b4182f4aee369f6"}
	{"level":"info","ts":"2024-03-18T12:53:10.436391Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"3773e8bb706c8f02","remote-peer-id":"4b4182f4aee369f6"}
	{"level":"info","ts":"2024-03-18T12:53:10.43657Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3773e8bb706c8f02","remote-peer-id":"4b4182f4aee369f6"}
	{"level":"info","ts":"2024-03-18T12:53:10.436808Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"3773e8bb706c8f02","remote-peer-id":"4b4182f4aee369f6"}
	{"level":"info","ts":"2024-03-18T12:53:10.436974Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"4b4182f4aee369f6"}
	{"level":"info","ts":"2024-03-18T12:53:10.437058Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"1c5cd57f626d058a"}
	{"level":"info","ts":"2024-03-18T12:53:10.437462Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"1c5cd57f626d058a"}
	{"level":"info","ts":"2024-03-18T12:53:10.437623Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"1c5cd57f626d058a"}
	{"level":"info","ts":"2024-03-18T12:53:10.437879Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a"}
	{"level":"info","ts":"2024-03-18T12:53:10.437969Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a"}
	{"level":"info","ts":"2024-03-18T12:53:10.43802Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a"}
	{"level":"info","ts":"2024-03-18T12:53:10.438159Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"1c5cd57f626d058a"}
	{"level":"info","ts":"2024-03-18T12:53:10.440734Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.253:2380"}
	{"level":"info","ts":"2024-03-18T12:53:10.440973Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.253:2380"}
	{"level":"info","ts":"2024-03-18T12:53:10.441015Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"ha-328109","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.253:2380"],"advertise-client-urls":["https://192.168.39.253:2379"]}
	
	
	==> etcd [999a93802a1030f438fcc2bf9271cbe9c919c4d10f93ba808cc05297ef9001ec] <==
	{"level":"warn","ts":"2024-03-18T12:56:36.196213Z","caller":"etcdserver/cluster_util.go:288","msg":"failed to reach the peer URL","address":"https://192.168.39.241:2380/version","remote-member-id":"4b4182f4aee369f6","error":"Get \"https://192.168.39.241:2380/version\": dial tcp 192.168.39.241:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-18T12:56:36.196318Z","caller":"etcdserver/cluster_util.go:155","msg":"failed to get version","remote-member-id":"4b4182f4aee369f6","error":"Get \"https://192.168.39.241:2380/version\": dial tcp 192.168.39.241:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-18T12:56:39.726467Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"4b4182f4aee369f6","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"27.288716ms"}
	{"level":"warn","ts":"2024-03-18T12:56:39.726606Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"1c5cd57f626d058a","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"27.432474ms"}
	{"level":"info","ts":"2024-03-18T12:56:39.728864Z","caller":"traceutil/trace.go:171","msg":"trace[470299032] linearizableReadLoop","detail":"{readStateIndex:2803; appliedIndex:2803; }","duration":"103.635202ms","start":"2024-03-18T12:56:39.625204Z","end":"2024-03-18T12:56:39.72884Z","steps":["trace[470299032] 'read index received'  (duration: 103.631328ms)","trace[470299032] 'applied index is now lower than readState.Index'  (duration: 2.635µs)"],"step_count":2}
	{"level":"warn","ts":"2024-03-18T12:56:39.729335Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.10916ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/plndr-cp-lock\" ","response":"range_response_count:1 size:436"}
	{"level":"info","ts":"2024-03-18T12:56:39.729435Z","caller":"traceutil/trace.go:171","msg":"trace[1040339886] range","detail":"{range_begin:/registry/leases/kube-system/plndr-cp-lock; range_end:; response_count:1; response_revision:2392; }","duration":"104.238295ms","start":"2024-03-18T12:56:39.625179Z","end":"2024-03-18T12:56:39.729418Z","steps":["trace[1040339886] 'agreement among raft nodes before linearized reading'  (duration: 103.883534ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T12:56:39.729437Z","caller":"traceutil/trace.go:171","msg":"trace[109139705] transaction","detail":"{read_only:false; response_revision:2393; number_of_response:1; }","duration":"220.078887ms","start":"2024-03-18T12:56:39.509341Z","end":"2024-03-18T12:56:39.72942Z","steps":["trace[109139705] 'process raft request'  (duration: 219.89775ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T12:56:40.19842Z","caller":"etcdserver/cluster_util.go:288","msg":"failed to reach the peer URL","address":"https://192.168.39.241:2380/version","remote-member-id":"4b4182f4aee369f6","error":"Get \"https://192.168.39.241:2380/version\": dial tcp 192.168.39.241:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-18T12:56:40.198502Z","caller":"etcdserver/cluster_util.go:155","msg":"failed to get version","remote-member-id":"4b4182f4aee369f6","error":"Get \"https://192.168.39.241:2380/version\": dial tcp 192.168.39.241:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-18T12:56:40.299767Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"4b4182f4aee369f6","rtt":"0s","error":"dial tcp 192.168.39.241:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-18T12:56:40.303233Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"4b4182f4aee369f6","rtt":"0s","error":"dial tcp 192.168.39.241:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-18T12:56:44.199906Z","caller":"etcdserver/cluster_util.go:288","msg":"failed to reach the peer URL","address":"https://192.168.39.241:2380/version","remote-member-id":"4b4182f4aee369f6","error":"Get \"https://192.168.39.241:2380/version\": dial tcp 192.168.39.241:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-18T12:56:44.200263Z","caller":"etcdserver/cluster_util.go:155","msg":"failed to get version","remote-member-id":"4b4182f4aee369f6","error":"Get \"https://192.168.39.241:2380/version\": dial tcp 192.168.39.241:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-18T12:56:45.300286Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"4b4182f4aee369f6","rtt":"0s","error":"dial tcp 192.168.39.241:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-18T12:56:45.30386Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"4b4182f4aee369f6","rtt":"0s","error":"dial tcp 192.168.39.241:2380: connect: connection refused"}
	{"level":"info","ts":"2024-03-18T12:56:45.975036Z","caller":"traceutil/trace.go:171","msg":"trace[405758580] transaction","detail":"{read_only:false; response_revision:2410; number_of_response:1; }","duration":"100.099127ms","start":"2024-03-18T12:56:45.874916Z","end":"2024-03-18T12:56:45.975015Z","steps":["trace[405758580] 'process raft request'  (duration: 99.956977ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T12:56:46.31649Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"4b4182f4aee369f6"}
	{"level":"info","ts":"2024-03-18T12:56:46.316561Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"3773e8bb706c8f02","remote-peer-id":"4b4182f4aee369f6"}
	{"level":"info","ts":"2024-03-18T12:56:46.329546Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3773e8bb706c8f02","remote-peer-id":"4b4182f4aee369f6"}
	{"level":"info","ts":"2024-03-18T12:56:46.342805Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"3773e8bb706c8f02","to":"4b4182f4aee369f6","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-03-18T12:56:46.342878Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"3773e8bb706c8f02","remote-peer-id":"4b4182f4aee369f6"}
	{"level":"info","ts":"2024-03-18T12:56:46.34327Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"3773e8bb706c8f02","to":"4b4182f4aee369f6","stream-type":"stream Message"}
	{"level":"info","ts":"2024-03-18T12:56:46.343301Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"3773e8bb706c8f02","remote-peer-id":"4b4182f4aee369f6"}
	{"level":"info","ts":"2024-03-18T12:56:53.015525Z","caller":"traceutil/trace.go:171","msg":"trace[1934948209] transaction","detail":"{read_only:false; response_revision:2447; number_of_response:1; }","duration":"132.676107ms","start":"2024-03-18T12:56:52.882824Z","end":"2024-03-18T12:56:53.0155Z","steps":["trace[1934948209] 'process raft request'  (duration: 127.509722ms)"],"step_count":1}
	
	
	==> kernel <==
	 12:57:33 up 14 min,  0 users,  load average: 0.53, 0.53, 0.34
	Linux ha-328109 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [0852ff1b3a7566b792fe5630abc27f79ebadf267fb8c81bc86dd84a71da2c11d] <==
	I0318 12:56:59.272397       1 main.go:250] Node ha-328109-m04 has CIDR [10.244.3.0/24] 
	I0318 12:57:09.284550       1 main.go:223] Handling node with IPs: map[192.168.39.253:{}]
	I0318 12:57:09.284717       1 main.go:227] handling current node
	I0318 12:57:09.284784       1 main.go:223] Handling node with IPs: map[192.168.39.246:{}]
	I0318 12:57:09.284812       1 main.go:250] Node ha-328109-m02 has CIDR [10.244.1.0/24] 
	I0318 12:57:09.285037       1 main.go:223] Handling node with IPs: map[192.168.39.241:{}]
	I0318 12:57:09.285163       1 main.go:250] Node ha-328109-m03 has CIDR [10.244.2.0/24] 
	I0318 12:57:09.285263       1 main.go:223] Handling node with IPs: map[192.168.39.48:{}]
	I0318 12:57:09.285384       1 main.go:250] Node ha-328109-m04 has CIDR [10.244.3.0/24] 
	I0318 12:57:19.297367       1 main.go:223] Handling node with IPs: map[192.168.39.253:{}]
	I0318 12:57:19.297427       1 main.go:227] handling current node
	I0318 12:57:19.297470       1 main.go:223] Handling node with IPs: map[192.168.39.246:{}]
	I0318 12:57:19.297479       1 main.go:250] Node ha-328109-m02 has CIDR [10.244.1.0/24] 
	I0318 12:57:19.297663       1 main.go:223] Handling node with IPs: map[192.168.39.241:{}]
	I0318 12:57:19.297670       1 main.go:250] Node ha-328109-m03 has CIDR [10.244.2.0/24] 
	I0318 12:57:19.297723       1 main.go:223] Handling node with IPs: map[192.168.39.48:{}]
	I0318 12:57:19.297758       1 main.go:250] Node ha-328109-m04 has CIDR [10.244.3.0/24] 
	I0318 12:57:29.308565       1 main.go:223] Handling node with IPs: map[192.168.39.253:{}]
	I0318 12:57:29.308955       1 main.go:227] handling current node
	I0318 12:57:29.309008       1 main.go:223] Handling node with IPs: map[192.168.39.246:{}]
	I0318 12:57:29.309030       1 main.go:250] Node ha-328109-m02 has CIDR [10.244.1.0/24] 
	I0318 12:57:29.309317       1 main.go:223] Handling node with IPs: map[192.168.39.241:{}]
	I0318 12:57:29.309357       1 main.go:250] Node ha-328109-m03 has CIDR [10.244.2.0/24] 
	I0318 12:57:29.309436       1 main.go:223] Handling node with IPs: map[192.168.39.48:{}]
	I0318 12:57:29.309455       1 main.go:250] Node ha-328109-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [cddb56f3c76f9dc0a6c993034b68480caf7493fa7a13c6edc72f2dc5289ba517] <==
	I0318 12:54:53.051501       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0318 12:54:53.051617       1 main.go:107] hostIP = 192.168.39.253
	podIP = 192.168.39.253
	I0318 12:54:53.051863       1 main.go:116] setting mtu 1500 for CNI 
	I0318 12:54:53.051929       1 main.go:146] kindnetd IP family: "ipv4"
	I0318 12:54:53.051964       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0318 12:54:56.443656       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0318 12:54:56.444206       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0318 12:54:57.445273       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0318 12:54:59.447524       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0318 12:55:03.291499       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
	
	
	==> kube-apiserver [01d47c0073f4496a95a53f50e76c4c998777cfc82273e6437f07dcc8b326b896] <==
	I0318 12:55:00.213481       1 options.go:220] external host was not specified, using 192.168.39.253
	I0318 12:55:00.219474       1 server.go:148] Version: v1.28.4
	I0318 12:55:00.219563       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:55:00.633251       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0318 12:55:00.645045       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0318 12:55:00.645355       1 plugins.go:161] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0318 12:55:00.645726       1 instance.go:298] Using reconciler: lease
	W0318 12:55:20.632420       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0318 12:55:20.633587       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0318 12:55:20.647217       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0318 12:55:20.647232       1 instance.go:291] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [deb997db5453b48f305b99734da3ba8a7fab972de98530d49064bdac432e8a08] <==
	I0318 12:55:44.761213       1 controller.go:78] Starting OpenAPI AggregationController
	I0318 12:55:44.761409       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0318 12:55:44.761598       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0318 12:55:44.761930       1 apf_controller.go:372] Starting API Priority and Fairness config controller
	I0318 12:55:44.766643       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0318 12:55:44.766681       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0318 12:55:44.829618       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0318 12:55:44.858008       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0318 12:55:44.859648       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0318 12:55:44.861996       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0318 12:55:44.862195       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0318 12:55:44.859672       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0318 12:55:44.863456       1 aggregator.go:166] initial CRD sync complete...
	I0318 12:55:44.863529       1 autoregister_controller.go:141] Starting autoregister controller
	I0318 12:55:44.863553       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0318 12:55:44.863577       1 cache.go:39] Caches are synced for autoregister controller
	I0318 12:55:44.859681       1 shared_informer.go:318] Caches are synced for configmaps
	I0318 12:55:44.867176       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0318 12:55:44.889483       1 shared_informer.go:318] Caches are synced for node_authorizer
	W0318 12:55:44.892138       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.241]
	I0318 12:55:44.893823       1 controller.go:624] quota admission added evaluator for: endpoints
	I0318 12:55:44.902791       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0318 12:55:44.906330       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0318 12:55:45.767547       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0318 12:55:46.228928       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.241 192.168.39.246 192.168.39.253]
	
	
	==> kube-controller-manager [7e5b7e3fd47f4bbe14e7d94f794829b1574c8f826780e0c85d2bd0bd0088b1e0] <==
	I0318 12:55:57.961166       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-328109"
	I0318 12:55:57.961395       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-328109-m02"
	I0318 12:55:57.961623       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-328109-m03"
	I0318 12:55:57.961685       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-328109-m04"
	I0318 12:55:57.963205       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0318 12:55:57.963366       1 taint_manager.go:210] "Sending events to api server"
	I0318 12:55:57.968645       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0318 12:55:57.968717       1 event.go:307] "Event occurred" object="ha-328109" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-328109 event: Registered Node ha-328109 in Controller"
	I0318 12:55:57.968727       1 event.go:307] "Event occurred" object="ha-328109-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-328109-m02 event: Registered Node ha-328109-m02 in Controller"
	I0318 12:55:57.968738       1 event.go:307] "Event occurred" object="ha-328109-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-328109-m03 event: Registered Node ha-328109-m03 in Controller"
	I0318 12:55:57.968771       1 event.go:307] "Event occurred" object="ha-328109-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-328109-m04 event: Registered Node ha-328109-m04 in Controller"
	I0318 12:55:57.984768       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 12:55:58.029781       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 12:55:58.064257       1 shared_informer.go:318] Caches are synced for attach detach
	I0318 12:55:58.429021       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 12:55:58.434495       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 12:55:58.434568       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0318 12:56:04.762944       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="133.361µs"
	I0318 12:56:10.980173       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="41.875618ms"
	I0318 12:56:10.980330       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="84.187µs"
	I0318 12:56:31.246411       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="43.134442ms"
	I0318 12:56:31.246751       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="64.471µs"
	I0318 12:56:53.019495       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="143.153062ms"
	I0318 12:56:53.020679       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="139.936µs"
	I0318 12:57:24.105015       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-328109-m04"
	
	
	==> kube-controller-manager [beca5c009540cfc74a33264d724cef9b109ee455809e11c09a8c296225794f65] <==
	I0318 12:55:00.574497       1 serving.go:348] Generated self-signed cert in-memory
	I0318 12:55:01.004005       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0318 12:55:01.004056       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:55:01.006179       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0318 12:55:01.006312       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 12:55:01.006575       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 12:55:01.006875       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0318 12:55:21.653718       1 controllermanager.go:235] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.253:8443/healthz\": dial tcp 192.168.39.253:8443: connect: connection refused"
	
	
	==> kube-proxy [a736d02ea6c00f9dba7ff099370fb79b5a0a10daa5881ea70f382f4ef3b8777c] <==
	E0318 12:55:23.324725       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-328109": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 12:55:41.757041       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-328109": dial tcp 192.168.39.254:8443: connect: no route to host
	I0318 12:55:41.757275       1 server.go:969] "Can't determine this node's IP, assuming 127.0.0.1; if this is incorrect, please set the --bind-address flag"
	I0318 12:55:41.834741       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 12:55:41.834832       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 12:55:41.838691       1 server_others.go:152] "Using iptables Proxier"
	I0318 12:55:41.838848       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 12:55:41.839429       1 server.go:846] "Version info" version="v1.28.4"
	I0318 12:55:41.839483       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:55:41.841371       1 config.go:188] "Starting service config controller"
	I0318 12:55:41.841457       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 12:55:41.841499       1 config.go:97] "Starting endpoint slice config controller"
	I0318 12:55:41.841537       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 12:55:41.842687       1 config.go:315] "Starting node config controller"
	I0318 12:55:41.842739       1 shared_informer.go:311] Waiting for caches to sync for node config
	E0318 12:55:44.829065       1 event_broadcaster.go:274] Unable to write event: 'Post "https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events": dial tcp 192.168.39.254:8443: connect: no route to host' (may retry after sleeping)
	W0318 12:55:44.829288       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 12:55:44.829412       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 12:55:44.829515       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 12:55:44.829883       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-328109&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 12:55:44.831053       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-328109&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 12:55:44.831189       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	I0318 12:55:45.843220       1 shared_informer.go:318] Caches are synced for node config
	I0318 12:55:45.941761       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 12:55:46.342181       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-proxy [f8d915a384e6a3c259a15968303b0ddc686a9ced49722152813fc101b3c78cc6] <==
	E0318 12:52:06.718516       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 12:52:09.789257       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 12:52:09.789365       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 12:52:09.789452       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1851": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 12:52:09.789593       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1851": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 12:52:09.789458       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-328109&resourceVersion=1884": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 12:52:09.789857       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-328109&resourceVersion=1884": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 12:52:15.933628       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 12:52:15.933739       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 12:52:15.933649       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-328109&resourceVersion=1884": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 12:52:15.933786       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-328109&resourceVersion=1884": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 12:52:15.934211       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1851": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 12:52:15.934427       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1851": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 12:52:25.148914       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-328109&resourceVersion=1884": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 12:52:25.149038       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-328109&resourceVersion=1884": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 12:52:28.221252       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 12:52:28.221598       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 12:52:28.221702       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1851": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 12:52:28.221748       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1851": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 12:52:43.580362       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1851": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 12:52:43.580514       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1851": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 12:52:43.580924       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 12:52:43.580982       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 12:52:52.796882       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-328109&resourceVersion=1884": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 12:52:52.797006       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-328109&resourceVersion=1884": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [4506a10fb16676d5321e24ca5eda8224cf5e32e096f326a2342ce49837ab2985] <==
	W0318 12:55:38.666460       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://192.168.39.253:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.253:8443: connect: connection refused
	E0318 12:55:38.666592       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.253:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.253:8443: connect: connection refused
	W0318 12:55:39.336361       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.253:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.253:8443: connect: connection refused
	E0318 12:55:39.336451       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.253:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.253:8443: connect: connection refused
	W0318 12:55:40.547915       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: Get "https://192.168.39.253:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.253:8443: connect: connection refused
	E0318 12:55:40.547947       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.253:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.253:8443: connect: connection refused
	W0318 12:55:40.664960       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.253:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.253:8443: connect: connection refused
	E0318 12:55:40.665001       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.253:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.253:8443: connect: connection refused
	W0318 12:55:41.080587       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.39.253:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.253:8443: connect: connection refused
	E0318 12:55:41.080784       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.253:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.253:8443: connect: connection refused
	W0318 12:55:41.182057       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: Get "https://192.168.39.253:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.253:8443: connect: connection refused
	E0318 12:55:41.182169       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.253:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.253:8443: connect: connection refused
	W0318 12:55:41.873861       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.253:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.253:8443: connect: connection refused
	E0318 12:55:41.873929       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.253:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.253:8443: connect: connection refused
	W0318 12:55:42.056727       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://192.168.39.253:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.253:8443: connect: connection refused
	E0318 12:55:42.056819       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.253:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.253:8443: connect: connection refused
	W0318 12:55:42.147887       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://192.168.39.253:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.253:8443: connect: connection refused
	E0318 12:55:42.147971       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.253:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.253:8443: connect: connection refused
	W0318 12:55:42.253757       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://192.168.39.253:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.253:8443: connect: connection refused
	E0318 12:55:42.253811       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.253:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.253:8443: connect: connection refused
	W0318 12:55:44.790550       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0318 12:55:44.790610       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0318 12:55:44.803779       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0318 12:55:44.803832       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 12:56:05.065741       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [de552ed42d49524bbca97633e73d6ac4e5301a813a012290635def375a78dcd6] <==
	W0318 12:53:06.872584       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0318 12:53:06.872634       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0318 12:53:07.156392       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0318 12:53:07.157374       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0318 12:53:07.157595       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0318 12:53:07.157630       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0318 12:53:07.177204       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0318 12:53:07.177417       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0318 12:53:07.264977       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0318 12:53:07.265222       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0318 12:53:07.348954       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0318 12:53:07.349173       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0318 12:53:07.399555       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0318 12:53:07.399582       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0318 12:53:07.603911       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0318 12:53:07.603964       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0318 12:53:09.751921       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0318 12:53:09.752006       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0318 12:53:10.282015       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0318 12:53:10.282065       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0318 12:53:10.300475       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0318 12:53:10.300553       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0318 12:53:10.346771       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0318 12:53:10.346873       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0318 12:53:10.347199       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Mar 18 12:55:44 ha-328109 kubelet[1375]: W0318 12:55:44.827612    1375 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-328109&resourceVersion=1918": dial tcp 192.168.39.254:8443: connect: no route to host
	Mar 18 12:55:44 ha-328109 kubelet[1375]: E0318 12:55:44.827716    1375 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-328109&resourceVersion=1918": dial tcp 192.168.39.254:8443: connect: no route to host
	Mar 18 12:55:44 ha-328109 kubelet[1375]: E0318 12:55:44.827837    1375 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"ha-328109\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-328109?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Mar 18 12:55:44 ha-328109 kubelet[1375]: E0318 12:55:44.827852    1375 kubelet_node_status.go:527] "Unable to update node status" err="update node status exceeds retry count"
	Mar 18 12:55:44 ha-328109 kubelet[1375]: I0318 12:55:44.827921    1375 status_manager.go:853] "Failed to get status for pod" podUID="afb0afad-2b88-4abb-9039-aaf9c64ad920" pod="kube-system/kube-proxy-dhz88" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dhz88\": dial tcp 192.168.39.254:8443: connect: no route to host"
	Mar 18 12:55:44 ha-328109 kubelet[1375]: E0318 12:55:44.828721    1375 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-328109?timeout=10s\": dial tcp 192.168.39.254:8443: connect: no route to host" interval="7s"
	Mar 18 12:55:47 ha-328109 kubelet[1375]: I0318 12:55:47.178465    1375 scope.go:117] "RemoveContainer" containerID="cddb56f3c76f9dc0a6c993034b68480caf7493fa7a13c6edc72f2dc5289ba517"
	Mar 18 12:55:47 ha-328109 kubelet[1375]: E0318 12:55:47.178760    1375 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-vnv5b_kube-system(fc2583b6-a5b3-4f53-bf54-6cc7611fc2a6)\"" pod="kube-system/kindnet-vnv5b" podUID="fc2583b6-a5b3-4f53-bf54-6cc7611fc2a6"
	Mar 18 12:55:49 ha-328109 kubelet[1375]: I0318 12:55:49.178330    1375 scope.go:117] "RemoveContainer" containerID="1e9ba73af3c0deb5fd36b57c17ac2ffa8d7cf075f05f25095bd3e2b9562928ad"
	Mar 18 12:55:49 ha-328109 kubelet[1375]: E0318 12:55:49.178630    1375 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(90ce7ae6-4ac4-4c14-b2df-1a182f4d8086)\"" pod="kube-system/storage-provisioner" podUID="90ce7ae6-4ac4-4c14-b2df-1a182f4d8086"
	Mar 18 12:55:58 ha-328109 kubelet[1375]: I0318 12:55:58.177930    1375 scope.go:117] "RemoveContainer" containerID="cddb56f3c76f9dc0a6c993034b68480caf7493fa7a13c6edc72f2dc5289ba517"
	Mar 18 12:56:03 ha-328109 kubelet[1375]: I0318 12:56:03.179345    1375 scope.go:117] "RemoveContainer" containerID="1e9ba73af3c0deb5fd36b57c17ac2ffa8d7cf075f05f25095bd3e2b9562928ad"
	Mar 18 12:56:03 ha-328109 kubelet[1375]: E0318 12:56:03.179598    1375 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(90ce7ae6-4ac4-4c14-b2df-1a182f4d8086)\"" pod="kube-system/storage-provisioner" podUID="90ce7ae6-4ac4-4c14-b2df-1a182f4d8086"
	Mar 18 12:56:05 ha-328109 kubelet[1375]: I0318 12:56:05.956570    1375 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-5b5d89c9d6-fz4kl" podStartSLOduration=569.255106299 podCreationTimestamp="2024-03-18 12:46:34 +0000 UTC" firstStartedPulling="2024-03-18 12:46:35.888872208 +0000 UTC m=+194.968085944" lastFinishedPulling="2024-03-18 12:46:38.589829892 +0000 UTC m=+197.669043627" observedRunningTime="2024-03-18 12:46:39.187934303 +0000 UTC m=+198.267148058" watchObservedRunningTime="2024-03-18 12:56:05.956063982 +0000 UTC m=+765.035277736"
	Mar 18 12:56:18 ha-328109 kubelet[1375]: I0318 12:56:18.178579    1375 scope.go:117] "RemoveContainer" containerID="1e9ba73af3c0deb5fd36b57c17ac2ffa8d7cf075f05f25095bd3e2b9562928ad"
	Mar 18 12:56:21 ha-328109 kubelet[1375]: E0318 12:56:21.240267    1375 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 12:56:21 ha-328109 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 12:56:21 ha-328109 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 12:56:21 ha-328109 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 12:56:21 ha-328109 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 12:57:21 ha-328109 kubelet[1375]: E0318 12:57:21.243785    1375 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 12:57:21 ha-328109 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 12:57:21 ha-328109 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 12:57:21 ha-328109 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 12:57:21 ha-328109 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 12:57:31.905092 1132457 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18429-1106816/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
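The "bufio.Scanner: token too long" error in the stderr block above is the stock Go error returned when a single line exceeds bufio.Scanner's default 64 KiB token limit; lastStart.txt evidently contains a longer line. A minimal, self-contained sketch of reading such a file with a larger per-line buffer (the path is copied from the error message; this is an illustration only, not minikube's own log reader):

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	func main() {
		// path taken from the error message above
		f, err := os.Open("/home/jenkins/minikube-integration/18429-1106816/.minikube/logs/lastStart.txt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// raise the per-token limit from the 64 KiB default to 1 MiB so a very long
		// log line no longer triggers "bufio.Scanner: token too long"
		sc.Buffer(make([]byte, 0, 64*1024), 1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}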
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-328109 -n ha-328109
helpers_test.go:261: (dbg) Run:  kubectl --context ha-328109 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (387.88s)
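The failure signature running through the post-mortem logs above is the control-plane endpoint becoming unreachable during the restart: kube-proxy and the kubelet repeatedly report "dial tcp 192.168.39.254:8443: connect: no route to host" against control-plane.minikube.internal, while the scheduler sees "connection refused" on the node's own address 192.168.39.253:8443. A minimal sketch of the kind of TCP reachability probe those components are failing (the address is taken from the logs; this is a standalone illustration, not part of the test suite):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// control-plane address seen in the kube-proxy/kubelet logs above
		addr := "192.168.39.254:8443"
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err != nil {
			// e.g. "connect: no route to host" while the VM network is down
			fmt.Printf("%s unreachable: %v\n", addr, err)
			return
		}
		conn.Close()
		fmt.Printf("%s reachable\n", addr)
	}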

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (142.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 stop -v=7 --alsologtostderr
E0318 12:59:30.297637 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-328109 stop -v=7 --alsologtostderr: exit status 82 (2m0.499272715s)

                                                
                                                
-- stdout --
	* Stopping node "ha-328109-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 12:57:52.479809 1132852 out.go:291] Setting OutFile to fd 1 ...
	I0318 12:57:52.480112 1132852 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:57:52.480123 1132852 out.go:304] Setting ErrFile to fd 2...
	I0318 12:57:52.480128 1132852 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:57:52.480393 1132852 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
	I0318 12:57:52.480702 1132852 out.go:298] Setting JSON to false
	I0318 12:57:52.480796 1132852 mustload.go:65] Loading cluster: ha-328109
	I0318 12:57:52.481161 1132852 config.go:182] Loaded profile config "ha-328109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:57:52.481250 1132852 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/config.json ...
	I0318 12:57:52.481429 1132852 mustload.go:65] Loading cluster: ha-328109
	I0318 12:57:52.481582 1132852 config.go:182] Loaded profile config "ha-328109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:57:52.481616 1132852 stop.go:39] StopHost: ha-328109-m04
	I0318 12:57:52.482004 1132852 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:57:52.482057 1132852 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:57:52.497934 1132852 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43583
	I0318 12:57:52.498416 1132852 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:57:52.499167 1132852 main.go:141] libmachine: Using API Version  1
	I0318 12:57:52.499199 1132852 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:57:52.499597 1132852 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:57:52.502214 1132852 out.go:177] * Stopping node "ha-328109-m04"  ...
	I0318 12:57:52.503547 1132852 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0318 12:57:52.503586 1132852 main.go:141] libmachine: (ha-328109-m04) Calling .DriverName
	I0318 12:57:52.503873 1132852 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0318 12:57:52.503906 1132852 main.go:141] libmachine: (ha-328109-m04) Calling .GetSSHHostname
	I0318 12:57:52.506937 1132852 main.go:141] libmachine: (ha-328109-m04) DBG | domain ha-328109-m04 has defined MAC address 52:54:00:07:cc:71 in network mk-ha-328109
	I0318 12:57:52.507373 1132852 main.go:141] libmachine: (ha-328109-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cc:71", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:57:18 +0000 UTC Type:0 Mac:52:54:00:07:cc:71 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-328109-m04 Clientid:01:52:54:00:07:cc:71}
	I0318 12:57:52.507410 1132852 main.go:141] libmachine: (ha-328109-m04) DBG | domain ha-328109-m04 has defined IP address 192.168.39.48 and MAC address 52:54:00:07:cc:71 in network mk-ha-328109
	I0318 12:57:52.507555 1132852 main.go:141] libmachine: (ha-328109-m04) Calling .GetSSHPort
	I0318 12:57:52.507745 1132852 main.go:141] libmachine: (ha-328109-m04) Calling .GetSSHKeyPath
	I0318 12:57:52.507913 1132852 main.go:141] libmachine: (ha-328109-m04) Calling .GetSSHUsername
	I0318 12:57:52.508071 1132852 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m04/id_rsa Username:docker}
	I0318 12:57:52.596803 1132852 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0318 12:57:52.651885 1132852 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0318 12:57:52.706806 1132852 main.go:141] libmachine: Stopping "ha-328109-m04"...
	I0318 12:57:52.706840 1132852 main.go:141] libmachine: (ha-328109-m04) Calling .GetState
	I0318 12:57:52.708532 1132852 main.go:141] libmachine: (ha-328109-m04) Calling .Stop
	I0318 12:57:52.712233 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 0/120
	I0318 12:57:53.713699 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 1/120
	I0318 12:57:54.715191 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 2/120
	I0318 12:57:55.716468 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 3/120
	I0318 12:57:56.717668 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 4/120
	I0318 12:57:57.720012 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 5/120
	I0318 12:57:58.721398 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 6/120
	I0318 12:57:59.722635 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 7/120
	I0318 12:58:00.723965 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 8/120
	I0318 12:58:01.725303 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 9/120
	I0318 12:58:02.727643 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 10/120
	I0318 12:58:03.729007 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 11/120
	I0318 12:58:04.730300 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 12/120
	I0318 12:58:05.731540 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 13/120
	I0318 12:58:06.732928 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 14/120
	I0318 12:58:07.734816 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 15/120
	I0318 12:58:08.736008 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 16/120
	I0318 12:58:09.737634 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 17/120
	I0318 12:58:10.739012 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 18/120
	I0318 12:58:11.740334 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 19/120
	I0318 12:58:12.742614 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 20/120
	I0318 12:58:13.744029 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 21/120
	I0318 12:58:14.745365 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 22/120
	I0318 12:58:15.747253 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 23/120
	I0318 12:58:16.748466 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 24/120
	I0318 12:58:17.750275 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 25/120
	I0318 12:58:18.751550 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 26/120
	I0318 12:58:19.753066 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 27/120
	I0318 12:58:20.754855 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 28/120
	I0318 12:58:21.756794 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 29/120
	I0318 12:58:22.758909 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 30/120
	I0318 12:58:23.761185 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 31/120
	I0318 12:58:24.762776 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 32/120
	I0318 12:58:25.764129 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 33/120
	I0318 12:58:26.765557 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 34/120
	I0318 12:58:27.767673 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 35/120
	I0318 12:58:28.769169 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 36/120
	I0318 12:58:29.770641 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 37/120
	I0318 12:58:30.771965 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 38/120
	I0318 12:58:31.773566 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 39/120
	I0318 12:58:32.775668 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 40/120
	I0318 12:58:33.777274 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 41/120
	I0318 12:58:34.778554 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 42/120
	I0318 12:58:35.779896 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 43/120
	I0318 12:58:36.781288 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 44/120
	I0318 12:58:37.783311 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 45/120
	I0318 12:58:38.784778 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 46/120
	I0318 12:58:39.787042 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 47/120
	I0318 12:58:40.788320 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 48/120
	I0318 12:58:41.789820 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 49/120
	I0318 12:58:42.791844 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 50/120
	I0318 12:58:43.793800 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 51/120
	I0318 12:58:44.795027 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 52/120
	I0318 12:58:45.796528 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 53/120
	I0318 12:58:46.797779 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 54/120
	I0318 12:58:47.799757 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 55/120
	I0318 12:58:48.801823 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 56/120
	I0318 12:58:49.803212 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 57/120
	I0318 12:58:50.804627 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 58/120
	I0318 12:58:51.807184 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 59/120
	I0318 12:58:52.808747 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 60/120
	I0318 12:58:53.810931 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 61/120
	I0318 12:58:54.812240 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 62/120
	I0318 12:58:55.813682 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 63/120
	I0318 12:58:56.815045 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 64/120
	I0318 12:58:57.816930 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 65/120
	I0318 12:58:58.818881 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 66/120
	I0318 12:58:59.820031 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 67/120
	I0318 12:59:00.821354 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 68/120
	I0318 12:59:01.822813 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 69/120
	I0318 12:59:02.824935 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 70/120
	I0318 12:59:03.826369 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 71/120
	I0318 12:59:04.827667 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 72/120
	I0318 12:59:05.829016 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 73/120
	I0318 12:59:06.830872 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 74/120
	I0318 12:59:07.832602 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 75/120
	I0318 12:59:08.834700 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 76/120
	I0318 12:59:09.836049 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 77/120
	I0318 12:59:10.837674 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 78/120
	I0318 12:59:11.839484 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 79/120
	I0318 12:59:12.841636 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 80/120
	I0318 12:59:13.843239 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 81/120
	I0318 12:59:14.844707 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 82/120
	I0318 12:59:15.846077 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 83/120
	I0318 12:59:16.847381 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 84/120
	I0318 12:59:17.849371 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 85/120
	I0318 12:59:18.851582 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 86/120
	I0318 12:59:19.853182 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 87/120
	I0318 12:59:20.854642 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 88/120
	I0318 12:59:21.855988 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 89/120
	I0318 12:59:22.858232 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 90/120
	I0318 12:59:23.860261 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 91/120
	I0318 12:59:24.861519 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 92/120
	I0318 12:59:25.862979 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 93/120
	I0318 12:59:26.864596 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 94/120
	I0318 12:59:27.866592 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 95/120
	I0318 12:59:28.868175 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 96/120
	I0318 12:59:29.869978 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 97/120
	I0318 12:59:30.871335 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 98/120
	I0318 12:59:31.872837 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 99/120
	I0318 12:59:32.875348 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 100/120
	I0318 12:59:33.876741 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 101/120
	I0318 12:59:34.877942 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 102/120
	I0318 12:59:35.880138 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 103/120
	I0318 12:59:36.881589 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 104/120
	I0318 12:59:37.883521 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 105/120
	I0318 12:59:38.885853 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 106/120
	I0318 12:59:39.887031 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 107/120
	I0318 12:59:40.888628 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 108/120
	I0318 12:59:41.890123 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 109/120
	I0318 12:59:42.892313 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 110/120
	I0318 12:59:43.893950 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 111/120
	I0318 12:59:44.895438 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 112/120
	I0318 12:59:45.896850 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 113/120
	I0318 12:59:46.898232 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 114/120
	I0318 12:59:47.900155 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 115/120
	I0318 12:59:48.902469 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 116/120
	I0318 12:59:49.903920 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 117/120
	I0318 12:59:50.905239 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 118/120
	I0318 12:59:51.907118 1132852 main.go:141] libmachine: (ha-328109-m04) Waiting for machine to stop 119/120
	I0318 12:59:52.907739 1132852 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0318 12:59:52.907846 1132852 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0318 12:59:52.910009 1132852 out.go:177] 
	W0318 12:59:52.911572 1132852 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0318 12:59:52.911585 1132852 out.go:239] * 
	* 
	W0318 12:59:52.915920 1132852 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 12:59:52.917383 1132852 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-328109 stop -v=7 --alsologtostderr": exit status 82
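Exit status 82 corresponds to the GUEST_STOP_TIMEOUT shown in the stderr above: the stop command polled the m04 VM ("Waiting for machine to stop 0/120" through "119/120", roughly once per second given the 2m0.5s elapsed time) and then gave up with the machine still "Running". A purely illustrative, runnable sketch of that bounded-wait pattern (isStopped is a stand-in for the real libmachine driver call, which is an assumption here):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitForStop polls isStopped once per second for at most `attempts` rounds,
	// mirroring the "Waiting for machine to stop N/120" lines in the log above.
	func waitForStop(isStopped func() bool, attempts int) error {
		for i := 0; i < attempts; i++ {
			if isStopped() {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
			time.Sleep(1 * time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		// with a state function that never reports stopped, this reproduces the
		// timeout outcome after roughly two minutes
		err := waitForStop(func() bool { return false }, 120)
		fmt.Println("stop err:", err)
	}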
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-328109 status -v=7 --alsologtostderr: exit status 3 (19.093196045s)

                                                
                                                
-- stdout --
	ha-328109
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-328109-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-328109-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 12:59:52.981959 1133190 out.go:291] Setting OutFile to fd 1 ...
	I0318 12:59:52.982262 1133190 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:59:52.982273 1133190 out.go:304] Setting ErrFile to fd 2...
	I0318 12:59:52.982278 1133190 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:59:52.982476 1133190 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
	I0318 12:59:52.982664 1133190 out.go:298] Setting JSON to false
	I0318 12:59:52.982708 1133190 mustload.go:65] Loading cluster: ha-328109
	I0318 12:59:52.982751 1133190 notify.go:220] Checking for updates...
	I0318 12:59:52.983205 1133190 config.go:182] Loaded profile config "ha-328109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:59:52.983228 1133190 status.go:255] checking status of ha-328109 ...
	I0318 12:59:52.983787 1133190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:59:52.983863 1133190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:59:53.004449 1133190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37279
	I0318 12:59:53.005023 1133190 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:59:53.005649 1133190 main.go:141] libmachine: Using API Version  1
	I0318 12:59:53.005674 1133190 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:59:53.006147 1133190 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:59:53.006427 1133190 main.go:141] libmachine: (ha-328109) Calling .GetState
	I0318 12:59:53.008012 1133190 status.go:330] ha-328109 host status = "Running" (err=<nil>)
	I0318 12:59:53.008035 1133190 host.go:66] Checking if "ha-328109" exists ...
	I0318 12:59:53.008472 1133190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:59:53.008526 1133190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:59:53.023822 1133190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41377
	I0318 12:59:53.024252 1133190 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:59:53.024760 1133190 main.go:141] libmachine: Using API Version  1
	I0318 12:59:53.024787 1133190 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:59:53.025126 1133190 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:59:53.025300 1133190 main.go:141] libmachine: (ha-328109) Calling .GetIP
	I0318 12:59:53.027823 1133190 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:59:53.028255 1133190 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:59:53.028284 1133190 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:59:53.028414 1133190 host.go:66] Checking if "ha-328109" exists ...
	I0318 12:59:53.028738 1133190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:59:53.028786 1133190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:59:53.043363 1133190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40199
	I0318 12:59:53.043833 1133190 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:59:53.044269 1133190 main.go:141] libmachine: Using API Version  1
	I0318 12:59:53.044301 1133190 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:59:53.044624 1133190 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:59:53.044811 1133190 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:59:53.045016 1133190 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 12:59:53.045038 1133190 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:59:53.047530 1133190 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:59:53.047955 1133190 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:59:53.047976 1133190 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:59:53.048094 1133190 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:59:53.048260 1133190 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:59:53.048436 1133190 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:59:53.048574 1133190 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109/id_rsa Username:docker}
	I0318 12:59:53.134771 1133190 ssh_runner.go:195] Run: systemctl --version
	I0318 12:59:53.143109 1133190 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 12:59:53.162719 1133190 kubeconfig.go:125] found "ha-328109" server: "https://192.168.39.254:8443"
	I0318 12:59:53.162749 1133190 api_server.go:166] Checking apiserver status ...
	I0318 12:59:53.162781 1133190 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 12:59:53.180557 1133190 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5183/cgroup
	W0318 12:59:53.191683 1133190 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5183/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 12:59:53.191734 1133190 ssh_runner.go:195] Run: ls
	I0318 12:59:53.197959 1133190 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 12:59:53.204948 1133190 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 12:59:53.204977 1133190 status.go:422] ha-328109 apiserver status = Running (err=<nil>)
	I0318 12:59:53.204993 1133190 status.go:257] ha-328109 status: &{Name:ha-328109 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 12:59:53.205019 1133190 status.go:255] checking status of ha-328109-m02 ...
	I0318 12:59:53.205423 1133190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:59:53.205463 1133190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:59:53.221883 1133190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36395
	I0318 12:59:53.222251 1133190 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:59:53.222763 1133190 main.go:141] libmachine: Using API Version  1
	I0318 12:59:53.222783 1133190 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:59:53.223162 1133190 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:59:53.223355 1133190 main.go:141] libmachine: (ha-328109-m02) Calling .GetState
	I0318 12:59:53.224875 1133190 status.go:330] ha-328109-m02 host status = "Running" (err=<nil>)
	I0318 12:59:53.224896 1133190 host.go:66] Checking if "ha-328109-m02" exists ...
	I0318 12:59:53.225291 1133190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:59:53.225341 1133190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:59:53.240285 1133190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46083
	I0318 12:59:53.240783 1133190 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:59:53.241244 1133190 main.go:141] libmachine: Using API Version  1
	I0318 12:59:53.241266 1133190 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:59:53.241615 1133190 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:59:53.241801 1133190 main.go:141] libmachine: (ha-328109-m02) Calling .GetIP
	I0318 12:59:53.244426 1133190 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:59:53.244889 1133190 main.go:141] libmachine: (ha-328109-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:b0:42", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:55:05 +0000 UTC Type:0 Mac:52:54:00:8c:b0:42 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-328109-m02 Clientid:01:52:54:00:8c:b0:42}
	I0318 12:59:53.244919 1133190 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:59:53.245071 1133190 host.go:66] Checking if "ha-328109-m02" exists ...
	I0318 12:59:53.245541 1133190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:59:53.245592 1133190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:59:53.261112 1133190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43251
	I0318 12:59:53.261599 1133190 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:59:53.262149 1133190 main.go:141] libmachine: Using API Version  1
	I0318 12:59:53.262170 1133190 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:59:53.262466 1133190 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:59:53.262666 1133190 main.go:141] libmachine: (ha-328109-m02) Calling .DriverName
	I0318 12:59:53.262883 1133190 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 12:59:53.262912 1133190 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHHostname
	I0318 12:59:53.265471 1133190 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:59:53.265833 1133190 main.go:141] libmachine: (ha-328109-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:b0:42", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:55:05 +0000 UTC Type:0 Mac:52:54:00:8c:b0:42 Iaid: IPaddr:192.168.39.246 Prefix:24 Hostname:ha-328109-m02 Clientid:01:52:54:00:8c:b0:42}
	I0318 12:59:53.265857 1133190 main.go:141] libmachine: (ha-328109-m02) DBG | domain ha-328109-m02 has defined IP address 192.168.39.246 and MAC address 52:54:00:8c:b0:42 in network mk-ha-328109
	I0318 12:59:53.266011 1133190 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHPort
	I0318 12:59:53.266196 1133190 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHKeyPath
	I0318 12:59:53.266343 1133190 main.go:141] libmachine: (ha-328109-m02) Calling .GetSSHUsername
	I0318 12:59:53.266547 1133190 sshutil.go:53] new ssh client: &{IP:192.168.39.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m02/id_rsa Username:docker}
	I0318 12:59:53.355262 1133190 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 12:59:53.375381 1133190 kubeconfig.go:125] found "ha-328109" server: "https://192.168.39.254:8443"
	I0318 12:59:53.375407 1133190 api_server.go:166] Checking apiserver status ...
	I0318 12:59:53.375447 1133190 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 12:59:53.392401 1133190 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1417/cgroup
	W0318 12:59:53.406284 1133190 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1417/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 12:59:53.406353 1133190 ssh_runner.go:195] Run: ls
	I0318 12:59:53.412300 1133190 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0318 12:59:53.419373 1133190 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0318 12:59:53.419405 1133190 status.go:422] ha-328109-m02 apiserver status = Running (err=<nil>)
	I0318 12:59:53.419418 1133190 status.go:257] ha-328109-m02 status: &{Name:ha-328109-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 12:59:53.419471 1133190 status.go:255] checking status of ha-328109-m04 ...
	I0318 12:59:53.419914 1133190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:59:53.419965 1133190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:59:53.435966 1133190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44601
	I0318 12:59:53.436546 1133190 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:59:53.437149 1133190 main.go:141] libmachine: Using API Version  1
	I0318 12:59:53.437172 1133190 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:59:53.437577 1133190 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:59:53.437800 1133190 main.go:141] libmachine: (ha-328109-m04) Calling .GetState
	I0318 12:59:53.439449 1133190 status.go:330] ha-328109-m04 host status = "Running" (err=<nil>)
	I0318 12:59:53.439466 1133190 host.go:66] Checking if "ha-328109-m04" exists ...
	I0318 12:59:53.439854 1133190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:59:53.439895 1133190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:59:53.454574 1133190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44593
	I0318 12:59:53.454988 1133190 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:59:53.455536 1133190 main.go:141] libmachine: Using API Version  1
	I0318 12:59:53.455559 1133190 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:59:53.455862 1133190 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:59:53.456063 1133190 main.go:141] libmachine: (ha-328109-m04) Calling .GetIP
	I0318 12:59:53.458681 1133190 main.go:141] libmachine: (ha-328109-m04) DBG | domain ha-328109-m04 has defined MAC address 52:54:00:07:cc:71 in network mk-ha-328109
	I0318 12:59:53.459096 1133190 main.go:141] libmachine: (ha-328109-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cc:71", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:57:18 +0000 UTC Type:0 Mac:52:54:00:07:cc:71 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-328109-m04 Clientid:01:52:54:00:07:cc:71}
	I0318 12:59:53.459120 1133190 main.go:141] libmachine: (ha-328109-m04) DBG | domain ha-328109-m04 has defined IP address 192.168.39.48 and MAC address 52:54:00:07:cc:71 in network mk-ha-328109
	I0318 12:59:53.459295 1133190 host.go:66] Checking if "ha-328109-m04" exists ...
	I0318 12:59:53.459581 1133190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:59:53.459623 1133190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:59:53.474443 1133190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34161
	I0318 12:59:53.474872 1133190 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:59:53.475301 1133190 main.go:141] libmachine: Using API Version  1
	I0318 12:59:53.475320 1133190 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:59:53.475735 1133190 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:59:53.475961 1133190 main.go:141] libmachine: (ha-328109-m04) Calling .DriverName
	I0318 12:59:53.476192 1133190 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 12:59:53.476219 1133190 main.go:141] libmachine: (ha-328109-m04) Calling .GetSSHHostname
	I0318 12:59:53.479567 1133190 main.go:141] libmachine: (ha-328109-m04) DBG | domain ha-328109-m04 has defined MAC address 52:54:00:07:cc:71 in network mk-ha-328109
	I0318 12:59:53.480094 1133190 main.go:141] libmachine: (ha-328109-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:07:cc:71", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:57:18 +0000 UTC Type:0 Mac:52:54:00:07:cc:71 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:ha-328109-m04 Clientid:01:52:54:00:07:cc:71}
	I0318 12:59:53.480133 1133190 main.go:141] libmachine: (ha-328109-m04) DBG | domain ha-328109-m04 has defined IP address 192.168.39.48 and MAC address 52:54:00:07:cc:71 in network mk-ha-328109
	I0318 12:59:53.480317 1133190 main.go:141] libmachine: (ha-328109-m04) Calling .GetSSHPort
	I0318 12:59:53.480509 1133190 main.go:141] libmachine: (ha-328109-m04) Calling .GetSSHKeyPath
	I0318 12:59:53.480698 1133190 main.go:141] libmachine: (ha-328109-m04) Calling .GetSSHUsername
	I0318 12:59:53.480839 1133190 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109-m04/id_rsa Username:docker}
	W0318 13:00:12.012523 1133190 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.48:22: connect: no route to host
	W0318 13:00:12.012658 1133190 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.48:22: connect: no route to host
	E0318 13:00:12.012682 1133190 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.48:22: connect: no route to host
	I0318 13:00:12.012690 1133190 status.go:257] ha-328109-m04 status: &{Name:ha-328109-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0318 13:00:12.012714 1133190 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.48:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-328109 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-328109 -n ha-328109
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-328109 logs -n 25: (1.961673903s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-328109 ssh -n ha-328109-m02 sudo cat                                          | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | /home/docker/cp-test_ha-328109-m03_ha-328109-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-328109 cp ha-328109-m03:/home/docker/cp-test.txt                              | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109-m04:/home/docker/cp-test_ha-328109-m03_ha-328109-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-328109 ssh -n                                                                 | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-328109 ssh -n ha-328109-m04 sudo cat                                          | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | /home/docker/cp-test_ha-328109-m03_ha-328109-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-328109 cp testdata/cp-test.txt                                                | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-328109 ssh -n                                                                 | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-328109 cp ha-328109-m04:/home/docker/cp-test.txt                              | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1988805859/001/cp-test_ha-328109-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-328109 ssh -n                                                                 | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-328109 cp ha-328109-m04:/home/docker/cp-test.txt                              | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109:/home/docker/cp-test_ha-328109-m04_ha-328109.txt                       |           |         |         |                     |                     |
	| ssh     | ha-328109 ssh -n                                                                 | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-328109 ssh -n ha-328109 sudo cat                                              | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | /home/docker/cp-test_ha-328109-m04_ha-328109.txt                                 |           |         |         |                     |                     |
	| cp      | ha-328109 cp ha-328109-m04:/home/docker/cp-test.txt                              | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109-m02:/home/docker/cp-test_ha-328109-m04_ha-328109-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-328109 ssh -n                                                                 | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-328109 ssh -n ha-328109-m02 sudo cat                                          | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | /home/docker/cp-test_ha-328109-m04_ha-328109-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-328109 cp ha-328109-m04:/home/docker/cp-test.txt                              | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109-m03:/home/docker/cp-test_ha-328109-m04_ha-328109-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-328109 ssh -n                                                                 | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | ha-328109-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-328109 ssh -n ha-328109-m03 sudo cat                                          | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC | 18 Mar 24 12:47 UTC |
	|         | /home/docker/cp-test_ha-328109-m04_ha-328109-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-328109 node stop m02 -v=7                                                     | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:47 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-328109 node start m02 -v=7                                                    | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:50 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-328109 -v=7                                                           | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:51 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-328109 -v=7                                                                | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:51 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-328109 --wait=true -v=7                                                    | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:53 UTC | 18 Mar 24 12:57 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-328109                                                                | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:57 UTC |                     |
	| node    | ha-328109 node delete m03 -v=7                                                   | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:57 UTC | 18 Mar 24 12:57 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-328109 stop -v=7                                                              | ha-328109 | jenkins | v1.32.0 | 18 Mar 24 12:57 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 12:53:09
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 12:53:09.455626 1131437 out.go:291] Setting OutFile to fd 1 ...
	I0318 12:53:09.455754 1131437 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:53:09.455763 1131437 out.go:304] Setting ErrFile to fd 2...
	I0318 12:53:09.455768 1131437 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:53:09.455932 1131437 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
	I0318 12:53:09.456505 1131437 out.go:298] Setting JSON to false
	I0318 12:53:09.457502 1131437 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":16536,"bootTime":1710749853,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 12:53:09.457562 1131437 start.go:139] virtualization: kvm guest
	I0318 12:53:09.460198 1131437 out.go:177] * [ha-328109] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 12:53:09.461706 1131437 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 12:53:09.463291 1131437 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 12:53:09.461730 1131437 notify.go:220] Checking for updates...
	I0318 12:53:09.466076 1131437 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 12:53:09.467454 1131437 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18429-1106816/.minikube
	I0318 12:53:09.468769 1131437 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 12:53:09.470027 1131437 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 12:53:09.471895 1131437 config.go:182] Loaded profile config "ha-328109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:53:09.472040 1131437 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 12:53:09.472450 1131437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:53:09.472524 1131437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:53:09.494619 1131437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34951
	I0318 12:53:09.495071 1131437 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:53:09.495708 1131437 main.go:141] libmachine: Using API Version  1
	I0318 12:53:09.495731 1131437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:53:09.496100 1131437 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:53:09.496317 1131437 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:53:09.531013 1131437 out.go:177] * Using the kvm2 driver based on existing profile
	I0318 12:53:09.532476 1131437 start.go:297] selected driver: kvm2
	I0318 12:53:09.532497 1131437 start.go:901] validating driver "kvm2" against &{Name:ha-328109 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-328109 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.253 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.48 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 12:53:09.532677 1131437 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 12:53:09.533076 1131437 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 12:53:09.533173 1131437 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18429-1106816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 12:53:09.547948 1131437 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 12:53:09.548664 1131437 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 12:53:09.548749 1131437 cni.go:84] Creating CNI manager for ""
	I0318 12:53:09.548785 1131437 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0318 12:53:09.548856 1131437 start.go:340] cluster config:
	{Name:ha-328109 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-328109 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.253 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.48 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 12:53:09.548976 1131437 iso.go:125] acquiring lock: {Name:mke5f9989ad60de6f54f25c411af7da9f3932a4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 12:53:09.550863 1131437 out.go:177] * Starting "ha-328109" primary control-plane node in "ha-328109" cluster
	I0318 12:53:09.552228 1131437 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 12:53:09.552275 1131437 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0318 12:53:09.552287 1131437 cache.go:56] Caching tarball of preloaded images
	I0318 12:53:09.552403 1131437 preload.go:173] Found /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 12:53:09.552416 1131437 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0318 12:53:09.552551 1131437 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/config.json ...
	I0318 12:53:09.552757 1131437 start.go:360] acquireMachinesLock for ha-328109: {Name:mk0b1a2e71faf079d0c16c4e1393bdff17be3dfd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 12:53:09.552797 1131437 start.go:364] duration metric: took 22.129µs to acquireMachinesLock for "ha-328109"
	I0318 12:53:09.552811 1131437 start.go:96] Skipping create...Using existing machine configuration
	I0318 12:53:09.552819 1131437 fix.go:54] fixHost starting: 
	I0318 12:53:09.553112 1131437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:53:09.553151 1131437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:53:09.566551 1131437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40667
	I0318 12:53:09.566979 1131437 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:53:09.567472 1131437 main.go:141] libmachine: Using API Version  1
	I0318 12:53:09.567496 1131437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:53:09.567825 1131437 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:53:09.568032 1131437 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:53:09.568191 1131437 main.go:141] libmachine: (ha-328109) Calling .GetState
	I0318 12:53:09.569595 1131437 fix.go:112] recreateIfNeeded on ha-328109: state=Running err=<nil>
	W0318 12:53:09.569613 1131437 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 12:53:09.571595 1131437 out.go:177] * Updating the running kvm2 "ha-328109" VM ...
	I0318 12:53:09.573014 1131437 machine.go:94] provisionDockerMachine start ...
	I0318 12:53:09.573036 1131437 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:53:09.573236 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:53:09.575790 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:53:09.576291 1131437 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:53:09.576345 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:53:09.576487 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:53:09.576703 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:53:09.576870 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:53:09.577034 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:53:09.577188 1131437 main.go:141] libmachine: Using SSH client type: native
	I0318 12:53:09.577387 1131437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0318 12:53:09.577397 1131437 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 12:53:09.686081 1131437 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-328109
	
	I0318 12:53:09.686108 1131437 main.go:141] libmachine: (ha-328109) Calling .GetMachineName
	I0318 12:53:09.686351 1131437 buildroot.go:166] provisioning hostname "ha-328109"
	I0318 12:53:09.686375 1131437 main.go:141] libmachine: (ha-328109) Calling .GetMachineName
	I0318 12:53:09.686561 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:53:09.689129 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:53:09.689549 1131437 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:53:09.689575 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:53:09.689755 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:53:09.689953 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:53:09.690117 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:53:09.690258 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:53:09.690451 1131437 main.go:141] libmachine: Using SSH client type: native
	I0318 12:53:09.690650 1131437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0318 12:53:09.690667 1131437 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-328109 && echo "ha-328109" | sudo tee /etc/hostname
	I0318 12:53:09.820224 1131437 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-328109
	
	I0318 12:53:09.820258 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:53:09.822679 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:53:09.823096 1131437 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:53:09.823126 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:53:09.823272 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:53:09.823454 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:53:09.823623 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:53:09.823758 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:53:09.823928 1131437 main.go:141] libmachine: Using SSH client type: native
	I0318 12:53:09.824094 1131437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0318 12:53:09.824109 1131437 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-328109' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-328109/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-328109' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 12:53:09.930605 1131437 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 12:53:09.930637 1131437 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18429-1106816/.minikube CaCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18429-1106816/.minikube}
	I0318 12:53:09.930658 1131437 buildroot.go:174] setting up certificates
	I0318 12:53:09.930667 1131437 provision.go:84] configureAuth start
	I0318 12:53:09.930677 1131437 main.go:141] libmachine: (ha-328109) Calling .GetMachineName
	I0318 12:53:09.930962 1131437 main.go:141] libmachine: (ha-328109) Calling .GetIP
	I0318 12:53:09.933844 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:53:09.934226 1131437 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:53:09.934255 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:53:09.934387 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:53:09.936594 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:53:09.936941 1131437 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:53:09.936974 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:53:09.937076 1131437 provision.go:143] copyHostCerts
	I0318 12:53:09.937118 1131437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 12:53:09.937154 1131437 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem, removing ...
	I0318 12:53:09.937165 1131437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 12:53:09.937231 1131437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem (1078 bytes)
	I0318 12:53:09.937300 1131437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 12:53:09.937324 1131437 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem, removing ...
	I0318 12:53:09.937336 1131437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 12:53:09.937363 1131437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem (1123 bytes)
	I0318 12:53:09.937412 1131437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 12:53:09.937428 1131437 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem, removing ...
	I0318 12:53:09.937434 1131437 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 12:53:09.937454 1131437 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem (1679 bytes)
	I0318 12:53:09.937497 1131437 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem org=jenkins.ha-328109 san=[127.0.0.1 192.168.39.253 ha-328109 localhost minikube]
	I0318 12:53:10.042323 1131437 provision.go:177] copyRemoteCerts
	I0318 12:53:10.042400 1131437 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 12:53:10.042426 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:53:10.044882 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:53:10.045334 1131437 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:53:10.045363 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:53:10.045600 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:53:10.045891 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:53:10.046093 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:53:10.046245 1131437 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109/id_rsa Username:docker}
	I0318 12:53:10.132669 1131437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0318 12:53:10.132763 1131437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 12:53:10.167965 1131437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0318 12:53:10.168038 1131437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0318 12:53:10.203920 1131437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0318 12:53:10.203985 1131437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 12:53:10.234682 1131437 provision.go:87] duration metric: took 304.0003ms to configureAuth
	I0318 12:53:10.234716 1131437 buildroot.go:189] setting minikube options for container-runtime
	I0318 12:53:10.234998 1131437 config.go:182] Loaded profile config "ha-328109": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:53:10.235124 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:53:10.237631 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:53:10.238024 1131437 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:53:10.238049 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:53:10.238178 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:53:10.238361 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:53:10.238504 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:53:10.238625 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:53:10.238793 1131437 main.go:141] libmachine: Using SSH client type: native
	I0318 12:53:10.238966 1131437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0318 12:53:10.238987 1131437 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 12:54:41.248658 1131437 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 12:54:41.248696 1131437 machine.go:97] duration metric: took 1m31.675664373s to provisionDockerMachine
	I0318 12:54:41.248713 1131437 start.go:293] postStartSetup for "ha-328109" (driver="kvm2")
	I0318 12:54:41.248725 1131437 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 12:54:41.248744 1131437 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:54:41.249146 1131437 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 12:54:41.249190 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:54:41.252456 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:54:41.252926 1131437 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:54:41.252950 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:54:41.253085 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:54:41.253285 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:54:41.253473 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:54:41.253622 1131437 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109/id_rsa Username:docker}
	I0318 12:54:41.336717 1131437 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 12:54:41.341914 1131437 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 12:54:41.341957 1131437 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/addons for local assets ...
	I0318 12:54:41.342049 1131437 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/files for local assets ...
	I0318 12:54:41.342138 1131437 filesync.go:149] local asset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> 11141362.pem in /etc/ssl/certs
	I0318 12:54:41.342149 1131437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> /etc/ssl/certs/11141362.pem
	I0318 12:54:41.342292 1131437 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 12:54:41.353634 1131437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 12:54:41.380795 1131437 start.go:296] duration metric: took 132.063863ms for postStartSetup
	I0318 12:54:41.380853 1131437 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:54:41.381178 1131437 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0318 12:54:41.381206 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:54:41.383999 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:54:41.384377 1131437 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:54:41.384405 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:54:41.384537 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:54:41.384750 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:54:41.384947 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:54:41.385105 1131437 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109/id_rsa Username:docker}
	W0318 12:54:41.468533 1131437 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0318 12:54:41.468562 1131437 fix.go:56] duration metric: took 1m31.915743876s for fixHost
	I0318 12:54:41.468608 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:54:41.471380 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:54:41.471769 1131437 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:54:41.471799 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:54:41.472007 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:54:41.472222 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:54:41.472433 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:54:41.472552 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:54:41.472721 1131437 main.go:141] libmachine: Using SSH client type: native
	I0318 12:54:41.472961 1131437 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.253 22 <nil> <nil>}
	I0318 12:54:41.472976 1131437 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 12:54:41.577873 1131437 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710766481.545492902
	
	I0318 12:54:41.577902 1131437 fix.go:216] guest clock: 1710766481.545492902
	I0318 12:54:41.577912 1131437 fix.go:229] Guest: 2024-03-18 12:54:41.545492902 +0000 UTC Remote: 2024-03-18 12:54:41.468591753 +0000 UTC m=+92.063283113 (delta=76.901149ms)
	I0318 12:54:41.577934 1131437 fix.go:200] guest clock delta is within tolerance: 76.901149ms
	I0318 12:54:41.577939 1131437 start.go:83] releasing machines lock for "ha-328109", held for 1m32.025133507s
	I0318 12:54:41.577994 1131437 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:54:41.578319 1131437 main.go:141] libmachine: (ha-328109) Calling .GetIP
	I0318 12:54:41.580943 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:54:41.581338 1131437 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:54:41.581378 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:54:41.581503 1131437 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:54:41.582117 1131437 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:54:41.582302 1131437 main.go:141] libmachine: (ha-328109) Calling .DriverName
	I0318 12:54:41.582400 1131437 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 12:54:41.582448 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:54:41.582529 1131437 ssh_runner.go:195] Run: cat /version.json
	I0318 12:54:41.582547 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHHostname
	I0318 12:54:41.584944 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:54:41.585287 1131437 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:54:41.585342 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:54:41.585411 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:54:41.585422 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:54:41.585579 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:54:41.585738 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:54:41.585884 1131437 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:54:41.585915 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:54:41.585916 1131437 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109/id_rsa Username:docker}
	I0318 12:54:41.586052 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHPort
	I0318 12:54:41.586204 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHKeyPath
	I0318 12:54:41.586384 1131437 main.go:141] libmachine: (ha-328109) Calling .GetSSHUsername
	I0318 12:54:41.586562 1131437 sshutil.go:53] new ssh client: &{IP:192.168.39.253 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/ha-328109/id_rsa Username:docker}
	I0318 12:54:41.662255 1131437 ssh_runner.go:195] Run: systemctl --version
	I0318 12:54:41.691308 1131437 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 12:54:41.863307 1131437 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 12:54:41.872558 1131437 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 12:54:41.872643 1131437 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 12:54:41.882670 1131437 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0318 12:54:41.882692 1131437 start.go:494] detecting cgroup driver to use...
	I0318 12:54:41.882750 1131437 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 12:54:41.899679 1131437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 12:54:41.914952 1131437 docker.go:217] disabling cri-docker service (if available) ...
	I0318 12:54:41.915030 1131437 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 12:54:41.930037 1131437 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 12:54:41.945579 1131437 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 12:54:42.101881 1131437 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 12:54:42.264933 1131437 docker.go:233] disabling docker service ...
	I0318 12:54:42.265001 1131437 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 12:54:42.283512 1131437 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 12:54:42.297842 1131437 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 12:54:42.458746 1131437 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 12:54:42.613087 1131437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 12:54:42.627693 1131437 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 12:54:42.650018 1131437 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 12:54:42.650087 1131437 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 12:54:42.661645 1131437 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 12:54:42.661714 1131437 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 12:54:42.673195 1131437 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 12:54:42.684966 1131437 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 12:54:42.696709 1131437 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 12:54:42.708955 1131437 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 12:54:42.719257 1131437 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 12:54:42.729549 1131437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:54:42.878150 1131437 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 12:54:51.483164 1131437 ssh_runner.go:235] Completed: sudo systemctl restart crio: (8.604954948s)
	I0318 12:54:51.483205 1131437 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 12:54:51.483263 1131437 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 12:54:51.489699 1131437 start.go:562] Will wait 60s for crictl version
	I0318 12:54:51.489748 1131437 ssh_runner.go:195] Run: which crictl
	I0318 12:54:51.494036 1131437 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 12:54:51.541634 1131437 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 12:54:51.541719 1131437 ssh_runner.go:195] Run: crio --version
	I0318 12:54:51.578631 1131437 ssh_runner.go:195] Run: crio --version
	I0318 12:54:51.613758 1131437 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 12:54:51.615160 1131437 main.go:141] libmachine: (ha-328109) Calling .GetIP
	I0318 12:54:51.617668 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:54:51.617992 1131437 main.go:141] libmachine: (ha-328109) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:53:6b:a9", ip: ""} in network mk-ha-328109: {Iface:virbr1 ExpiryTime:2024-03-18 13:42:48 +0000 UTC Type:0 Mac:52:54:00:53:6b:a9 Iaid: IPaddr:192.168.39.253 Prefix:24 Hostname:ha-328109 Clientid:01:52:54:00:53:6b:a9}
	I0318 12:54:51.618021 1131437 main.go:141] libmachine: (ha-328109) DBG | domain ha-328109 has defined IP address 192.168.39.253 and MAC address 52:54:00:53:6b:a9 in network mk-ha-328109
	I0318 12:54:51.618174 1131437 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0318 12:54:51.623657 1131437 kubeadm.go:877] updating cluster {Name:ha-328109 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Cl
usterName:ha-328109 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.253 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.48 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker M
ountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 12:54:51.623802 1131437 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 12:54:51.623847 1131437 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 12:54:51.671851 1131437 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 12:54:51.671873 1131437 crio.go:415] Images already preloaded, skipping extraction
	I0318 12:54:51.671933 1131437 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 12:54:51.708378 1131437 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 12:54:51.708405 1131437 cache_images.go:84] Images are preloaded, skipping loading
	I0318 12:54:51.708414 1131437 kubeadm.go:928] updating node { 192.168.39.253 8443 v1.28.4 crio true true} ...
	I0318 12:54:51.708548 1131437 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-328109 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.253
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-328109 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 12:54:51.708634 1131437 ssh_runner.go:195] Run: crio config
	I0318 12:54:51.765085 1131437 cni.go:84] Creating CNI manager for ""
	I0318 12:54:51.765111 1131437 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0318 12:54:51.765123 1131437 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 12:54:51.765147 1131437 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.253 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-328109 NodeName:ha-328109 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.253"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.253 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 12:54:51.765352 1131437 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.253
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-328109"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.253
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.253"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 12:54:51.765376 1131437 kube-vip.go:111] generating kube-vip config ...
	I0318 12:54:51.765438 1131437 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0318 12:54:51.779130 1131437 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0318 12:54:51.779287 1131437 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0318 12:54:51.779387 1131437 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 12:54:51.790217 1131437 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 12:54:51.790288 1131437 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0318 12:54:51.800654 1131437 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0318 12:54:51.819725 1131437 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 12:54:51.838314 1131437 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0318 12:54:51.857126 1131437 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0318 12:54:51.877808 1131437 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0318 12:54:51.882286 1131437 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 12:54:52.036553 1131437 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 12:54:52.052829 1131437 certs.go:68] Setting up /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109 for IP: 192.168.39.253
	I0318 12:54:52.052868 1131437 certs.go:194] generating shared ca certs ...
	I0318 12:54:52.052892 1131437 certs.go:226] acquiring lock for ca certs: {Name:mka24dbce78b8549c3bcfe7842863cce2dcda207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:54:52.053111 1131437 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key
	I0318 12:54:52.053161 1131437 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key
	I0318 12:54:52.053171 1131437 certs.go:256] generating profile certs ...
	I0318 12:54:52.053251 1131437 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/client.key
	I0318 12:54:52.053278 1131437 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key.8c23f119
	I0318 12:54:52.053298 1131437 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt.8c23f119 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.253 192.168.39.246 192.168.39.241 192.168.39.254]
	I0318 12:54:52.207972 1131437 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt.8c23f119 ...
	I0318 12:54:52.208012 1131437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt.8c23f119: {Name:mkbd66155d7290e4053cdbaf559cad07c945947d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:54:52.208206 1131437 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key.8c23f119 ...
	I0318 12:54:52.208219 1131437 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key.8c23f119: {Name:mk22b0e81237fd60af1980ed17fc7999742b869d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:54:52.208286 1131437 certs.go:381] copying /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt.8c23f119 -> /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt
	I0318 12:54:52.208525 1131437 certs.go:385] copying /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key.8c23f119 -> /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key
	I0318 12:54:52.208675 1131437 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/proxy-client.key
	I0318 12:54:52.208692 1131437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 12:54:52.208704 1131437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0318 12:54:52.208723 1131437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 12:54:52.208733 1131437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 12:54:52.208748 1131437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0318 12:54:52.208760 1131437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0318 12:54:52.208774 1131437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0318 12:54:52.208784 1131437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0318 12:54:52.208840 1131437 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem (1338 bytes)
	W0318 12:54:52.208878 1131437 certs.go:480] ignoring /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136_empty.pem, impossibly tiny 0 bytes
	I0318 12:54:52.208888 1131437 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 12:54:52.208919 1131437 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem (1078 bytes)
	I0318 12:54:52.208942 1131437 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem (1123 bytes)
	I0318 12:54:52.208967 1131437 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem (1679 bytes)
	I0318 12:54:52.209001 1131437 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 12:54:52.209030 1131437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem -> /usr/share/ca-certificates/1114136.pem
	I0318 12:54:52.209044 1131437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> /usr/share/ca-certificates/11141362.pem
	I0318 12:54:52.209056 1131437 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:54:52.209933 1131437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 12:54:52.287577 1131437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 12:54:52.329714 1131437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 12:54:52.356968 1131437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 12:54:52.388896 1131437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0318 12:54:52.416373 1131437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 12:54:52.444943 1131437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 12:54:52.476929 1131437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/ha-328109/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 12:54:52.511334 1131437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem --> /usr/share/ca-certificates/1114136.pem (1338 bytes)
	I0318 12:54:52.538242 1131437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /usr/share/ca-certificates/11141362.pem (1708 bytes)
	I0318 12:54:52.564846 1131437 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 12:54:52.604622 1131437 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 12:54:52.639201 1131437 ssh_runner.go:195] Run: openssl version
	I0318 12:54:52.646185 1131437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11141362.pem && ln -fs /usr/share/ca-certificates/11141362.pem /etc/ssl/certs/11141362.pem"
	I0318 12:54:52.662103 1131437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11141362.pem
	I0318 12:54:52.667308 1131437 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:26 /usr/share/ca-certificates/11141362.pem
	I0318 12:54:52.667416 1131437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11141362.pem
	I0318 12:54:52.673670 1131437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11141362.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 12:54:52.687024 1131437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 12:54:52.699801 1131437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:54:52.705106 1131437 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:54:52.705156 1131437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 12:54:52.711744 1131437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 12:54:52.723263 1131437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1114136.pem && ln -fs /usr/share/ca-certificates/1114136.pem /etc/ssl/certs/1114136.pem"
	I0318 12:54:52.736043 1131437 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1114136.pem
	I0318 12:54:52.740964 1131437 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:26 /usr/share/ca-certificates/1114136.pem
	I0318 12:54:52.741004 1131437 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1114136.pem
	I0318 12:54:52.747018 1131437 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1114136.pem /etc/ssl/certs/51391683.0"
	I0318 12:54:52.757419 1131437 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 12:54:52.762166 1131437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 12:54:52.768182 1131437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 12:54:52.774288 1131437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 12:54:52.780310 1131437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 12:54:52.790899 1131437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 12:54:52.797096 1131437 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 12:54:52.803146 1131437 kubeadm.go:391] StartCluster: {Name:ha-328109 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Clust
erName:ha-328109 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.253 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.246 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.241 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.48 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Moun
tIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 12:54:52.803310 1131437 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 12:54:52.803385 1131437 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 12:54:52.851910 1131437 cri.go:89] found id: "1a189929210d919728381e96a6d9318a1e316fa85c27ebef21233d0d47127af6"
	I0318 12:54:52.851939 1131437 cri.go:89] found id: "dde4e0e7b3b10dff8ab01deb49351214d58d396e82ba052fdb95f8eab71407ae"
	I0318 12:54:52.851944 1131437 cri.go:89] found id: "575d4d72c34ad10849775cbb98cf1577b733b01400518921cbab3e061da5b2cd"
	I0318 12:54:52.851955 1131437 cri.go:89] found id: "5ab2478c4da6ad7b5451bbe4902eef614054446a19eb7b3d8d3c785dbeb01621"
	I0318 12:54:52.851959 1131437 cri.go:89] found id: "0b630b0fc05d4dd89718593f42880e41e071014b4d0f87791cba4fbf8cbe8785"
	I0318 12:54:52.851964 1131437 cri.go:89] found id: "742842736e1b52735c8a18b3d61ed7ee1d6157f2ca03ec317995f36597c45ac6"
	I0318 12:54:52.851968 1131437 cri.go:89] found id: "82a8d2ac6a60c0d04e48a38416de7feb33d590cfcd74d28da2317aa1a5781135"
	I0318 12:54:52.851971 1131437 cri.go:89] found id: "f2c5cd4a724230c91f476a1bb5326701801eff1b70dc4db0510f092d89ea1562"
	I0318 12:54:52.851975 1131437 cri.go:89] found id: "f8d915a384e6a3c259a15968303b0ddc686a9ced49722152813fc101b3c78cc6"
	I0318 12:54:52.851982 1131437 cri.go:89] found id: "55e393cf77a1b472d984125ae3bd870d3fed9dca4eeefc346bda04ae88654205"
	I0318 12:54:52.851989 1131437 cri.go:89] found id: "de552ed42d49524bbca97633e73d6ac4e5301a813a012290635def375a78dcd6"
	I0318 12:54:52.851992 1131437 cri.go:89] found id: "a10929bb9737267586a458e8f8aac60622ae3a299b6b542776e59e2b12e4ffef"
	I0318 12:54:52.851996 1131437 cri.go:89] found id: "7e2150d8010e2a1399f1df83c9dba81c77d606e55e0c21b18da231e82e01413a"
	I0318 12:54:52.851999 1131437 cri.go:89] found id: ""
	I0318 12:54:52.852054 1131437 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Mar 18 13:00:12 ha-328109 crio[3851]: time="2024-03-18 13:00:12.765245298Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710766812765218257,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a1cb6702-801d-4d1f-84cb-087a8d82530b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:00:12 ha-328109 crio[3851]: time="2024-03-18 13:00:12.765989288Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0a912967-991a-4e61-9b91-3abe6ec247b8 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:00:12 ha-328109 crio[3851]: time="2024-03-18 13:00:12.766219052Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0a912967-991a-4e61-9b91-3abe6ec247b8 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:00:12 ha-328109 crio[3851]: time="2024-03-18 13:00:12.767178718Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ea108d58a3b234c0a0aa9835ddc50f9701d8677f0f71cf4a0b341c8408bdc220,PodSandboxId:9528776ae09e3d86c54b708cae447fa7ecddd1f1e9936f355919903ccce52807,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710766578192740100,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90ce7ae6-4ac4-4c14-b2df-1a182f4d8086,},Annotations:map[string]string{io.kubernetes.container.hash: ed6ee57,io.kubernetes.container.restartCount: 4,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0852ff1b3a7566b792fe5630abc27f79ebadf267fb8c81bc86dd84a71da2c11d,PodSandboxId:36a87ec33cb1a5df3431b0f399ed41ef482fbd8d27ced950186cfa118964465d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710766558199627731,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnv5b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc2583b6-a5b3-4f53-bf54-6cc7611fc2a6,},Annotations:map[string]string{io.kubernetes.container.hash: 9aa5dbe1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e5b7e3fd47f4bbe14e7d94f794829b1574c8f826780e0c85d2bd0bd0088b1e0,PodSandboxId:37c1ac272d35c69afe5c8ab7f748f3a1d9310cc8eef1ce6398d6be4ab0f8b1cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710766543207774604,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f004a20401b95f693a90cc8d0b7e8acc,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deb997db5453b48f305b99734da3ba8a7fab972de98530d49064bdac432e8a08,PodSandboxId:ea022fd052d1562a0b5415522b6442f1fc4adb6a946b91b9a1b97d654f7c1245,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710766542189594357,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0919befc6ed870de46dfd820b38f0ac2,},Annotations:map[string]string{io.kubernetes.container.hash: 110d18ba,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e9ba73af3c0deb5fd36b57c17ac2ffa8d7cf075f05f25095bd3e2b9562928ad,PodSandboxId:9528776ae09e3d86c54b708cae447fa7ecddd1f1e9936f355919903ccce52807,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710766535201288625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90ce7ae6-4ac4-4c14-b2df-1a182f4d8086,},Annotations:map[string]string{io.kubernetes.container.hash: ed6ee57,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:262b1f5b882c6f972233f82ea6c8c37741119410b426deccb897e2c2ddef5bae,PodSandboxId:f57e4a0707ff2dfc3ab957755d401ec1a4e0bbbb7d59517a3f4fe4601d7d5ef8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710766532572756133,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-fz4kl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5a0215bb-df62-44b9-9d60-d45778880b8b,},Annotations:map[string]string{io.kubernetes.container.hash: 25c17d37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bde47888a6d380a6b21e060a28901be678b90bd9441d281802531d5d40ae7090,PodSandboxId:cf59423d7268db021441ccd23b8b2036b0ac62116b7b1a4d758ce0b602386af9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710766499797614520,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9ffc89cd42ea8da4e6070b43e0ace35,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:a736d02ea6c00f9dba7ff099370fb79b5a0a10daa5881ea70f382f4ef3b8777c,PodSandboxId:f90d0e204275b03bf497101219d5714c78a4b431332dfae63f2e69c096c794da,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710766499124671144,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dhz88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afb0afad-2b88-4abb-9039-aaf9c64ad920,},Annotations:map[string]string{io.kubernetes.container.hash: 34178776,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f6eaed81
eb434344922005711291d3960965a5b6f4210f844c7981f0c1f817c,PodSandboxId:5e6ea275123b34fed36cd49b3e2cde6def832872312bd46730db68ed6a2508ef,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710766499485293460,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c78nc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c1159dc-6545-41a6-bb4a-75fdab519c9e,},Annotations:map[string]string{io.kubernetes.container.hash: 5111e8b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4506a10fb16676d5321e24ca5eda8224cf5e32e096f326a2342ce49837ab2985,PodSandboxId:3cdd9461a9c1e7d693b563c139aba05dca5a597a66da89de1c27792c0daf86ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710766499349536355,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90e740be10e7ccb198e1e310b9749e68,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc22b73970f8bb427a03d4e37b8a268623ff9be743eec8bbda4c734eecadba72,PodSandboxId:df6e30eed668e970a9b759629e41489911069ac3b081f6040020882c07f9b027,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710766499416993376,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-p5xgj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a865f86-96cf-4687-9283-d2ebe5616d1a,},Annotations:map[string]string{io.kubernetes.container.hash: b948acd7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01d47c0073f4496a95a53f50e76c4c998777cfc82273e6437f07dcc8b326b896,PodSandboxId:ea022fd052d1562a0b5415522b6442f1fc4adb6a946b91b9a1b97d654f7c1245,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710766499063447609,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 0919befc6ed870de46dfd820b38f0ac2,},Annotations:map[string]string{io.kubernetes.container.hash: 110d18ba,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beca5c009540cfc74a33264d724cef9b109ee455809e11c09a8c296225794f65,PodSandboxId:37c1ac272d35c69afe5c8ab7f748f3a1d9310cc8eef1ce6398d6be4ab0f8b1cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710766499065238664,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: f004a20401b95f693a90cc8d0b7e8acc,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:999a93802a1030f438fcc2bf9271cbe9c919c4d10f93ba808cc05297ef9001ec,PodSandboxId:0309587cbe9bf9210f7ea34a08933b0df48ed944ce2c7b522048372147f3aa89,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710766499024633400,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e31f7b77f2cd8547e7aa12e86f29a80,},Anno
tations:map[string]string{io.kubernetes.container.hash: a6edf2fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cddb56f3c76f9dc0a6c993034b68480caf7493fa7a13c6edc72f2dc5289ba517,PodSandboxId:36a87ec33cb1a5df3431b0f399ed41ef482fbd8d27ced950186cfa118964465d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710766492596704153,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnv5b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc2583b6-a5b3-4f53-bf54-6cc7611fc2a6,},Annotations:map[string]string{io.kubern
etes.container.hash: 9aa5dbe1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:575d4d72c34ad10849775cbb98cf1577b733b01400518921cbab3e061da5b2cd,PodSandboxId:2f84d6cd36a0e19e1f074479696d192558cade4f5b4267d45bfa78281643ee69,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710766292205383311,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9ffc89cd42ea8da4e6070b43e0ace35,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kuberne
tes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5b3318798546b55a1e7fe3618fe7848b8cb4108312aa2f5354c7dbdc9103e72,PodSandboxId:10b35c5d18ac59942090a6917bedf01b1f31744cd5f0a3d39949835bf6108d5a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710765998607477170,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-fz4kl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5a0215bb-df62-44b9-9d60-d45778880b8b,},Annotations:map[string]string{io.kubernetes.container.hash: 25c17d37,io.kubernet
es.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82a8d2ac6a60c0d04e48a38416de7feb33d590cfcd74d28da2317aa1a5781135,PodSandboxId:b487ae421169c8afbdd3c57cd6781dfee8b050a5ec9476b5eb7d8d46c81511c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710765818122667843,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-p5xgj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a865f86-96cf-4687-9283-d2ebe5616d1a,},Annotations:map[string]string{io.kubernetes.container.hash: b948acd7,io.kubernetes.container.ports: [{\"name\":\
"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c5cd4a724230c91f476a1bb5326701801eff1b70dc4db0510f092d89ea1562,PodSandboxId:16503713d19863c7d11d4a566e3591316bf9bb87017c1247be871b73cd241150,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710765818092289225,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c78nc,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 7c1159dc-6545-41a6-bb4a-75fdab519c9e,},Annotations:map[string]string{io.kubernetes.container.hash: 5111e8b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8d915a384e6a3c259a15968303b0ddc686a9ced49722152813fc101b3c78cc6,PodSandboxId:35275a602be1c60babb8ca88eca935f3264c955bbf0347e589ea368f3036d635,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2
899304398e,State:CONTAINER_EXITED,CreatedAt:1710765812830565830,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dhz88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afb0afad-2b88-4abb-9039-aaf9c64ad920,},Annotations:map[string]string{io.kubernetes.container.hash: 34178776,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55e393cf77a1b472d984125ae3bd870d3fed9dca4eeefc346bda04ae88654205,PodSandboxId:8231d33571b5e6a87638a5647fcc9e70ced44830421377dda3555afca480b302,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,Create
dAt:1710765791394325942,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e31f7b77f2cd8547e7aa12e86f29a80,},Annotations:map[string]string{io.kubernetes.container.hash: a6edf2fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de552ed42d49524bbca97633e73d6ac4e5301a813a012290635def375a78dcd6,PodSandboxId:8cfa0459c6e2ae66756a8424cb981cdb5680680fc5907eba1b8d83cfdd1a7280,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710765791364285455,Labels:map[string]
string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90e740be10e7ccb198e1e310b9749e68,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0a912967-991a-4e61-9b91-3abe6ec247b8 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:00:12 ha-328109 crio[3851]: time="2024-03-18 13:00:12.828693156Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ac85453f-5b59-4d22-b744-e9f91db821a5 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:00:12 ha-328109 crio[3851]: time="2024-03-18 13:00:12.828805361Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ac85453f-5b59-4d22-b744-e9f91db821a5 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:00:12 ha-328109 crio[3851]: time="2024-03-18 13:00:12.830581114Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cdda00ed-30ab-4505-bdc2-226566ab6543 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:00:12 ha-328109 crio[3851]: time="2024-03-18 13:00:12.831441100Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710766812831415051,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cdda00ed-30ab-4505-bdc2-226566ab6543 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:00:12 ha-328109 crio[3851]: time="2024-03-18 13:00:12.832063139Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=04bfbe47-57f4-40e6-8318-6db928344e9b name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:00:12 ha-328109 crio[3851]: time="2024-03-18 13:00:12.832468088Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=04bfbe47-57f4-40e6-8318-6db928344e9b name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:00:12 ha-328109 crio[3851]: time="2024-03-18 13:00:12.832881436Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ea108d58a3b234c0a0aa9835ddc50f9701d8677f0f71cf4a0b341c8408bdc220,PodSandboxId:9528776ae09e3d86c54b708cae447fa7ecddd1f1e9936f355919903ccce52807,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710766578192740100,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90ce7ae6-4ac4-4c14-b2df-1a182f4d8086,},Annotations:map[string]string{io.kubernetes.container.hash: ed6ee57,io.kubernetes.container.restartCount: 4,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0852ff1b3a7566b792fe5630abc27f79ebadf267fb8c81bc86dd84a71da2c11d,PodSandboxId:36a87ec33cb1a5df3431b0f399ed41ef482fbd8d27ced950186cfa118964465d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710766558199627731,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnv5b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc2583b6-a5b3-4f53-bf54-6cc7611fc2a6,},Annotations:map[string]string{io.kubernetes.container.hash: 9aa5dbe1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e5b7e3fd47f4bbe14e7d94f794829b1574c8f826780e0c85d2bd0bd0088b1e0,PodSandboxId:37c1ac272d35c69afe5c8ab7f748f3a1d9310cc8eef1ce6398d6be4ab0f8b1cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710766543207774604,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f004a20401b95f693a90cc8d0b7e8acc,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deb997db5453b48f305b99734da3ba8a7fab972de98530d49064bdac432e8a08,PodSandboxId:ea022fd052d1562a0b5415522b6442f1fc4adb6a946b91b9a1b97d654f7c1245,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710766542189594357,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0919befc6ed870de46dfd820b38f0ac2,},Annotations:map[string]string{io.kubernetes.container.hash: 110d18ba,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e9ba73af3c0deb5fd36b57c17ac2ffa8d7cf075f05f25095bd3e2b9562928ad,PodSandboxId:9528776ae09e3d86c54b708cae447fa7ecddd1f1e9936f355919903ccce52807,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710766535201288625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90ce7ae6-4ac4-4c14-b2df-1a182f4d8086,},Annotations:map[string]string{io.kubernetes.container.hash: ed6ee57,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:262b1f5b882c6f972233f82ea6c8c37741119410b426deccb897e2c2ddef5bae,PodSandboxId:f57e4a0707ff2dfc3ab957755d401ec1a4e0bbbb7d59517a3f4fe4601d7d5ef8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710766532572756133,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-fz4kl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5a0215bb-df62-44b9-9d60-d45778880b8b,},Annotations:map[string]string{io.kubernetes.container.hash: 25c17d37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bde47888a6d380a6b21e060a28901be678b90bd9441d281802531d5d40ae7090,PodSandboxId:cf59423d7268db021441ccd23b8b2036b0ac62116b7b1a4d758ce0b602386af9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710766499797614520,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9ffc89cd42ea8da4e6070b43e0ace35,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:a736d02ea6c00f9dba7ff099370fb79b5a0a10daa5881ea70f382f4ef3b8777c,PodSandboxId:f90d0e204275b03bf497101219d5714c78a4b431332dfae63f2e69c096c794da,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710766499124671144,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dhz88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afb0afad-2b88-4abb-9039-aaf9c64ad920,},Annotations:map[string]string{io.kubernetes.container.hash: 34178776,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f6eaed81
eb434344922005711291d3960965a5b6f4210f844c7981f0c1f817c,PodSandboxId:5e6ea275123b34fed36cd49b3e2cde6def832872312bd46730db68ed6a2508ef,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710766499485293460,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c78nc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c1159dc-6545-41a6-bb4a-75fdab519c9e,},Annotations:map[string]string{io.kubernetes.container.hash: 5111e8b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4506a10fb16676d5321e24ca5eda8224cf5e32e096f326a2342ce49837ab2985,PodSandboxId:3cdd9461a9c1e7d693b563c139aba05dca5a597a66da89de1c27792c0daf86ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710766499349536355,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90e740be10e7ccb198e1e310b9749e68,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc22b73970f8bb427a03d4e37b8a268623ff9be743eec8bbda4c734eecadba72,PodSandboxId:df6e30eed668e970a9b759629e41489911069ac3b081f6040020882c07f9b027,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710766499416993376,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-p5xgj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a865f86-96cf-4687-9283-d2ebe5616d1a,},Annotations:map[string]string{io.kubernetes.container.hash: b948acd7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01d47c0073f4496a95a53f50e76c4c998777cfc82273e6437f07dcc8b326b896,PodSandboxId:ea022fd052d1562a0b5415522b6442f1fc4adb6a946b91b9a1b97d654f7c1245,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710766499063447609,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 0919befc6ed870de46dfd820b38f0ac2,},Annotations:map[string]string{io.kubernetes.container.hash: 110d18ba,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beca5c009540cfc74a33264d724cef9b109ee455809e11c09a8c296225794f65,PodSandboxId:37c1ac272d35c69afe5c8ab7f748f3a1d9310cc8eef1ce6398d6be4ab0f8b1cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710766499065238664,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: f004a20401b95f693a90cc8d0b7e8acc,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:999a93802a1030f438fcc2bf9271cbe9c919c4d10f93ba808cc05297ef9001ec,PodSandboxId:0309587cbe9bf9210f7ea34a08933b0df48ed944ce2c7b522048372147f3aa89,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710766499024633400,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e31f7b77f2cd8547e7aa12e86f29a80,},Anno
tations:map[string]string{io.kubernetes.container.hash: a6edf2fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cddb56f3c76f9dc0a6c993034b68480caf7493fa7a13c6edc72f2dc5289ba517,PodSandboxId:36a87ec33cb1a5df3431b0f399ed41ef482fbd8d27ced950186cfa118964465d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710766492596704153,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnv5b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc2583b6-a5b3-4f53-bf54-6cc7611fc2a6,},Annotations:map[string]string{io.kubern
etes.container.hash: 9aa5dbe1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:575d4d72c34ad10849775cbb98cf1577b733b01400518921cbab3e061da5b2cd,PodSandboxId:2f84d6cd36a0e19e1f074479696d192558cade4f5b4267d45bfa78281643ee69,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710766292205383311,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9ffc89cd42ea8da4e6070b43e0ace35,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kuberne
tes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5b3318798546b55a1e7fe3618fe7848b8cb4108312aa2f5354c7dbdc9103e72,PodSandboxId:10b35c5d18ac59942090a6917bedf01b1f31744cd5f0a3d39949835bf6108d5a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710765998607477170,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-fz4kl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5a0215bb-df62-44b9-9d60-d45778880b8b,},Annotations:map[string]string{io.kubernetes.container.hash: 25c17d37,io.kubernet
es.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82a8d2ac6a60c0d04e48a38416de7feb33d590cfcd74d28da2317aa1a5781135,PodSandboxId:b487ae421169c8afbdd3c57cd6781dfee8b050a5ec9476b5eb7d8d46c81511c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710765818122667843,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-p5xgj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a865f86-96cf-4687-9283-d2ebe5616d1a,},Annotations:map[string]string{io.kubernetes.container.hash: b948acd7,io.kubernetes.container.ports: [{\"name\":\
"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c5cd4a724230c91f476a1bb5326701801eff1b70dc4db0510f092d89ea1562,PodSandboxId:16503713d19863c7d11d4a566e3591316bf9bb87017c1247be871b73cd241150,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710765818092289225,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c78nc,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 7c1159dc-6545-41a6-bb4a-75fdab519c9e,},Annotations:map[string]string{io.kubernetes.container.hash: 5111e8b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8d915a384e6a3c259a15968303b0ddc686a9ced49722152813fc101b3c78cc6,PodSandboxId:35275a602be1c60babb8ca88eca935f3264c955bbf0347e589ea368f3036d635,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2
899304398e,State:CONTAINER_EXITED,CreatedAt:1710765812830565830,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dhz88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afb0afad-2b88-4abb-9039-aaf9c64ad920,},Annotations:map[string]string{io.kubernetes.container.hash: 34178776,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55e393cf77a1b472d984125ae3bd870d3fed9dca4eeefc346bda04ae88654205,PodSandboxId:8231d33571b5e6a87638a5647fcc9e70ced44830421377dda3555afca480b302,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,Create
dAt:1710765791394325942,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e31f7b77f2cd8547e7aa12e86f29a80,},Annotations:map[string]string{io.kubernetes.container.hash: a6edf2fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de552ed42d49524bbca97633e73d6ac4e5301a813a012290635def375a78dcd6,PodSandboxId:8cfa0459c6e2ae66756a8424cb981cdb5680680fc5907eba1b8d83cfdd1a7280,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710765791364285455,Labels:map[string]
string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90e740be10e7ccb198e1e310b9749e68,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=04bfbe47-57f4-40e6-8318-6db928344e9b name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:00:12 ha-328109 crio[3851]: time="2024-03-18 13:00:12.890573597Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ea2198d2-9796-4d24-9770-3bae89d0faf0 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:00:12 ha-328109 crio[3851]: time="2024-03-18 13:00:12.890647595Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ea2198d2-9796-4d24-9770-3bae89d0faf0 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:00:12 ha-328109 crio[3851]: time="2024-03-18 13:00:12.891911412Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4909bada-4f9c-4492-86d5-0684229f6605 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:00:12 ha-328109 crio[3851]: time="2024-03-18 13:00:12.892457166Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710766812892432710,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4909bada-4f9c-4492-86d5-0684229f6605 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:00:12 ha-328109 crio[3851]: time="2024-03-18 13:00:12.893210575Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9bb33dae-d050-4ea1-9446-876c8fa1dba2 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:00:12 ha-328109 crio[3851]: time="2024-03-18 13:00:12.893272662Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9bb33dae-d050-4ea1-9446-876c8fa1dba2 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:00:12 ha-328109 crio[3851]: time="2024-03-18 13:00:12.893809556Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ea108d58a3b234c0a0aa9835ddc50f9701d8677f0f71cf4a0b341c8408bdc220,PodSandboxId:9528776ae09e3d86c54b708cae447fa7ecddd1f1e9936f355919903ccce52807,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710766578192740100,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90ce7ae6-4ac4-4c14-b2df-1a182f4d8086,},Annotations:map[string]string{io.kubernetes.container.hash: ed6ee57,io.kubernetes.container.restartCount: 4,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0852ff1b3a7566b792fe5630abc27f79ebadf267fb8c81bc86dd84a71da2c11d,PodSandboxId:36a87ec33cb1a5df3431b0f399ed41ef482fbd8d27ced950186cfa118964465d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710766558199627731,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnv5b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc2583b6-a5b3-4f53-bf54-6cc7611fc2a6,},Annotations:map[string]string{io.kubernetes.container.hash: 9aa5dbe1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e5b7e3fd47f4bbe14e7d94f794829b1574c8f826780e0c85d2bd0bd0088b1e0,PodSandboxId:37c1ac272d35c69afe5c8ab7f748f3a1d9310cc8eef1ce6398d6be4ab0f8b1cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710766543207774604,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f004a20401b95f693a90cc8d0b7e8acc,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deb997db5453b48f305b99734da3ba8a7fab972de98530d49064bdac432e8a08,PodSandboxId:ea022fd052d1562a0b5415522b6442f1fc4adb6a946b91b9a1b97d654f7c1245,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710766542189594357,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0919befc6ed870de46dfd820b38f0ac2,},Annotations:map[string]string{io.kubernetes.container.hash: 110d18ba,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e9ba73af3c0deb5fd36b57c17ac2ffa8d7cf075f05f25095bd3e2b9562928ad,PodSandboxId:9528776ae09e3d86c54b708cae447fa7ecddd1f1e9936f355919903ccce52807,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710766535201288625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90ce7ae6-4ac4-4c14-b2df-1a182f4d8086,},Annotations:map[string]string{io.kubernetes.container.hash: ed6ee57,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:262b1f5b882c6f972233f82ea6c8c37741119410b426deccb897e2c2ddef5bae,PodSandboxId:f57e4a0707ff2dfc3ab957755d401ec1a4e0bbbb7d59517a3f4fe4601d7d5ef8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710766532572756133,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-fz4kl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5a0215bb-df62-44b9-9d60-d45778880b8b,},Annotations:map[string]string{io.kubernetes.container.hash: 25c17d37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bde47888a6d380a6b21e060a28901be678b90bd9441d281802531d5d40ae7090,PodSandboxId:cf59423d7268db021441ccd23b8b2036b0ac62116b7b1a4d758ce0b602386af9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710766499797614520,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9ffc89cd42ea8da4e6070b43e0ace35,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:a736d02ea6c00f9dba7ff099370fb79b5a0a10daa5881ea70f382f4ef3b8777c,PodSandboxId:f90d0e204275b03bf497101219d5714c78a4b431332dfae63f2e69c096c794da,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710766499124671144,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dhz88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afb0afad-2b88-4abb-9039-aaf9c64ad920,},Annotations:map[string]string{io.kubernetes.container.hash: 34178776,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f6eaed81
eb434344922005711291d3960965a5b6f4210f844c7981f0c1f817c,PodSandboxId:5e6ea275123b34fed36cd49b3e2cde6def832872312bd46730db68ed6a2508ef,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710766499485293460,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c78nc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c1159dc-6545-41a6-bb4a-75fdab519c9e,},Annotations:map[string]string{io.kubernetes.container.hash: 5111e8b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4506a10fb16676d5321e24ca5eda8224cf5e32e096f326a2342ce49837ab2985,PodSandboxId:3cdd9461a9c1e7d693b563c139aba05dca5a597a66da89de1c27792c0daf86ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710766499349536355,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90e740be10e7ccb198e1e310b9749e68,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc22b73970f8bb427a03d4e37b8a268623ff9be743eec8bbda4c734eecadba72,PodSandboxId:df6e30eed668e970a9b759629e41489911069ac3b081f6040020882c07f9b027,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710766499416993376,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-p5xgj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a865f86-96cf-4687-9283-d2ebe5616d1a,},Annotations:map[string]string{io.kubernetes.container.hash: b948acd7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01d47c0073f4496a95a53f50e76c4c998777cfc82273e6437f07dcc8b326b896,PodSandboxId:ea022fd052d1562a0b5415522b6442f1fc4adb6a946b91b9a1b97d654f7c1245,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710766499063447609,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 0919befc6ed870de46dfd820b38f0ac2,},Annotations:map[string]string{io.kubernetes.container.hash: 110d18ba,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beca5c009540cfc74a33264d724cef9b109ee455809e11c09a8c296225794f65,PodSandboxId:37c1ac272d35c69afe5c8ab7f748f3a1d9310cc8eef1ce6398d6be4ab0f8b1cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710766499065238664,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: f004a20401b95f693a90cc8d0b7e8acc,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:999a93802a1030f438fcc2bf9271cbe9c919c4d10f93ba808cc05297ef9001ec,PodSandboxId:0309587cbe9bf9210f7ea34a08933b0df48ed944ce2c7b522048372147f3aa89,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710766499024633400,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e31f7b77f2cd8547e7aa12e86f29a80,},Anno
tations:map[string]string{io.kubernetes.container.hash: a6edf2fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cddb56f3c76f9dc0a6c993034b68480caf7493fa7a13c6edc72f2dc5289ba517,PodSandboxId:36a87ec33cb1a5df3431b0f399ed41ef482fbd8d27ced950186cfa118964465d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710766492596704153,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnv5b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc2583b6-a5b3-4f53-bf54-6cc7611fc2a6,},Annotations:map[string]string{io.kubern
etes.container.hash: 9aa5dbe1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:575d4d72c34ad10849775cbb98cf1577b733b01400518921cbab3e061da5b2cd,PodSandboxId:2f84d6cd36a0e19e1f074479696d192558cade4f5b4267d45bfa78281643ee69,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710766292205383311,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9ffc89cd42ea8da4e6070b43e0ace35,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kuberne
tes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5b3318798546b55a1e7fe3618fe7848b8cb4108312aa2f5354c7dbdc9103e72,PodSandboxId:10b35c5d18ac59942090a6917bedf01b1f31744cd5f0a3d39949835bf6108d5a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710765998607477170,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-fz4kl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5a0215bb-df62-44b9-9d60-d45778880b8b,},Annotations:map[string]string{io.kubernetes.container.hash: 25c17d37,io.kubernet
es.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82a8d2ac6a60c0d04e48a38416de7feb33d590cfcd74d28da2317aa1a5781135,PodSandboxId:b487ae421169c8afbdd3c57cd6781dfee8b050a5ec9476b5eb7d8d46c81511c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710765818122667843,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-p5xgj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a865f86-96cf-4687-9283-d2ebe5616d1a,},Annotations:map[string]string{io.kubernetes.container.hash: b948acd7,io.kubernetes.container.ports: [{\"name\":\
"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c5cd4a724230c91f476a1bb5326701801eff1b70dc4db0510f092d89ea1562,PodSandboxId:16503713d19863c7d11d4a566e3591316bf9bb87017c1247be871b73cd241150,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710765818092289225,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c78nc,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 7c1159dc-6545-41a6-bb4a-75fdab519c9e,},Annotations:map[string]string{io.kubernetes.container.hash: 5111e8b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8d915a384e6a3c259a15968303b0ddc686a9ced49722152813fc101b3c78cc6,PodSandboxId:35275a602be1c60babb8ca88eca935f3264c955bbf0347e589ea368f3036d635,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2
899304398e,State:CONTAINER_EXITED,CreatedAt:1710765812830565830,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dhz88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afb0afad-2b88-4abb-9039-aaf9c64ad920,},Annotations:map[string]string{io.kubernetes.container.hash: 34178776,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55e393cf77a1b472d984125ae3bd870d3fed9dca4eeefc346bda04ae88654205,PodSandboxId:8231d33571b5e6a87638a5647fcc9e70ced44830421377dda3555afca480b302,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,Create
dAt:1710765791394325942,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e31f7b77f2cd8547e7aa12e86f29a80,},Annotations:map[string]string{io.kubernetes.container.hash: a6edf2fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de552ed42d49524bbca97633e73d6ac4e5301a813a012290635def375a78dcd6,PodSandboxId:8cfa0459c6e2ae66756a8424cb981cdb5680680fc5907eba1b8d83cfdd1a7280,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710765791364285455,Labels:map[string]
string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90e740be10e7ccb198e1e310b9749e68,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9bb33dae-d050-4ea1-9446-876c8fa1dba2 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:00:12 ha-328109 crio[3851]: time="2024-03-18 13:00:12.947405849Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=23fd9945-ac39-45de-bcd5-72c9e680de45 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:00:12 ha-328109 crio[3851]: time="2024-03-18 13:00:12.947482330Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=23fd9945-ac39-45de-bcd5-72c9e680de45 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:00:12 ha-328109 crio[3851]: time="2024-03-18 13:00:12.949599451Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4c032f44-7e0f-4ddf-8180-c5632ec2eeb6 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:00:12 ha-328109 crio[3851]: time="2024-03-18 13:00:12.950613124Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710766812950500392,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4c032f44-7e0f-4ddf-8180-c5632ec2eeb6 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:00:12 ha-328109 crio[3851]: time="2024-03-18 13:00:12.951929018Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f5a48b53-a2a7-44cb-961d-4836dfa1f0b7 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:00:12 ha-328109 crio[3851]: time="2024-03-18 13:00:12.952067152Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f5a48b53-a2a7-44cb-961d-4836dfa1f0b7 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:00:12 ha-328109 crio[3851]: time="2024-03-18 13:00:12.952545717Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ea108d58a3b234c0a0aa9835ddc50f9701d8677f0f71cf4a0b341c8408bdc220,PodSandboxId:9528776ae09e3d86c54b708cae447fa7ecddd1f1e9936f355919903ccce52807,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710766578192740100,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90ce7ae6-4ac4-4c14-b2df-1a182f4d8086,},Annotations:map[string]string{io.kubernetes.container.hash: ed6ee57,io.kubernetes.container.restartCount: 4,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0852ff1b3a7566b792fe5630abc27f79ebadf267fb8c81bc86dd84a71da2c11d,PodSandboxId:36a87ec33cb1a5df3431b0f399ed41ef482fbd8d27ced950186cfa118964465d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710766558199627731,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnv5b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc2583b6-a5b3-4f53-bf54-6cc7611fc2a6,},Annotations:map[string]string{io.kubernetes.container.hash: 9aa5dbe1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termina
tion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e5b7e3fd47f4bbe14e7d94f794829b1574c8f826780e0c85d2bd0bd0088b1e0,PodSandboxId:37c1ac272d35c69afe5c8ab7f748f3a1d9310cc8eef1ce6398d6be4ab0f8b1cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710766543207774604,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f004a20401b95f693a90cc8d0b7e8acc,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deb997db5453b48f305b99734da3ba8a7fab972de98530d49064bdac432e8a08,PodSandboxId:ea022fd052d1562a0b5415522b6442f1fc4adb6a946b91b9a1b97d654f7c1245,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710766542189594357,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0919befc6ed870de46dfd820b38f0ac2,},Annotations:map[string]string{io.kubernetes.container.hash: 110d18ba,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e9ba73af3c0deb5fd36b57c17ac2ffa8d7cf075f05f25095bd3e2b9562928ad,PodSandboxId:9528776ae09e3d86c54b708cae447fa7ecddd1f1e9936f355919903ccce52807,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710766535201288625,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90ce7ae6-4ac4-4c14-b2df-1a182f4d8086,},Annotations:map[string]string{io.kubernetes.container.hash: ed6ee57,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:262b1f5b882c6f972233f82ea6c8c37741119410b426deccb897e2c2ddef5bae,PodSandboxId:f57e4a0707ff2dfc3ab957755d401ec1a4e0bbbb7d59517a3f4fe4601d7d5ef8,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710766532572756133,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-fz4kl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5a0215bb-df62-44b9-9d60-d45778880b8b,},Annotations:map[string]string{io.kubernetes.container.hash: 25c17d37,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,
io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bde47888a6d380a6b21e060a28901be678b90bd9441d281802531d5d40ae7090,PodSandboxId:cf59423d7268db021441ccd23b8b2036b0ac62116b7b1a4d758ce0b602386af9,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710766499797614520,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9ffc89cd42ea8da4e6070b43e0ace35,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePerio
d: 30,},},&Container{Id:a736d02ea6c00f9dba7ff099370fb79b5a0a10daa5881ea70f382f4ef3b8777c,PodSandboxId:f90d0e204275b03bf497101219d5714c78a4b431332dfae63f2e69c096c794da,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710766499124671144,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dhz88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afb0afad-2b88-4abb-9039-aaf9c64ad920,},Annotations:map[string]string{io.kubernetes.container.hash: 34178776,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f6eaed81
eb434344922005711291d3960965a5b6f4210f844c7981f0c1f817c,PodSandboxId:5e6ea275123b34fed36cd49b3e2cde6def832872312bd46730db68ed6a2508ef,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710766499485293460,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c78nc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7c1159dc-6545-41a6-bb4a-75fdab519c9e,},Annotations:map[string]string{io.kubernetes.container.hash: 5111e8b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,i
o.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4506a10fb16676d5321e24ca5eda8224cf5e32e096f326a2342ce49837ab2985,PodSandboxId:3cdd9461a9c1e7d693b563c139aba05dca5a597a66da89de1c27792c0daf86ef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710766499349536355,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90e740be10e7ccb198e1e310b9749e68,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc22b73970f8bb427a03d4e37b8a268623ff9be743eec8bbda4c734eecadba72,PodSandboxId:df6e30eed668e970a9b759629e41489911069ac3b081f6040020882c07f9b027,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710766499416993376,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-p5xgj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a865f86-96cf-4687-9283-d2ebe5616d1a,},Annotations:map[string]string{io.kubernetes.container.hash: b948acd7,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},
{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01d47c0073f4496a95a53f50e76c4c998777cfc82273e6437f07dcc8b326b896,PodSandboxId:ea022fd052d1562a0b5415522b6442f1fc4adb6a946b91b9a1b97d654f7c1245,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710766499063447609,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 0919befc6ed870de46dfd820b38f0ac2,},Annotations:map[string]string{io.kubernetes.container.hash: 110d18ba,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:beca5c009540cfc74a33264d724cef9b109ee455809e11c09a8c296225794f65,PodSandboxId:37c1ac272d35c69afe5c8ab7f748f3a1d9310cc8eef1ce6398d6be4ab0f8b1cb,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710766499065238664,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: f004a20401b95f693a90cc8d0b7e8acc,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:999a93802a1030f438fcc2bf9271cbe9c919c4d10f93ba808cc05297ef9001ec,PodSandboxId:0309587cbe9bf9210f7ea34a08933b0df48ed944ce2c7b522048372147f3aa89,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710766499024633400,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e31f7b77f2cd8547e7aa12e86f29a80,},Anno
tations:map[string]string{io.kubernetes.container.hash: a6edf2fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cddb56f3c76f9dc0a6c993034b68480caf7493fa7a13c6edc72f2dc5289ba517,PodSandboxId:36a87ec33cb1a5df3431b0f399ed41ef482fbd8d27ced950186cfa118964465d,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710766492596704153,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-vnv5b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc2583b6-a5b3-4f53-bf54-6cc7611fc2a6,},Annotations:map[string]string{io.kubern
etes.container.hash: 9aa5dbe1,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:575d4d72c34ad10849775cbb98cf1577b733b01400518921cbab3e061da5b2cd,PodSandboxId:2f84d6cd36a0e19e1f074479696d192558cade4f5b4267d45bfa78281643ee69,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710766292205383311,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c9ffc89cd42ea8da4e6070b43e0ace35,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kuberne
tes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5b3318798546b55a1e7fe3618fe7848b8cb4108312aa2f5354c7dbdc9103e72,PodSandboxId:10b35c5d18ac59942090a6917bedf01b1f31744cd5f0a3d39949835bf6108d5a,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710765998607477170,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-fz4kl,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 5a0215bb-df62-44b9-9d60-d45778880b8b,},Annotations:map[string]string{io.kubernetes.container.hash: 25c17d37,io.kubernet
es.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82a8d2ac6a60c0d04e48a38416de7feb33d590cfcd74d28da2317aa1a5781135,PodSandboxId:b487ae421169c8afbdd3c57cd6781dfee8b050a5ec9476b5eb7d8d46c81511c6,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710765818122667843,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-p5xgj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a865f86-96cf-4687-9283-d2ebe5616d1a,},Annotations:map[string]string{io.kubernetes.container.hash: b948acd7,io.kubernetes.container.ports: [{\"name\":\
"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2c5cd4a724230c91f476a1bb5326701801eff1b70dc4db0510f092d89ea1562,PodSandboxId:16503713d19863c7d11d4a566e3591316bf9bb87017c1247be871b73cd241150,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710765818092289225,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c78nc,io.kubernetes.pod.nam
espace: kube-system,io.kubernetes.pod.uid: 7c1159dc-6545-41a6-bb4a-75fdab519c9e,},Annotations:map[string]string{io.kubernetes.container.hash: 5111e8b8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8d915a384e6a3c259a15968303b0ddc686a9ced49722152813fc101b3c78cc6,PodSandboxId:35275a602be1c60babb8ca88eca935f3264c955bbf0347e589ea368f3036d635,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2
899304398e,State:CONTAINER_EXITED,CreatedAt:1710765812830565830,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-dhz88,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: afb0afad-2b88-4abb-9039-aaf9c64ad920,},Annotations:map[string]string{io.kubernetes.container.hash: 34178776,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55e393cf77a1b472d984125ae3bd870d3fed9dca4eeefc346bda04ae88654205,PodSandboxId:8231d33571b5e6a87638a5647fcc9e70ced44830421377dda3555afca480b302,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,Create
dAt:1710765791394325942,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e31f7b77f2cd8547e7aa12e86f29a80,},Annotations:map[string]string{io.kubernetes.container.hash: a6edf2fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:de552ed42d49524bbca97633e73d6ac4e5301a813a012290635def375a78dcd6,PodSandboxId:8cfa0459c6e2ae66756a8424cb981cdb5680680fc5907eba1b8d83cfdd1a7280,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710765791364285455,Labels:map[string]
string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-328109,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 90e740be10e7ccb198e1e310b9749e68,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f5a48b53-a2a7-44cb-961d-4836dfa1f0b7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ea108d58a3b23       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       4                   9528776ae09e3       storage-provisioner
	0852ff1b3a756       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      4 minutes ago       Running             kindnet-cni               3                   36a87ec33cb1a       kindnet-vnv5b
	7e5b7e3fd47f4       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      4 minutes ago       Running             kube-controller-manager   2                   37c1ac272d35c       kube-controller-manager-ha-328109
	deb997db5453b       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      4 minutes ago       Running             kube-apiserver            3                   ea022fd052d15       kube-apiserver-ha-328109
	1e9ba73af3c0d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Exited              storage-provisioner       3                   9528776ae09e3       storage-provisioner
	262b1f5b882c6       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   f57e4a0707ff2       busybox-5b5d89c9d6-fz4kl
	bde47888a6d38       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      5 minutes ago       Running             kube-vip                  3                   cf59423d7268d       kube-vip-ha-328109
	6f6eaed81eb43       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      5 minutes ago       Running             coredns                   1                   5e6ea275123b3       coredns-5dd5756b68-c78nc
	fc22b73970f8b       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      5 minutes ago       Running             coredns                   1                   df6e30eed668e       coredns-5dd5756b68-p5xgj
	4506a10fb1667       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      5 minutes ago       Running             kube-scheduler            1                   3cdd9461a9c1e       kube-scheduler-ha-328109
	a736d02ea6c00       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      5 minutes ago       Running             kube-proxy                1                   f90d0e204275b       kube-proxy-dhz88
	beca5c009540c       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      5 minutes ago       Exited              kube-controller-manager   1                   37c1ac272d35c       kube-controller-manager-ha-328109
	01d47c0073f44       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      5 minutes ago       Exited              kube-apiserver            2                   ea022fd052d15       kube-apiserver-ha-328109
	999a93802a103       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      5 minutes ago       Running             etcd                      1                   0309587cbe9bf       etcd-ha-328109
	cddb56f3c76f9       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      5 minutes ago       Exited              kindnet-cni               2                   36a87ec33cb1a       kindnet-vnv5b
	575d4d72c34ad       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      8 minutes ago       Exited              kube-vip                  2                   2f84d6cd36a0e       kube-vip-ha-328109
	c5b3318798546       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   10b35c5d18ac5       busybox-5b5d89c9d6-fz4kl
	82a8d2ac6a60c       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      16 minutes ago      Exited              coredns                   0                   b487ae421169c       coredns-5dd5756b68-p5xgj
	f2c5cd4a72423       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      16 minutes ago      Exited              coredns                   0                   16503713d1986       coredns-5dd5756b68-c78nc
	f8d915a384e6a       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      16 minutes ago      Exited              kube-proxy                0                   35275a602be1c       kube-proxy-dhz88
	55e393cf77a1b       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      17 minutes ago      Exited              etcd                      0                   8231d33571b5e       etcd-ha-328109
	de552ed42d495       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      17 minutes ago      Exited              kube-scheduler            0                   8cfa0459c6e2a       kube-scheduler-ha-328109
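	
	A minimal sketch of how a listing like the one above could be regathered on this node for further debugging, assuming the out/minikube-linux-amd64 binary path and the ha-328109 profile used throughout this report (crictl flags can vary with the CRI-O version; the container ID is a placeholder):
	
	  out/minikube-linux-amd64 -p ha-328109 ssh "sudo crictl ps -a"
	  out/minikube-linux-amd64 -p ha-328109 ssh "sudo crictl logs <container-id>"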
	
	
	==> coredns [6f6eaed81eb434344922005711291d3960965a5b6f4210f844c7981f0c1f817c] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55799 - 41388 "HINFO IN 6639687177075769404.563070019675184471. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.020507334s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:53548->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [82a8d2ac6a60c0d04e48a38416de7feb33d590cfcd74d28da2317aa1a5781135] <==
	[INFO] 10.244.0.4:56631 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152161s
	[INFO] 10.244.0.4:45190 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103827s
	[INFO] 10.244.2.2:34185 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105521s
	[INFO] 10.244.2.2:44888 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000730863s
	[INFO] 10.244.1.2:40647 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000166359s
	[INFO] 10.244.1.2:57968 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001882507s
	[INFO] 10.244.1.2:55297 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000096788s
	[INFO] 10.244.1.2:36989 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000088322s
	[INFO] 10.244.1.2:37677 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000205894s
	[INFO] 10.244.1.2:32814 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000074605s
	[INFO] 10.244.1.2:44489 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000102528s
	[INFO] 10.244.0.4:53607 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000206955s
	[INFO] 10.244.2.2:47974 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000313502s
	[INFO] 10.244.1.2:49641 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000193514s
	[INFO] 10.244.1.2:52193 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000126417s
	[INFO] 10.244.1.2:55887 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000104434s
	[INFO] 10.244.0.4:43288 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014747s
	[INFO] 10.244.0.4:57574 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000192178s
	[INFO] 10.244.0.4:58440 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000128408s
	[INFO] 10.244.2.2:50297 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000168343s
	[INFO] 10.244.2.2:37188 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000133774s
	[INFO] 10.244.1.2:33883 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000095091s
	[INFO] 10.244.1.2:45785 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000123693s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f2c5cd4a724230c91f476a1bb5326701801eff1b70dc4db0510f092d89ea1562] <==
	[INFO] 10.244.0.4:54630 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003946825s
	[INFO] 10.244.0.4:37807 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000185941s
	[INFO] 10.244.0.4:54881 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000227886s
	[INFO] 10.244.2.2:43048 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000261065s
	[INFO] 10.244.2.2:43023 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001686526s
	[INFO] 10.244.2.2:59097 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000204051s
	[INFO] 10.244.2.2:49621 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000262805s
	[INFO] 10.244.2.2:48119 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001371219s
	[INFO] 10.244.2.2:49912 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000148592s
	[INFO] 10.244.1.2:60652 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.0016374s
	[INFO] 10.244.0.4:55891 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000079534s
	[INFO] 10.244.0.4:53025 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000231262s
	[INFO] 10.244.0.4:39659 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000116818s
	[INFO] 10.244.2.2:48403 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125802s
	[INFO] 10.244.2.2:42106 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000092079s
	[INFO] 10.244.2.2:41088 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000204572s
	[INFO] 10.244.1.2:60379 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000108875s
	[INFO] 10.244.0.4:42381 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00008263s
	[INFO] 10.244.2.2:47207 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000237181s
	[INFO] 10.244.2.2:44002 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000102925s
	[INFO] 10.244.1.2:54332 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000126486s
	[INFO] 10.244.1.2:38590 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000245357s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1935&timeout=7m20s&timeoutSeconds=440&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: unexpected EOF
	
	
	==> coredns [fc22b73970f8bb427a03d4e37b8a268623ff9be743eec8bbda4c734eecadba72] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:36261 - 4647 "HINFO IN 2596574517611928040.3468582005389048924. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.066906463s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host
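	
	The repeated "no route to host" and "connection refused" warnings above all target the kubernetes Service VIP (10.96.0.1:443). A minimal sketch of commands that could confirm whether that VIP currently maps to a healthy kube-apiserver endpoint, assuming a kubectl context named ha-328109 (minikube normally names the context after the profile):
	
	  kubectl --context ha-328109 -n default get svc kubernetes
	  kubectl --context ha-328109 -n default get endpoints kubernetes
	  kubectl --context ha-328109 -n kube-system get pods -l component=kube-apiserver -o wide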
	
	
	==> describe nodes <==
	Name:               ha-328109
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-328109
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	                    minikube.k8s.io/name=ha-328109
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T12_43_22_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 12:43:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-328109
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 13:00:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 12:55:54 +0000   Mon, 18 Mar 2024 12:43:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 12:55:54 +0000   Mon, 18 Mar 2024 12:43:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 12:55:54 +0000   Mon, 18 Mar 2024 12:43:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 12:55:54 +0000   Mon, 18 Mar 2024 12:43:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.253
	  Hostname:    ha-328109
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 a8b3a9b95f2141b891e3cee14aaad62e
	  System UUID:                a8b3a9b9-5f21-41b8-91e3-cee14aaad62e
	  Boot ID:                    906b8684-634a-4838-bb8e-d090694f9649
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-fz4kl             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-5dd5756b68-c78nc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-5dd5756b68-p5xgj             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-328109                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-vnv5b                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-328109             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-328109    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-dhz88                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-328109             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-328109                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m31s                  kube-proxy       
	  Normal   Starting                 16m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  17m (x8 over 17m)      kubelet          Node ha-328109 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  17m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     17m (x7 over 17m)      kubelet          Node ha-328109 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    17m (x8 over 17m)      kubelet          Node ha-328109 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 17m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     16m                    kubelet          Node ha-328109 status is now: NodeHasSufficientPID
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  16m                    kubelet          Node ha-328109 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16m                    kubelet          Node ha-328109 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           16m                    node-controller  Node ha-328109 event: Registered Node ha-328109 in Controller
	  Normal   NodeReady                16m                    kubelet          Node ha-328109 status is now: NodeReady
	  Normal   RegisteredNode           15m                    node-controller  Node ha-328109 event: Registered Node ha-328109 in Controller
	  Normal   RegisteredNode           14m                    node-controller  Node ha-328109 event: Registered Node ha-328109 in Controller
	  Warning  ContainerGCFailed        5m52s (x2 over 6m52s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m16s                  node-controller  Node ha-328109 event: Registered Node ha-328109 in Controller
	  Normal   RegisteredNode           4m16s                  node-controller  Node ha-328109 event: Registered Node ha-328109 in Controller
	  Normal   RegisteredNode           3m10s                  node-controller  Node ha-328109 event: Registered Node ha-328109 in Controller
	
	
	Name:               ha-328109-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-328109-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	                    minikube.k8s.io/name=ha-328109
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T12_44_39_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 12:44:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-328109-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 13:00:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 12:56:28 +0000   Mon, 18 Mar 2024 12:55:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 12:56:28 +0000   Mon, 18 Mar 2024 12:55:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 12:56:28 +0000   Mon, 18 Mar 2024 12:55:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 12:56:28 +0000   Mon, 18 Mar 2024 12:55:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.246
	  Hostname:    ha-328109-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 148457ca2d4c4c78bdc5b74dba85e93e
	  System UUID:                148457ca-2d4c-4c78-bdc5-b74dba85e93e
	  Boot ID:                    a393a97b-91e0-431e-93b5-6e815ca4673f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-sx4mf                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-328109-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-lc74t                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-328109-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-328109-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-7zgrx                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-328109-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-328109-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  Starting                 4m6s                   kube-proxy       
	  Normal  RegisteredNode           15m                    node-controller  Node ha-328109-m02 event: Registered Node ha-328109-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-328109-m02 event: Registered Node ha-328109-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-328109-m02 status is now: NodeNotReady
	  Normal  NodeHasNoDiskPressure    4m54s (x8 over 4m54s)  kubelet          Node ha-328109-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 4m54s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m54s (x8 over 4m54s)  kubelet          Node ha-328109-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     4m54s (x7 over 4m54s)  kubelet          Node ha-328109-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m54s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m16s                  node-controller  Node ha-328109-m02 event: Registered Node ha-328109-m02 in Controller
	  Normal  RegisteredNode           4m16s                  node-controller  Node ha-328109-m02 event: Registered Node ha-328109-m02 in Controller
	  Normal  RegisteredNode           3m10s                  node-controller  Node ha-328109-m02 event: Registered Node ha-328109-m02 in Controller
	
	
	Name:               ha-328109-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-328109-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	                    minikube.k8s.io/name=ha-328109
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T12_47_16_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 12:47:15 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-328109-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 12:57:44 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 18 Mar 2024 12:57:24 +0000   Mon, 18 Mar 2024 12:58:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 18 Mar 2024 12:57:24 +0000   Mon, 18 Mar 2024 12:58:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 18 Mar 2024 12:57:24 +0000   Mon, 18 Mar 2024 12:58:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 18 Mar 2024 12:57:24 +0000   Mon, 18 Mar 2024 12:58:27 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.48
	  Hostname:    ha-328109-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 ac08f798ce4148b48f36040f95b7eaf9
	  System UUID:                ac08f798-ce41-48b4-8f36-040f95b7eaf9
	  Boot ID:                    d839d937-93b3-471f-bd80-ed3e21d7b7e5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-bqffh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-ggcw6               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-4fxbn            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m46s                  kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   NodeHasSufficientPID     12m (x5 over 12m)      kubelet          Node ha-328109-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  12m (x5 over 12m)      kubelet          Node ha-328109-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x5 over 12m)      kubelet          Node ha-328109-m04 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           12m                    node-controller  Node ha-328109-m04 event: Registered Node ha-328109-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-328109-m04 event: Registered Node ha-328109-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-328109-m04 event: Registered Node ha-328109-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-328109-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m16s                  node-controller  Node ha-328109-m04 event: Registered Node ha-328109-m04 in Controller
	  Normal   RegisteredNode           4m16s                  node-controller  Node ha-328109-m04 event: Registered Node ha-328109-m04 in Controller
	  Normal   RegisteredNode           3m10s                  node-controller  Node ha-328109-m04 event: Registered Node ha-328109-m04 in Controller
	  Normal   Starting                 2m50s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  2m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeNotReady             2m50s                  kubelet          Node ha-328109-m04 status is now: NodeNotReady
	  Normal   NodeHasSufficientMemory  2m49s (x4 over 2m50s)  kubelet          Node ha-328109-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m49s (x4 over 2m50s)  kubelet          Node ha-328109-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m49s (x4 over 2m50s)  kubelet          Node ha-328109-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m49s (x3 over 2m50s)  kubelet          Node ha-328109-m04 has been rebooted, boot id: d839d937-93b3-471f-bd80-ed3e21d7b7e5
	  Normal   NodeReady                2m49s (x2 over 2m49s)  kubelet          Node ha-328109-m04 status is now: NodeReady
	  Normal   NodeNotReady             106s (x2 over 3m36s)   node-controller  Node ha-328109-m04 status is now: NodeNotReady
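	
	For reference, a node dump like the one above can be regathered from the same cluster with kubectl; this is a sketch, and the kubeconfig context name is assumed to match the minikube profile (ha-328109):
	
	  # hypothetical reproduction of the node description captured above
	  kubectl --context ha-328109 describe node ha-328109-m04
	  # quick view of the unreachable taints and Ready condition
	  kubectl --context ha-328109 get node ha-328109-m04 -o wide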
	
	
	==> dmesg <==
	[  +0.058901] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058875] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.159253] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.141446] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.251865] systemd-fstab-generator[666]: Ignoring "noauto" option for root device
	[Mar18 12:43] systemd-fstab-generator[768]: Ignoring "noauto" option for root device
	[  +0.059542] kauditd_printk_skb: 130 callbacks suppressed
	[  +3.985090] systemd-fstab-generator[943]: Ignoring "noauto" option for root device
	[  +1.363754] kauditd_printk_skb: 57 callbacks suppressed
	[  +7.738793] kauditd_printk_skb: 40 callbacks suppressed
	[  +1.856189] systemd-fstab-generator[1368]: Ignoring "noauto" option for root device
	[ +11.678244] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.089183] kauditd_printk_skb: 37 callbacks suppressed
	[Mar18 12:44] kauditd_printk_skb: 27 callbacks suppressed
	[Mar18 12:54] systemd-fstab-generator[3774]: Ignoring "noauto" option for root device
	[  +0.159104] systemd-fstab-generator[3786]: Ignoring "noauto" option for root device
	[  +0.192027] systemd-fstab-generator[3800]: Ignoring "noauto" option for root device
	[  +0.165625] systemd-fstab-generator[3812]: Ignoring "noauto" option for root device
	[  +0.263127] systemd-fstab-generator[3836]: Ignoring "noauto" option for root device
	[  +9.153822] systemd-fstab-generator[3940]: Ignoring "noauto" option for root device
	[  +0.088827] kauditd_printk_skb: 100 callbacks suppressed
	[  +6.629321] kauditd_printk_skb: 22 callbacks suppressed
	[Mar18 12:55] kauditd_printk_skb: 83 callbacks suppressed
	[ +29.962164] kauditd_printk_skb: 5 callbacks suppressed
	[Mar18 12:56] kauditd_printk_skb: 1 callbacks suppressed
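	
	The dmesg excerpt above comes from inside the primary guest VM; a sketch of re-reading it, assuming the ha-328109 profile is still running:
	
	  # hypothetical: dump the guest kernel ring buffer over minikube ssh
	  minikube -p ha-328109 ssh -- sudo dmesg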
	
	
	==> etcd [55e393cf77a1b472d984125ae3bd870d3fed9dca4eeefc346bda04ae88654205] <==
	{"level":"info","ts":"2024-03-18T12:53:10.396885Z","caller":"traceutil/trace.go:171","msg":"trace[1217743190] range","detail":"{range_begin:/registry/namespaces/; range_end:/registry/namespaces0; }","duration":"8.528939872s","start":"2024-03-18T12:53:01.86794Z","end":"2024-03-18T12:53:10.39688Z","steps":["trace[1217743190] 'agreement among raft nodes before linearized reading'  (duration: 8.528897642s)"],"step_count":1}
	WARNING: 2024/03/18 12:53:10 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-03-18T12:53:10.376262Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"8.53533765s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets/\" range_end:\"/registry/secrets0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-03-18T12:53:10.407696Z","caller":"traceutil/trace.go:171","msg":"trace[763707145] range","detail":"{range_begin:/registry/secrets/; range_end:/registry/secrets0; }","duration":"8.566680294s","start":"2024-03-18T12:53:01.840918Z","end":"2024-03-18T12:53:10.407599Z","steps":["trace[763707145] 'agreement among raft nodes before linearized reading'  (duration: 8.535336727s)"],"step_count":1}
	WARNING: 2024/03/18 12:53:10 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-03-18T12:53:10.435057Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.253:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-18T12:53:10.435273Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.253:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-18T12:53:10.435548Z","caller":"etcdserver/server.go:1456","msg":"skipped leadership transfer; local server is not leader","local-member-id":"3773e8bb706c8f02","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-03-18T12:53:10.435817Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"4b4182f4aee369f6"}
	{"level":"info","ts":"2024-03-18T12:53:10.435961Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"4b4182f4aee369f6"}
	{"level":"info","ts":"2024-03-18T12:53:10.436184Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"4b4182f4aee369f6"}
	{"level":"info","ts":"2024-03-18T12:53:10.436391Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"3773e8bb706c8f02","remote-peer-id":"4b4182f4aee369f6"}
	{"level":"info","ts":"2024-03-18T12:53:10.43657Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3773e8bb706c8f02","remote-peer-id":"4b4182f4aee369f6"}
	{"level":"info","ts":"2024-03-18T12:53:10.436808Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"3773e8bb706c8f02","remote-peer-id":"4b4182f4aee369f6"}
	{"level":"info","ts":"2024-03-18T12:53:10.436974Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"4b4182f4aee369f6"}
	{"level":"info","ts":"2024-03-18T12:53:10.437058Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"1c5cd57f626d058a"}
	{"level":"info","ts":"2024-03-18T12:53:10.437462Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"1c5cd57f626d058a"}
	{"level":"info","ts":"2024-03-18T12:53:10.437623Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"1c5cd57f626d058a"}
	{"level":"info","ts":"2024-03-18T12:53:10.437879Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a"}
	{"level":"info","ts":"2024-03-18T12:53:10.437969Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a"}
	{"level":"info","ts":"2024-03-18T12:53:10.43802Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"3773e8bb706c8f02","remote-peer-id":"1c5cd57f626d058a"}
	{"level":"info","ts":"2024-03-18T12:53:10.438159Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"1c5cd57f626d058a"}
	{"level":"info","ts":"2024-03-18T12:53:10.440734Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.253:2380"}
	{"level":"info","ts":"2024-03-18T12:53:10.440973Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.253:2380"}
	{"level":"info","ts":"2024-03-18T12:53:10.441015Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"ha-328109","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.253:2380"],"advertise-client-urls":["https://192.168.39.253:2379"]}
	
	
	==> etcd [999a93802a1030f438fcc2bf9271cbe9c919c4d10f93ba808cc05297ef9001ec] <==
	{"level":"info","ts":"2024-03-18T12:56:46.316561Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"3773e8bb706c8f02","remote-peer-id":"4b4182f4aee369f6"}
	{"level":"info","ts":"2024-03-18T12:56:46.329546Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3773e8bb706c8f02","remote-peer-id":"4b4182f4aee369f6"}
	{"level":"info","ts":"2024-03-18T12:56:46.342805Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"3773e8bb706c8f02","to":"4b4182f4aee369f6","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-03-18T12:56:46.342878Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"3773e8bb706c8f02","remote-peer-id":"4b4182f4aee369f6"}
	{"level":"info","ts":"2024-03-18T12:56:46.34327Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"3773e8bb706c8f02","to":"4b4182f4aee369f6","stream-type":"stream Message"}
	{"level":"info","ts":"2024-03-18T12:56:46.343301Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"3773e8bb706c8f02","remote-peer-id":"4b4182f4aee369f6"}
	{"level":"info","ts":"2024-03-18T12:56:53.015525Z","caller":"traceutil/trace.go:171","msg":"trace[1934948209] transaction","detail":"{read_only:false; response_revision:2447; number_of_response:1; }","duration":"132.676107ms","start":"2024-03-18T12:56:52.882824Z","end":"2024-03-18T12:56:53.0155Z","steps":["trace[1934948209] 'process raft request'  (duration: 127.509722ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T12:57:38.64488Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3773e8bb706c8f02 switched to configuration voters=(2043743074008237450 3995793186150452994)"}
	{"level":"info","ts":"2024-03-18T12:57:38.645263Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"4606fbf8165cac5a","local-member-id":"3773e8bb706c8f02","removed-remote-peer-id":"4b4182f4aee369f6","removed-remote-peer-urls":["https://192.168.39.241:2380"]}
	{"level":"info","ts":"2024-03-18T12:57:38.645378Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"4b4182f4aee369f6"}
	{"level":"warn","ts":"2024-03-18T12:57:38.64557Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"4b4182f4aee369f6"}
	{"level":"info","ts":"2024-03-18T12:57:38.645634Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"4b4182f4aee369f6"}
	{"level":"warn","ts":"2024-03-18T12:57:38.645801Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"4b4182f4aee369f6"}
	{"level":"info","ts":"2024-03-18T12:57:38.645854Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"4b4182f4aee369f6"}
	{"level":"info","ts":"2024-03-18T12:57:38.646016Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"3773e8bb706c8f02","remote-peer-id":"4b4182f4aee369f6"}
	{"level":"warn","ts":"2024-03-18T12:57:38.646398Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3773e8bb706c8f02","remote-peer-id":"4b4182f4aee369f6","error":"context canceled"}
	{"level":"warn","ts":"2024-03-18T12:57:38.64648Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"4b4182f4aee369f6","error":"failed to read 4b4182f4aee369f6 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-03-18T12:57:38.646563Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"3773e8bb706c8f02","remote-peer-id":"4b4182f4aee369f6"}
	{"level":"warn","ts":"2024-03-18T12:57:38.646849Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"3773e8bb706c8f02","remote-peer-id":"4b4182f4aee369f6","error":"context canceled"}
	{"level":"info","ts":"2024-03-18T12:57:38.646916Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"3773e8bb706c8f02","remote-peer-id":"4b4182f4aee369f6"}
	{"level":"info","ts":"2024-03-18T12:57:38.646944Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"4b4182f4aee369f6"}
	{"level":"info","ts":"2024-03-18T12:57:38.646959Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"3773e8bb706c8f02","removed-remote-peer-id":"4b4182f4aee369f6"}
	{"level":"warn","ts":"2024-03-18T12:57:38.667841Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.241:52504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-03-18T12:57:38.675421Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"3773e8bb706c8f02","remote-peer-id-stream-handler":"3773e8bb706c8f02","remote-peer-id-from":"4b4182f4aee369f6"}
	{"level":"warn","ts":"2024-03-18T12:57:38.681701Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.241:36486","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:00:13 up 17 min,  0 users,  load average: 0.28, 0.36, 0.30
	Linux ha-328109 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [0852ff1b3a7566b792fe5630abc27f79ebadf267fb8c81bc86dd84a71da2c11d] <==
	I0318 12:59:29.483667       1 main.go:250] Node ha-328109-m04 has CIDR [10.244.3.0/24] 
	I0318 12:59:39.499368       1 main.go:223] Handling node with IPs: map[192.168.39.253:{}]
	I0318 12:59:39.499432       1 main.go:227] handling current node
	I0318 12:59:39.499447       1 main.go:223] Handling node with IPs: map[192.168.39.246:{}]
	I0318 12:59:39.499456       1 main.go:250] Node ha-328109-m02 has CIDR [10.244.1.0/24] 
	I0318 12:59:39.499577       1 main.go:223] Handling node with IPs: map[192.168.39.48:{}]
	I0318 12:59:39.499609       1 main.go:250] Node ha-328109-m04 has CIDR [10.244.3.0/24] 
	I0318 12:59:49.511451       1 main.go:223] Handling node with IPs: map[192.168.39.253:{}]
	I0318 12:59:49.511496       1 main.go:227] handling current node
	I0318 12:59:49.511506       1 main.go:223] Handling node with IPs: map[192.168.39.246:{}]
	I0318 12:59:49.511512       1 main.go:250] Node ha-328109-m02 has CIDR [10.244.1.0/24] 
	I0318 12:59:49.511657       1 main.go:223] Handling node with IPs: map[192.168.39.48:{}]
	I0318 12:59:49.511665       1 main.go:250] Node ha-328109-m04 has CIDR [10.244.3.0/24] 
	I0318 12:59:59.546557       1 main.go:223] Handling node with IPs: map[192.168.39.253:{}]
	I0318 12:59:59.546613       1 main.go:227] handling current node
	I0318 12:59:59.546649       1 main.go:223] Handling node with IPs: map[192.168.39.246:{}]
	I0318 12:59:59.546661       1 main.go:250] Node ha-328109-m02 has CIDR [10.244.1.0/24] 
	I0318 12:59:59.546804       1 main.go:223] Handling node with IPs: map[192.168.39.48:{}]
	I0318 12:59:59.546812       1 main.go:250] Node ha-328109-m04 has CIDR [10.244.3.0/24] 
	I0318 13:00:09.557374       1 main.go:223] Handling node with IPs: map[192.168.39.253:{}]
	I0318 13:00:09.557576       1 main.go:227] handling current node
	I0318 13:00:09.557610       1 main.go:223] Handling node with IPs: map[192.168.39.246:{}]
	I0318 13:00:09.557630       1 main.go:250] Node ha-328109-m02 has CIDR [10.244.1.0/24] 
	I0318 13:00:09.557840       1 main.go:223] Handling node with IPs: map[192.168.39.48:{}]
	I0318 13:00:09.557865       1 main.go:250] Node ha-328109-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [cddb56f3c76f9dc0a6c993034b68480caf7493fa7a13c6edc72f2dc5289ba517] <==
	I0318 12:54:53.051501       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0318 12:54:53.051617       1 main.go:107] hostIP = 192.168.39.253
	podIP = 192.168.39.253
	I0318 12:54:53.051863       1 main.go:116] setting mtu 1500 for CNI 
	I0318 12:54:53.051929       1 main.go:146] kindnetd IP family: "ipv4"
	I0318 12:54:53.051964       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0318 12:54:56.443656       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0318 12:54:56.444206       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0318 12:54:57.445273       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0318 12:54:59.447524       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0318 12:55:03.291499       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
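	
	Since the container runtime here is CRI-O, the crashed kindnet container above can also be inspected directly with crictl; a sketch, with the container ID taken from the section header:
	
	  # hypothetical: list kindnet containers (including exited ones) and re-read the crashed one's logs
	  minikube -p ha-328109 ssh -- sudo crictl ps -a --name kindnet
	  minikube -p ha-328109 ssh -- sudo crictl logs cddb56f3c76f9dc0a6c993034b68480caf7493fa7a13c6edc72f2dc5289ba517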
	
	
	==> kube-apiserver [01d47c0073f4496a95a53f50e76c4c998777cfc82273e6437f07dcc8b326b896] <==
	I0318 12:55:00.213481       1 options.go:220] external host was not specified, using 192.168.39.253
	I0318 12:55:00.219474       1 server.go:148] Version: v1.28.4
	I0318 12:55:00.219563       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:55:00.633251       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0318 12:55:00.645045       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0318 12:55:00.645355       1 plugins.go:161] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0318 12:55:00.645726       1 instance.go:298] Using reconciler: lease
	W0318 12:55:20.632420       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0318 12:55:20.633587       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0318 12:55:20.647217       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0318 12:55:20.647232       1 instance.go:291] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [deb997db5453b48f305b99734da3ba8a7fab972de98530d49064bdac432e8a08] <==
	I0318 12:55:44.761213       1 controller.go:78] Starting OpenAPI AggregationController
	I0318 12:55:44.761409       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0318 12:55:44.761598       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0318 12:55:44.761930       1 apf_controller.go:372] Starting API Priority and Fairness config controller
	I0318 12:55:44.766643       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0318 12:55:44.766681       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0318 12:55:44.829618       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0318 12:55:44.858008       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0318 12:55:44.859648       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0318 12:55:44.861996       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0318 12:55:44.862195       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0318 12:55:44.859672       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0318 12:55:44.863456       1 aggregator.go:166] initial CRD sync complete...
	I0318 12:55:44.863529       1 autoregister_controller.go:141] Starting autoregister controller
	I0318 12:55:44.863553       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0318 12:55:44.863577       1 cache.go:39] Caches are synced for autoregister controller
	I0318 12:55:44.859681       1 shared_informer.go:318] Caches are synced for configmaps
	I0318 12:55:44.867176       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0318 12:55:44.889483       1 shared_informer.go:318] Caches are synced for node_authorizer
	W0318 12:55:44.892138       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.241]
	I0318 12:55:44.893823       1 controller.go:624] quota admission added evaluator for: endpoints
	I0318 12:55:44.902791       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0318 12:55:44.906330       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0318 12:55:45.767547       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0318 12:55:46.228928       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.241 192.168.39.246 192.168.39.253]
	
	
	==> kube-controller-manager [7e5b7e3fd47f4bbe14e7d94f794829b1574c8f826780e0c85d2bd0bd0088b1e0] <==
	I0318 12:57:35.407862       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="219.368436ms"
	I0318 12:57:35.546727       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="138.385985ms"
	E0318 12:57:35.547293       1 replica_set.go:557] sync "default/busybox-5b5d89c9d6" failed with Operation cannot be fulfilled on replicasets.apps "busybox-5b5d89c9d6": the object has been modified; please apply your changes to the latest version and try again
	I0318 12:57:35.548023       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="283.492µs"
	I0318 12:57:35.554305       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="135.317µs"
	I0318 12:57:37.355378       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="140.673µs"
	I0318 12:57:38.041675       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="110.808µs"
	I0318 12:57:38.079462       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="123.533µs"
	I0318 12:57:38.088206       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="138.802µs"
	I0318 12:57:39.486178       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="14.586595ms"
	I0318 12:57:39.486422       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="80.669µs"
	I0318 12:57:50.312784       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-328109-m04"
	I0318 12:57:52.992682       1 event.go:307] "Event occurred" object="ha-328109-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node ha-328109-m03 event: Removing Node ha-328109-m03 from Controller"
	E0318 12:57:57.883359       1 gc_controller.go:153] "Failed to get node" err="node \"ha-328109-m03\" not found" node="ha-328109-m03"
	E0318 12:57:57.883419       1 gc_controller.go:153] "Failed to get node" err="node \"ha-328109-m03\" not found" node="ha-328109-m03"
	E0318 12:57:57.883433       1 gc_controller.go:153] "Failed to get node" err="node \"ha-328109-m03\" not found" node="ha-328109-m03"
	E0318 12:57:57.883445       1 gc_controller.go:153] "Failed to get node" err="node \"ha-328109-m03\" not found" node="ha-328109-m03"
	E0318 12:57:57.883454       1 gc_controller.go:153] "Failed to get node" err="node \"ha-328109-m03\" not found" node="ha-328109-m03"
	E0318 12:58:17.884059       1 gc_controller.go:153] "Failed to get node" err="node \"ha-328109-m03\" not found" node="ha-328109-m03"
	E0318 12:58:17.884313       1 gc_controller.go:153] "Failed to get node" err="node \"ha-328109-m03\" not found" node="ha-328109-m03"
	E0318 12:58:17.884343       1 gc_controller.go:153] "Failed to get node" err="node \"ha-328109-m03\" not found" node="ha-328109-m03"
	E0318 12:58:17.884368       1 gc_controller.go:153] "Failed to get node" err="node \"ha-328109-m03\" not found" node="ha-328109-m03"
	E0318 12:58:17.884393       1 gc_controller.go:153] "Failed to get node" err="node \"ha-328109-m03\" not found" node="ha-328109-m03"
	I0318 12:58:27.749513       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="17.722441ms"
	I0318 12:58:27.749643       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="48.807µs"
	
	
	==> kube-controller-manager [beca5c009540cfc74a33264d724cef9b109ee455809e11c09a8c296225794f65] <==
	I0318 12:55:00.574497       1 serving.go:348] Generated self-signed cert in-memory
	I0318 12:55:01.004005       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0318 12:55:01.004056       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:55:01.006179       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0318 12:55:01.006312       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 12:55:01.006575       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 12:55:01.006875       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0318 12:55:21.653718       1 controllermanager.go:235] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.253:8443/healthz\": dial tcp 192.168.39.253:8443: connect: connection refused"
	
	
	==> kube-proxy [a736d02ea6c00f9dba7ff099370fb79b5a0a10daa5881ea70f382f4ef3b8777c] <==
	I0318 12:55:41.834741       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 12:55:41.834832       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 12:55:41.838691       1 server_others.go:152] "Using iptables Proxier"
	I0318 12:55:41.838848       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 12:55:41.839429       1 server.go:846] "Version info" version="v1.28.4"
	I0318 12:55:41.839483       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 12:55:41.841371       1 config.go:188] "Starting service config controller"
	I0318 12:55:41.841457       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 12:55:41.841499       1 config.go:97] "Starting endpoint slice config controller"
	I0318 12:55:41.841537       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 12:55:41.842687       1 config.go:315] "Starting node config controller"
	I0318 12:55:41.842739       1 shared_informer.go:311] Waiting for caches to sync for node config
	E0318 12:55:44.829065       1 event_broadcaster.go:274] Unable to write event: 'Post "https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events": dial tcp 192.168.39.254:8443: connect: no route to host' (may retry after sleeping)
	W0318 12:55:44.829288       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 12:55:44.829412       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 12:55:44.829515       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 12:55:44.829883       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-328109&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 12:55:44.831053       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-328109&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 12:55:44.831189       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	I0318 12:55:45.843220       1 shared_informer.go:318] Caches are synced for node config
	I0318 12:55:45.941761       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 12:55:46.342181       1 shared_informer.go:318] Caches are synced for service config
	W0318 12:58:40.608025       1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.EndpointSlice ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0318 12:58:40.608554       1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	W0318 12:58:40.608563       1 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	
	
	==> kube-proxy [f8d915a384e6a3c259a15968303b0ddc686a9ced49722152813fc101b3c78cc6] <==
	E0318 12:52:06.718516       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 12:52:09.789257       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 12:52:09.789365       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 12:52:09.789452       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1851": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 12:52:09.789593       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1851": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 12:52:09.789458       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-328109&resourceVersion=1884": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 12:52:09.789857       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-328109&resourceVersion=1884": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 12:52:15.933628       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 12:52:15.933739       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 12:52:15.933649       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-328109&resourceVersion=1884": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 12:52:15.933786       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-328109&resourceVersion=1884": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 12:52:15.934211       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1851": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 12:52:15.934427       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1851": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 12:52:25.148914       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-328109&resourceVersion=1884": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 12:52:25.149038       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-328109&resourceVersion=1884": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 12:52:28.221252       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 12:52:28.221598       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 12:52:28.221702       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1851": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 12:52:28.221748       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1851": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 12:52:43.580362       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1851": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 12:52:43.580514       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1851": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 12:52:43.580924       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 12:52:43.580982       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1850": dial tcp 192.168.39.254:8443: connect: no route to host
	W0318 12:52:52.796882       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-328109&resourceVersion=1884": dial tcp 192.168.39.254:8443: connect: no route to host
	E0318 12:52:52.797006       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-328109&resourceVersion=1884": dial tcp 192.168.39.254:8443: connect: no route to host
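	
	Both kube-proxy instances fail against https://control-plane.minikube.internal:8443, which the dial errors above show resolving to the load-balancer address 192.168.39.254. A sketch of probing that endpoint from inside the guest (assuming curl is available in the guest image):
	
	  # hypothetical: confirm the VIP mapping and whether the apiserver VIP answers
	  minikube -p ha-328109 ssh -- grep control-plane.minikube.internal /etc/hosts
	  minikube -p ha-328109 ssh -- curl -sk https://192.168.39.254:8443/healthz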
	
	
	==> kube-scheduler [4506a10fb16676d5321e24ca5eda8224cf5e32e096f326a2342ce49837ab2985] <==
	W0318 12:55:40.547915       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: Get "https://192.168.39.253:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.253:8443: connect: connection refused
	E0318 12:55:40.547947       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.253:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.253:8443: connect: connection refused
	W0318 12:55:40.664960       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.253:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.253:8443: connect: connection refused
	E0318 12:55:40.665001       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.253:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.253:8443: connect: connection refused
	W0318 12:55:41.080587       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.39.253:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.253:8443: connect: connection refused
	E0318 12:55:41.080784       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.253:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.253:8443: connect: connection refused
	W0318 12:55:41.182057       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: Get "https://192.168.39.253:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.253:8443: connect: connection refused
	E0318 12:55:41.182169       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.253:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.253:8443: connect: connection refused
	W0318 12:55:41.873861       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.253:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.253:8443: connect: connection refused
	E0318 12:55:41.873929       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.253:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.253:8443: connect: connection refused
	W0318 12:55:42.056727       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://192.168.39.253:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.253:8443: connect: connection refused
	E0318 12:55:42.056819       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.253:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.253:8443: connect: connection refused
	W0318 12:55:42.147887       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://192.168.39.253:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.253:8443: connect: connection refused
	E0318 12:55:42.147971       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.253:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.253:8443: connect: connection refused
	W0318 12:55:42.253757       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://192.168.39.253:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.253:8443: connect: connection refused
	E0318 12:55:42.253811       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.253:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.253:8443: connect: connection refused
	W0318 12:55:44.790550       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0318 12:55:44.790610       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0318 12:55:44.803779       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0318 12:55:44.803832       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 12:56:05.065741       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0318 12:57:35.263288       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-bqffh\": pod busybox-5b5d89c9d6-bqffh is already assigned to node \"ha-328109-m04\"" plugin="DefaultBinder" pod="default/busybox-5b5d89c9d6-bqffh" node="ha-328109-m04"
	E0318 12:57:35.263787       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod a24589fe-8dd2-437f-b0b5-9e1b6a9e244b(default/busybox-5b5d89c9d6-bqffh) wasn't assumed so cannot be forgotten"
	E0318 12:57:35.264039       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-bqffh\": pod busybox-5b5d89c9d6-bqffh is already assigned to node \"ha-328109-m04\"" pod="default/busybox-5b5d89c9d6-bqffh"
	I0318 12:57:35.264283       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-5b5d89c9d6-bqffh" node="ha-328109-m04"
	
	
	==> kube-scheduler [de552ed42d49524bbca97633e73d6ac4e5301a813a012290635def375a78dcd6] <==
	W0318 12:53:06.872584       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0318 12:53:06.872634       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0318 12:53:07.156392       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0318 12:53:07.157374       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0318 12:53:07.157595       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0318 12:53:07.157630       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0318 12:53:07.177204       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0318 12:53:07.177417       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0318 12:53:07.264977       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0318 12:53:07.265222       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0318 12:53:07.348954       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0318 12:53:07.349173       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0318 12:53:07.399555       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0318 12:53:07.399582       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0318 12:53:07.603911       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0318 12:53:07.603964       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0318 12:53:09.751921       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0318 12:53:09.752006       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0318 12:53:10.282015       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0318 12:53:10.282065       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0318 12:53:10.300475       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0318 12:53:10.300553       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0318 12:53:10.346771       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0318 12:53:10.346873       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0318 12:53:10.347199       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Mar 18 12:55:58 ha-328109 kubelet[1375]: I0318 12:55:58.177930    1375 scope.go:117] "RemoveContainer" containerID="cddb56f3c76f9dc0a6c993034b68480caf7493fa7a13c6edc72f2dc5289ba517"
	Mar 18 12:56:03 ha-328109 kubelet[1375]: I0318 12:56:03.179345    1375 scope.go:117] "RemoveContainer" containerID="1e9ba73af3c0deb5fd36b57c17ac2ffa8d7cf075f05f25095bd3e2b9562928ad"
	Mar 18 12:56:03 ha-328109 kubelet[1375]: E0318 12:56:03.179598    1375 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(90ce7ae6-4ac4-4c14-b2df-1a182f4d8086)\"" pod="kube-system/storage-provisioner" podUID="90ce7ae6-4ac4-4c14-b2df-1a182f4d8086"
	Mar 18 12:56:05 ha-328109 kubelet[1375]: I0318 12:56:05.956570    1375 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-5b5d89c9d6-fz4kl" podStartSLOduration=569.255106299 podCreationTimestamp="2024-03-18 12:46:34 +0000 UTC" firstStartedPulling="2024-03-18 12:46:35.888872208 +0000 UTC m=+194.968085944" lastFinishedPulling="2024-03-18 12:46:38.589829892 +0000 UTC m=+197.669043627" observedRunningTime="2024-03-18 12:46:39.187934303 +0000 UTC m=+198.267148058" watchObservedRunningTime="2024-03-18 12:56:05.956063982 +0000 UTC m=+765.035277736"
	Mar 18 12:56:18 ha-328109 kubelet[1375]: I0318 12:56:18.178579    1375 scope.go:117] "RemoveContainer" containerID="1e9ba73af3c0deb5fd36b57c17ac2ffa8d7cf075f05f25095bd3e2b9562928ad"
	Mar 18 12:56:21 ha-328109 kubelet[1375]: E0318 12:56:21.240267    1375 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 12:56:21 ha-328109 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 12:56:21 ha-328109 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 12:56:21 ha-328109 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 12:56:21 ha-328109 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 12:57:21 ha-328109 kubelet[1375]: E0318 12:57:21.243785    1375 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 12:57:21 ha-328109 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 12:57:21 ha-328109 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 12:57:21 ha-328109 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 12:57:21 ha-328109 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 12:58:21 ha-328109 kubelet[1375]: E0318 12:58:21.241183    1375 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 12:58:21 ha-328109 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 12:58:21 ha-328109 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 12:58:21 ha-328109 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 12:58:21 ha-328109 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 12:59:21 ha-328109 kubelet[1375]: E0318 12:59:21.241509    1375 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 12:59:21 ha-328109 kubelet[1375]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 12:59:21 ha-328109 kubelet[1375]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 12:59:21 ha-328109 kubelet[1375]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 12:59:21 ha-328109 kubelet[1375]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 13:00:12.413780 1133330 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18429-1106816/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-328109 -n ha-328109
helpers_test.go:261: (dbg) Run:  kubectl --context ha-328109 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (142.29s)
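The "bufio.Scanner: token too long" message in the stderr above is the Go standard library's bufio.ErrTooLong: by default a Scanner refuses any single line longer than bufio.MaxScanTokenSize (64 KiB), and a line in lastStart.txt evidently exceeds that limit. A minimal, self-contained Go sketch of that behavior and the usual workaround follows; it is illustrative only and is not the actual minikube logs code.

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	func main() {
		// A single "line" longer than bufio.MaxScanTokenSize (64 KiB) reproduces the error.
		long := strings.Repeat("x", bufio.MaxScanTokenSize+1)

		s := bufio.NewScanner(strings.NewReader(long))
		for s.Scan() {
		}
		fmt.Println(s.Err()) // bufio.Scanner: token too long

		// Workaround: give the Scanner a larger buffer before scanning.
		s = bufio.NewScanner(strings.NewReader(long))
		s.Buffer(make([]byte, 0, 64*1024), 10*1024*1024) // allow tokens up to 10 MiB
		for s.Scan() {
		}
		fmt.Println(s.Err()) // <nil>
	}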

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (306.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-229365
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-229365
E0318 13:16:24.906080 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/functional-377562/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-229365: exit status 82 (2m2.702325946s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-229365-m03"  ...
	* Stopping node "multinode-229365-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-229365" : exit status 82
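Exit status 82 is the code minikube reported above for GUEST_STOP_TIMEOUT ("Unable to stop VM ... current state Running"). A hypothetical Go sketch of how a harness can re-run the stop and distinguish that timeout from other failures is shown below; it reuses only the command and profile name from the log and is not the actual multinode_test.go helper.

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Hypothetical re-run of the failing step; binary and profile name taken from the log above.
		cmd := exec.Command("out/minikube-linux-amd64", "stop", "-p", "multinode-229365")
		err := cmd.Run()

		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 82 {
			// 82 is the exit code minikube used for GUEST_STOP_TIMEOUT in the stderr above.
			fmt.Println("stop timed out; VM still running, collect /tmp/minikube_stop_*.log")
			return
		}
		if err != nil {
			fmt.Println("stop failed:", err)
			return
		}
		fmt.Println("stop succeeded")
	}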
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-229365 --wait=true -v=8 --alsologtostderr
E0318 13:19:13.349511 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/client.crt: no such file or directory
E0318 13:19:27.953098 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/functional-377562/client.crt: no such file or directory
E0318 13:19:30.296792 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-229365 --wait=true -v=8 --alsologtostderr: (3m0.844130874s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-229365
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-229365 -n multinode-229365
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229365 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-229365 logs -n 25: (1.674267054s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-229365 ssh -n                                                                 | multinode-229365 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | multinode-229365-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-229365 cp multinode-229365-m02:/home/docker/cp-test.txt                       | multinode-229365 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile690292982/001/cp-test_multinode-229365-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-229365 ssh -n                                                                 | multinode-229365 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | multinode-229365-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-229365 cp multinode-229365-m02:/home/docker/cp-test.txt                       | multinode-229365 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | multinode-229365:/home/docker/cp-test_multinode-229365-m02_multinode-229365.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-229365 ssh -n                                                                 | multinode-229365 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | multinode-229365-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-229365 ssh -n multinode-229365 sudo cat                                       | multinode-229365 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | /home/docker/cp-test_multinode-229365-m02_multinode-229365.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-229365 cp multinode-229365-m02:/home/docker/cp-test.txt                       | multinode-229365 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | multinode-229365-m03:/home/docker/cp-test_multinode-229365-m02_multinode-229365-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-229365 ssh -n                                                                 | multinode-229365 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | multinode-229365-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-229365 ssh -n multinode-229365-m03 sudo cat                                   | multinode-229365 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | /home/docker/cp-test_multinode-229365-m02_multinode-229365-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-229365 cp testdata/cp-test.txt                                                | multinode-229365 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | multinode-229365-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-229365 ssh -n                                                                 | multinode-229365 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | multinode-229365-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-229365 cp multinode-229365-m03:/home/docker/cp-test.txt                       | multinode-229365 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile690292982/001/cp-test_multinode-229365-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-229365 ssh -n                                                                 | multinode-229365 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | multinode-229365-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-229365 cp multinode-229365-m03:/home/docker/cp-test.txt                       | multinode-229365 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | multinode-229365:/home/docker/cp-test_multinode-229365-m03_multinode-229365.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-229365 ssh -n                                                                 | multinode-229365 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | multinode-229365-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-229365 ssh -n multinode-229365 sudo cat                                       | multinode-229365 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | /home/docker/cp-test_multinode-229365-m03_multinode-229365.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-229365 cp multinode-229365-m03:/home/docker/cp-test.txt                       | multinode-229365 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | multinode-229365-m02:/home/docker/cp-test_multinode-229365-m03_multinode-229365-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-229365 ssh -n                                                                 | multinode-229365 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | multinode-229365-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-229365 ssh -n multinode-229365-m02 sudo cat                                   | multinode-229365 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | /home/docker/cp-test_multinode-229365-m03_multinode-229365-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-229365 node stop m03                                                          | multinode-229365 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	| node    | multinode-229365 node start                                                             | multinode-229365 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-229365                                                                | multinode-229365 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC |                     |
	| stop    | -p multinode-229365                                                                     | multinode-229365 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC |                     |
	| start   | -p multinode-229365                                                                     | multinode-229365 | jenkins | v1.32.0 | 18 Mar 24 13:16 UTC | 18 Mar 24 13:19 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-229365                                                                | multinode-229365 | jenkins | v1.32.0 | 18 Mar 24 13:19 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 13:16:48
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 13:16:48.995649 1141442 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:16:48.995763 1141442 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:16:48.995771 1141442 out.go:304] Setting ErrFile to fd 2...
	I0318 13:16:48.995775 1141442 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:16:48.995962 1141442 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
	I0318 13:16:48.996547 1141442 out.go:298] Setting JSON to false
	I0318 13:16:48.997544 1141442 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":17956,"bootTime":1710749853,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 13:16:48.997612 1141442 start.go:139] virtualization: kvm guest
	I0318 13:16:49.000387 1141442 out.go:177] * [multinode-229365] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 13:16:49.002016 1141442 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 13:16:49.002056 1141442 notify.go:220] Checking for updates...
	I0318 13:16:49.003467 1141442 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:16:49.005122 1141442 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 13:16:49.006459 1141442 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18429-1106816/.minikube
	I0318 13:16:49.007708 1141442 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 13:16:49.008971 1141442 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:16:49.010594 1141442 config.go:182] Loaded profile config "multinode-229365": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:16:49.010741 1141442 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:16:49.011209 1141442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:16:49.011257 1141442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:16:49.026401 1141442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41259
	I0318 13:16:49.026861 1141442 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:16:49.027386 1141442 main.go:141] libmachine: Using API Version  1
	I0318 13:16:49.027406 1141442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:16:49.027794 1141442 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:16:49.028022 1141442 main.go:141] libmachine: (multinode-229365) Calling .DriverName
	I0318 13:16:49.062220 1141442 out.go:177] * Using the kvm2 driver based on existing profile
	I0318 13:16:49.063405 1141442 start.go:297] selected driver: kvm2
	I0318 13:16:49.063418 1141442 start.go:901] validating driver "kvm2" against &{Name:multinode-229365 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.28.4 ClusterName:multinode-229365 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.156 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.29 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.34 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingre
ss-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:16:49.063551 1141442 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:16:49.063898 1141442 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:16:49.063976 1141442 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18429-1106816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 13:16:49.078514 1141442 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 13:16:49.079470 1141442 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:16:49.079557 1141442 cni.go:84] Creating CNI manager for ""
	I0318 13:16:49.079575 1141442 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0318 13:16:49.079647 1141442 start.go:340] cluster config:
	{Name:multinode-229365 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-229365 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.156 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.29 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.34 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false ko
ng:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:16:49.079836 1141442 iso.go:125] acquiring lock: {Name:mke5f9989ad60de6f54f25c411af7da9f3932a4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:16:49.081765 1141442 out.go:177] * Starting "multinode-229365" primary control-plane node in "multinode-229365" cluster
	I0318 13:16:49.082943 1141442 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 13:16:49.082977 1141442 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0318 13:16:49.082984 1141442 cache.go:56] Caching tarball of preloaded images
	I0318 13:16:49.083054 1141442 preload.go:173] Found /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 13:16:49.083065 1141442 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0318 13:16:49.083177 1141442 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/multinode-229365/config.json ...
	I0318 13:16:49.083357 1141442 start.go:360] acquireMachinesLock for multinode-229365: {Name:mk0b1a2e71faf079d0c16c4e1393bdff17be3dfd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:16:49.083404 1141442 start.go:364] duration metric: took 29.291µs to acquireMachinesLock for "multinode-229365"
	I0318 13:16:49.083419 1141442 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:16:49.083427 1141442 fix.go:54] fixHost starting: 
	I0318 13:16:49.083689 1141442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:16:49.083721 1141442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:16:49.097816 1141442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45765
	I0318 13:16:49.098259 1141442 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:16:49.098792 1141442 main.go:141] libmachine: Using API Version  1
	I0318 13:16:49.098811 1141442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:16:49.099147 1141442 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:16:49.099405 1141442 main.go:141] libmachine: (multinode-229365) Calling .DriverName
	I0318 13:16:49.099567 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetState
	I0318 13:16:49.101304 1141442 fix.go:112] recreateIfNeeded on multinode-229365: state=Running err=<nil>
	W0318 13:16:49.101322 1141442 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:16:49.103862 1141442 out.go:177] * Updating the running kvm2 "multinode-229365" VM ...
	I0318 13:16:49.105236 1141442 machine.go:94] provisionDockerMachine start ...
	I0318 13:16:49.105262 1141442 main.go:141] libmachine: (multinode-229365) Calling .DriverName
	I0318 13:16:49.105474 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHHostname
	I0318 13:16:49.107935 1141442 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:16:49.108318 1141442 main.go:141] libmachine: (multinode-229365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:cf:2f", ip: ""} in network mk-multinode-229365: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:32 +0000 UTC Type:0 Mac:52:54:00:f0:cf:2f Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:multinode-229365 Clientid:01:52:54:00:f0:cf:2f}
	I0318 13:16:49.108371 1141442 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined IP address 192.168.39.156 and MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:16:49.108523 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHPort
	I0318 13:16:49.108687 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHKeyPath
	I0318 13:16:49.108836 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHKeyPath
	I0318 13:16:49.108988 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHUsername
	I0318 13:16:49.109152 1141442 main.go:141] libmachine: Using SSH client type: native
	I0318 13:16:49.109348 1141442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I0318 13:16:49.109360 1141442 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 13:16:49.230253 1141442 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-229365
	
	I0318 13:16:49.230284 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetMachineName
	I0318 13:16:49.230543 1141442 buildroot.go:166] provisioning hostname "multinode-229365"
	I0318 13:16:49.230574 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetMachineName
	I0318 13:16:49.230753 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHHostname
	I0318 13:16:49.233213 1141442 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:16:49.233646 1141442 main.go:141] libmachine: (multinode-229365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:cf:2f", ip: ""} in network mk-multinode-229365: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:32 +0000 UTC Type:0 Mac:52:54:00:f0:cf:2f Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:multinode-229365 Clientid:01:52:54:00:f0:cf:2f}
	I0318 13:16:49.233674 1141442 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined IP address 192.168.39.156 and MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:16:49.233832 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHPort
	I0318 13:16:49.234023 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHKeyPath
	I0318 13:16:49.234185 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHKeyPath
	I0318 13:16:49.234340 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHUsername
	I0318 13:16:49.234526 1141442 main.go:141] libmachine: Using SSH client type: native
	I0318 13:16:49.234708 1141442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I0318 13:16:49.234722 1141442 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-229365 && echo "multinode-229365" | sudo tee /etc/hostname
	I0318 13:16:49.368280 1141442 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-229365
	
	I0318 13:16:49.368307 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHHostname
	I0318 13:16:49.371006 1141442 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:16:49.371328 1141442 main.go:141] libmachine: (multinode-229365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:cf:2f", ip: ""} in network mk-multinode-229365: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:32 +0000 UTC Type:0 Mac:52:54:00:f0:cf:2f Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:multinode-229365 Clientid:01:52:54:00:f0:cf:2f}
	I0318 13:16:49.371370 1141442 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined IP address 192.168.39.156 and MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:16:49.371519 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHPort
	I0318 13:16:49.371732 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHKeyPath
	I0318 13:16:49.371916 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHKeyPath
	I0318 13:16:49.372056 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHUsername
	I0318 13:16:49.372214 1141442 main.go:141] libmachine: Using SSH client type: native
	I0318 13:16:49.372415 1141442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I0318 13:16:49.372433 1141442 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-229365' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-229365/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-229365' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 13:16:49.486087 1141442 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:16:49.486120 1141442 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18429-1106816/.minikube CaCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18429-1106816/.minikube}
	I0318 13:16:49.486137 1141442 buildroot.go:174] setting up certificates
	I0318 13:16:49.486147 1141442 provision.go:84] configureAuth start
	I0318 13:16:49.486157 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetMachineName
	I0318 13:16:49.486442 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetIP
	I0318 13:16:49.489153 1141442 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:16:49.489536 1141442 main.go:141] libmachine: (multinode-229365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:cf:2f", ip: ""} in network mk-multinode-229365: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:32 +0000 UTC Type:0 Mac:52:54:00:f0:cf:2f Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:multinode-229365 Clientid:01:52:54:00:f0:cf:2f}
	I0318 13:16:49.489558 1141442 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined IP address 192.168.39.156 and MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:16:49.489680 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHHostname
	I0318 13:16:49.491759 1141442 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:16:49.492122 1141442 main.go:141] libmachine: (multinode-229365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:cf:2f", ip: ""} in network mk-multinode-229365: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:32 +0000 UTC Type:0 Mac:52:54:00:f0:cf:2f Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:multinode-229365 Clientid:01:52:54:00:f0:cf:2f}
	I0318 13:16:49.492157 1141442 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined IP address 192.168.39.156 and MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:16:49.492271 1141442 provision.go:143] copyHostCerts
	I0318 13:16:49.492309 1141442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 13:16:49.492368 1141442 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem, removing ...
	I0318 13:16:49.492381 1141442 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 13:16:49.492453 1141442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem (1078 bytes)
	I0318 13:16:49.492545 1141442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 13:16:49.492574 1141442 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem, removing ...
	I0318 13:16:49.492581 1141442 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 13:16:49.492608 1141442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem (1123 bytes)
	I0318 13:16:49.492664 1141442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 13:16:49.492680 1141442 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem, removing ...
	I0318 13:16:49.492686 1141442 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 13:16:49.492725 1141442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem (1679 bytes)
	I0318 13:16:49.492786 1141442 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem org=jenkins.multinode-229365 san=[127.0.0.1 192.168.39.156 localhost minikube multinode-229365]
	I0318 13:16:49.636859 1141442 provision.go:177] copyRemoteCerts
	I0318 13:16:49.636933 1141442 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 13:16:49.636960 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHHostname
	I0318 13:16:49.639431 1141442 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:16:49.639822 1141442 main.go:141] libmachine: (multinode-229365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:cf:2f", ip: ""} in network mk-multinode-229365: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:32 +0000 UTC Type:0 Mac:52:54:00:f0:cf:2f Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:multinode-229365 Clientid:01:52:54:00:f0:cf:2f}
	I0318 13:16:49.639854 1141442 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined IP address 192.168.39.156 and MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:16:49.639993 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHPort
	I0318 13:16:49.640176 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHKeyPath
	I0318 13:16:49.640346 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHUsername
	I0318 13:16:49.640526 1141442 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/multinode-229365/id_rsa Username:docker}
	I0318 13:16:49.727540 1141442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0318 13:16:49.727622 1141442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 13:16:49.757017 1141442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0318 13:16:49.757083 1141442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0318 13:16:49.785812 1141442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0318 13:16:49.785889 1141442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 13:16:49.826057 1141442 provision.go:87] duration metric: took 339.898381ms to configureAuth
	I0318 13:16:49.826088 1141442 buildroot.go:189] setting minikube options for container-runtime
	I0318 13:16:49.826345 1141442 config.go:182] Loaded profile config "multinode-229365": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:16:49.826468 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHHostname
	I0318 13:16:49.829111 1141442 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:16:49.829483 1141442 main.go:141] libmachine: (multinode-229365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:cf:2f", ip: ""} in network mk-multinode-229365: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:32 +0000 UTC Type:0 Mac:52:54:00:f0:cf:2f Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:multinode-229365 Clientid:01:52:54:00:f0:cf:2f}
	I0318 13:16:49.829515 1141442 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined IP address 192.168.39.156 and MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:16:49.829676 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHPort
	I0318 13:16:49.829856 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHKeyPath
	I0318 13:16:49.830034 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHKeyPath
	I0318 13:16:49.830156 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHUsername
	I0318 13:16:49.830295 1141442 main.go:141] libmachine: Using SSH client type: native
	I0318 13:16:49.830508 1141442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I0318 13:16:49.830532 1141442 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 13:18:20.795134 1141442 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 13:18:20.795167 1141442 machine.go:97] duration metric: took 1m31.689914979s to provisionDockerMachine
	I0318 13:18:20.795184 1141442 start.go:293] postStartSetup for "multinode-229365" (driver="kvm2")
	I0318 13:18:20.795201 1141442 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 13:18:20.795227 1141442 main.go:141] libmachine: (multinode-229365) Calling .DriverName
	I0318 13:18:20.795643 1141442 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 13:18:20.795686 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHHostname
	I0318 13:18:20.799253 1141442 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:18:20.799607 1141442 main.go:141] libmachine: (multinode-229365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:cf:2f", ip: ""} in network mk-multinode-229365: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:32 +0000 UTC Type:0 Mac:52:54:00:f0:cf:2f Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:multinode-229365 Clientid:01:52:54:00:f0:cf:2f}
	I0318 13:18:20.799641 1141442 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined IP address 192.168.39.156 and MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:18:20.799825 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHPort
	I0318 13:18:20.800004 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHKeyPath
	I0318 13:18:20.800154 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHUsername
	I0318 13:18:20.800274 1141442 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/multinode-229365/id_rsa Username:docker}
	I0318 13:18:20.890032 1141442 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 13:18:20.894791 1141442 command_runner.go:130] > NAME=Buildroot
	I0318 13:18:20.894806 1141442 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0318 13:18:20.894810 1141442 command_runner.go:130] > ID=buildroot
	I0318 13:18:20.894815 1141442 command_runner.go:130] > VERSION_ID=2023.02.9
	I0318 13:18:20.894819 1141442 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0318 13:18:20.894858 1141442 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 13:18:20.894868 1141442 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/addons for local assets ...
	I0318 13:18:20.894923 1141442 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/files for local assets ...
	I0318 13:18:20.895012 1141442 filesync.go:149] local asset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> 11141362.pem in /etc/ssl/certs
	I0318 13:18:20.895027 1141442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> /etc/ssl/certs/11141362.pem
	I0318 13:18:20.895107 1141442 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 13:18:20.906001 1141442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:18:20.942130 1141442 start.go:296] duration metric: took 146.927864ms for postStartSetup
	I0318 13:18:20.942229 1141442 fix.go:56] duration metric: took 1m31.858795919s for fixHost
	I0318 13:18:20.942264 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHHostname
	I0318 13:18:20.945363 1141442 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:18:20.945810 1141442 main.go:141] libmachine: (multinode-229365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:cf:2f", ip: ""} in network mk-multinode-229365: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:32 +0000 UTC Type:0 Mac:52:54:00:f0:cf:2f Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:multinode-229365 Clientid:01:52:54:00:f0:cf:2f}
	I0318 13:18:20.945844 1141442 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined IP address 192.168.39.156 and MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:18:20.945999 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHPort
	I0318 13:18:20.946219 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHKeyPath
	I0318 13:18:20.946403 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHKeyPath
	I0318 13:18:20.946552 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHUsername
	I0318 13:18:20.946721 1141442 main.go:141] libmachine: Using SSH client type: native
	I0318 13:18:20.946906 1141442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I0318 13:18:20.946917 1141442 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 13:18:21.065411 1141442 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710767901.042165387
	
	I0318 13:18:21.065436 1141442 fix.go:216] guest clock: 1710767901.042165387
	I0318 13:18:21.065444 1141442 fix.go:229] Guest: 2024-03-18 13:18:21.042165387 +0000 UTC Remote: 2024-03-18 13:18:20.942240728 +0000 UTC m=+91.997198087 (delta=99.924659ms)
	I0318 13:18:21.065478 1141442 fix.go:200] guest clock delta is within tolerance: 99.924659ms
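The delta reported above is simply the difference between the guest wall clock read over SSH (`date +%s.%N`, whose format string is mangled to `%!s(MISSING).%!N(MISSING)` in the log) and the host-side timestamp of this step: 1710767901.042165387 − 1710767900.942240728 ≈ 0.099924659 s, i.e. the 99.924659ms shown, which is inside minikube's drift tolerance, so the guest clock is left as-is.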
	I0318 13:18:21.065486 1141442 start.go:83] releasing machines lock for "multinode-229365", held for 1m31.982073828s
	I0318 13:18:21.065508 1141442 main.go:141] libmachine: (multinode-229365) Calling .DriverName
	I0318 13:18:21.065795 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetIP
	I0318 13:18:21.068395 1141442 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:18:21.068780 1141442 main.go:141] libmachine: (multinode-229365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:cf:2f", ip: ""} in network mk-multinode-229365: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:32 +0000 UTC Type:0 Mac:52:54:00:f0:cf:2f Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:multinode-229365 Clientid:01:52:54:00:f0:cf:2f}
	I0318 13:18:21.068803 1141442 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined IP address 192.168.39.156 and MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:18:21.069024 1141442 main.go:141] libmachine: (multinode-229365) Calling .DriverName
	I0318 13:18:21.069614 1141442 main.go:141] libmachine: (multinode-229365) Calling .DriverName
	I0318 13:18:21.069811 1141442 main.go:141] libmachine: (multinode-229365) Calling .DriverName
	I0318 13:18:21.069933 1141442 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 13:18:21.069989 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHHostname
	I0318 13:18:21.070012 1141442 ssh_runner.go:195] Run: cat /version.json
	I0318 13:18:21.070034 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHHostname
	I0318 13:18:21.072498 1141442 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:18:21.072704 1141442 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:18:21.072869 1141442 main.go:141] libmachine: (multinode-229365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:cf:2f", ip: ""} in network mk-multinode-229365: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:32 +0000 UTC Type:0 Mac:52:54:00:f0:cf:2f Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:multinode-229365 Clientid:01:52:54:00:f0:cf:2f}
	I0318 13:18:21.072908 1141442 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined IP address 192.168.39.156 and MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:18:21.073059 1141442 main.go:141] libmachine: (multinode-229365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:cf:2f", ip: ""} in network mk-multinode-229365: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:32 +0000 UTC Type:0 Mac:52:54:00:f0:cf:2f Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:multinode-229365 Clientid:01:52:54:00:f0:cf:2f}
	I0318 13:18:21.073077 1141442 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined IP address 192.168.39.156 and MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:18:21.073093 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHPort
	I0318 13:18:21.073282 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHKeyPath
	I0318 13:18:21.073284 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHPort
	I0318 13:18:21.073499 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHUsername
	I0318 13:18:21.073503 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHKeyPath
	I0318 13:18:21.073674 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHUsername
	I0318 13:18:21.073671 1141442 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/multinode-229365/id_rsa Username:docker}
	I0318 13:18:21.073822 1141442 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/multinode-229365/id_rsa Username:docker}
	I0318 13:18:21.154411 1141442 command_runner.go:130] > {"iso_version": "v1.32.1-1710520390-17991", "kicbase_version": "v0.0.42-1710284843-18375", "minikube_version": "v1.32.0", "commit": "3dd306d082737a9ddf335108b42c9fcb2ad84298"}
	I0318 13:18:21.154669 1141442 ssh_runner.go:195] Run: systemctl --version
	I0318 13:18:21.177544 1141442 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0318 13:18:21.177584 1141442 command_runner.go:130] > systemd 252 (252)
	I0318 13:18:21.177620 1141442 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0318 13:18:21.177682 1141442 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 13:18:21.345230 1141442 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0318 13:18:21.354111 1141442 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0318 13:18:21.354531 1141442 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 13:18:21.354578 1141442 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 13:18:21.364920 1141442 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
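The `find` invocation above is logged as a raw argv (its `-printf` format is mangled to `%!p(MISSING)`); with shell quoting restored it reads roughly as follows:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;

It renames any bridge/podman CNI config out of the way by appending `.mk_disabled`; here nothing matched, hence the "nothing to disable" message.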
	I0318 13:18:21.364938 1141442 start.go:494] detecting cgroup driver to use...
	I0318 13:18:21.365009 1141442 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 13:18:21.381721 1141442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:18:21.397132 1141442 docker.go:217] disabling cri-docker service (if available) ...
	I0318 13:18:21.397176 1141442 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 13:18:21.413016 1141442 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 13:18:21.428564 1141442 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 13:18:21.621619 1141442 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 13:18:21.768005 1141442 docker.go:233] disabling docker service ...
	I0318 13:18:21.768076 1141442 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 13:18:21.785019 1141442 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 13:18:21.799512 1141442 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 13:18:21.947888 1141442 ssh_runner.go:195] Run: sudo systemctl mask docker.service
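Flattened out of the individual ssh_runner calls, the container-runtime shutdown above is the usual stop → disable socket → mask sequence, applied to cri-docker and then to docker (containerd was stopped just before):

    sudo systemctl stop -f containerd
    sudo systemctl stop -f cri-docker.socket
    sudo systemctl stop -f cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket
    sudo systemctl stop -f docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service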
	I0318 13:18:22.095600 1141442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 13:18:22.110755 1141442 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:18:22.133051 1141442 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0318 13:18:22.133103 1141442 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 13:18:22.133163 1141442 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:18:22.145241 1141442 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 13:18:22.145302 1141442 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:18:22.156819 1141442 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:18:22.168163 1141442 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:18:22.179356 1141442 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 13:18:22.190845 1141442 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 13:18:22.200812 1141442 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0318 13:18:22.201045 1141442 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 13:18:22.211270 1141442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:18:22.358796 1141442 ssh_runner.go:195] Run: sudo systemctl restart crio
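Stripped of the ssh_runner wrappers, the CRI-O configuration writes logged between 13:18:22.110 and 13:18:22.358 above are the following commands, copied from the log (quoting tidied; the first `printf`'s `%s` placeholder is again logged as `%!s(MISSING)`):

    sudo mkdir -p /etc && printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    sudo rm -rf /etc/cni/net.mk
    sudo sysctl net.bridge.bridge-nf-call-iptables      # read-only check; already 1
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
    sudo systemctl daemon-reload
    sudo systemctl restart crio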
	I0318 13:18:22.631120 1141442 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 13:18:22.631186 1141442 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 13:18:22.636745 1141442 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0318 13:18:22.636766 1141442 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0318 13:18:22.636773 1141442 command_runner.go:130] > Device: 0,22	Inode: 1329        Links: 1
	I0318 13:18:22.636779 1141442 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0318 13:18:22.636784 1141442 command_runner.go:130] > Access: 2024-03-18 13:18:22.496511143 +0000
	I0318 13:18:22.636790 1141442 command_runner.go:130] > Modify: 2024-03-18 13:18:22.496511143 +0000
	I0318 13:18:22.636805 1141442 command_runner.go:130] > Change: 2024-03-18 13:18:22.496511143 +0000
	I0318 13:18:22.636819 1141442 command_runner.go:130] >  Birth: -
	I0318 13:18:22.637137 1141442 start.go:562] Will wait 60s for crictl version
	I0318 13:18:22.637188 1141442 ssh_runner.go:195] Run: which crictl
	I0318 13:18:22.641508 1141442 command_runner.go:130] > /usr/bin/crictl
	I0318 13:18:22.641646 1141442 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 13:18:22.682085 1141442 command_runner.go:130] > Version:  0.1.0
	I0318 13:18:22.682114 1141442 command_runner.go:130] > RuntimeName:  cri-o
	I0318 13:18:22.682121 1141442 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0318 13:18:22.682130 1141442 command_runner.go:130] > RuntimeApiVersion:  v1
	I0318 13:18:22.683255 1141442 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 13:18:22.683337 1141442 ssh_runner.go:195] Run: crio --version
	I0318 13:18:22.713325 1141442 command_runner.go:130] > crio version 1.29.1
	I0318 13:18:22.713347 1141442 command_runner.go:130] > Version:        1.29.1
	I0318 13:18:22.713355 1141442 command_runner.go:130] > GitCommit:      unknown
	I0318 13:18:22.713362 1141442 command_runner.go:130] > GitCommitDate:  unknown
	I0318 13:18:22.713374 1141442 command_runner.go:130] > GitTreeState:   clean
	I0318 13:18:22.713383 1141442 command_runner.go:130] > BuildDate:      2024-03-15T21:54:37Z
	I0318 13:18:22.713388 1141442 command_runner.go:130] > GoVersion:      go1.21.6
	I0318 13:18:22.713394 1141442 command_runner.go:130] > Compiler:       gc
	I0318 13:18:22.713400 1141442 command_runner.go:130] > Platform:       linux/amd64
	I0318 13:18:22.713411 1141442 command_runner.go:130] > Linkmode:       dynamic
	I0318 13:18:22.713420 1141442 command_runner.go:130] > BuildTags:      
	I0318 13:18:22.713429 1141442 command_runner.go:130] >   containers_image_ostree_stub
	I0318 13:18:22.713439 1141442 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0318 13:18:22.713449 1141442 command_runner.go:130] >   btrfs_noversion
	I0318 13:18:22.713460 1141442 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0318 13:18:22.713469 1141442 command_runner.go:130] >   libdm_no_deferred_remove
	I0318 13:18:22.713477 1141442 command_runner.go:130] >   seccomp
	I0318 13:18:22.713486 1141442 command_runner.go:130] > LDFlags:          unknown
	I0318 13:18:22.713496 1141442 command_runner.go:130] > SeccompEnabled:   true
	I0318 13:18:22.713504 1141442 command_runner.go:130] > AppArmorEnabled:  false
	I0318 13:18:22.713589 1141442 ssh_runner.go:195] Run: crio --version
	I0318 13:18:22.744042 1141442 command_runner.go:130] > crio version 1.29.1
	I0318 13:18:22.744067 1141442 command_runner.go:130] > Version:        1.29.1
	I0318 13:18:22.744076 1141442 command_runner.go:130] > GitCommit:      unknown
	I0318 13:18:22.744083 1141442 command_runner.go:130] > GitCommitDate:  unknown
	I0318 13:18:22.744088 1141442 command_runner.go:130] > GitTreeState:   clean
	I0318 13:18:22.744097 1141442 command_runner.go:130] > BuildDate:      2024-03-15T21:54:37Z
	I0318 13:18:22.744102 1141442 command_runner.go:130] > GoVersion:      go1.21.6
	I0318 13:18:22.744108 1141442 command_runner.go:130] > Compiler:       gc
	I0318 13:18:22.744115 1141442 command_runner.go:130] > Platform:       linux/amd64
	I0318 13:18:22.744122 1141442 command_runner.go:130] > Linkmode:       dynamic
	I0318 13:18:22.744133 1141442 command_runner.go:130] > BuildTags:      
	I0318 13:18:22.744141 1141442 command_runner.go:130] >   containers_image_ostree_stub
	I0318 13:18:22.744150 1141442 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0318 13:18:22.744175 1141442 command_runner.go:130] >   btrfs_noversion
	I0318 13:18:22.744187 1141442 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0318 13:18:22.744194 1141442 command_runner.go:130] >   libdm_no_deferred_remove
	I0318 13:18:22.744201 1141442 command_runner.go:130] >   seccomp
	I0318 13:18:22.744212 1141442 command_runner.go:130] > LDFlags:          unknown
	I0318 13:18:22.744221 1141442 command_runner.go:130] > SeccompEnabled:   true
	I0318 13:18:22.744229 1141442 command_runner.go:130] > AppArmorEnabled:  false
	I0318 13:18:22.747434 1141442 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 13:18:22.749136 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetIP
	I0318 13:18:22.751727 1141442 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:18:22.752109 1141442 main.go:141] libmachine: (multinode-229365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:cf:2f", ip: ""} in network mk-multinode-229365: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:32 +0000 UTC Type:0 Mac:52:54:00:f0:cf:2f Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:multinode-229365 Clientid:01:52:54:00:f0:cf:2f}
	I0318 13:18:22.752142 1141442 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined IP address 192.168.39.156 and MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:18:22.752297 1141442 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0318 13:18:22.757295 1141442 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0318 13:18:22.757477 1141442 kubeadm.go:877] updating cluster {Name:multinode-229365 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-229365 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.156 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.29 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.34 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 13:18:22.757667 1141442 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 13:18:22.757744 1141442 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:18:22.816488 1141442 command_runner.go:130] > {
	I0318 13:18:22.816513 1141442 command_runner.go:130] >   "images": [
	I0318 13:18:22.816517 1141442 command_runner.go:130] >     {
	I0318 13:18:22.816534 1141442 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0318 13:18:22.816542 1141442 command_runner.go:130] >       "repoTags": [
	I0318 13:18:22.816555 1141442 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0318 13:18:22.816560 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.816567 1141442 command_runner.go:130] >       "repoDigests": [
	I0318 13:18:22.816580 1141442 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0318 13:18:22.816593 1141442 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0318 13:18:22.816597 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.816601 1141442 command_runner.go:130] >       "size": "65258016",
	I0318 13:18:22.816611 1141442 command_runner.go:130] >       "uid": null,
	I0318 13:18:22.816618 1141442 command_runner.go:130] >       "username": "",
	I0318 13:18:22.816629 1141442 command_runner.go:130] >       "spec": null,
	I0318 13:18:22.816640 1141442 command_runner.go:130] >       "pinned": false
	I0318 13:18:22.816648 1141442 command_runner.go:130] >     },
	I0318 13:18:22.816653 1141442 command_runner.go:130] >     {
	I0318 13:18:22.816663 1141442 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0318 13:18:22.816667 1141442 command_runner.go:130] >       "repoTags": [
	I0318 13:18:22.816672 1141442 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0318 13:18:22.816678 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.816682 1141442 command_runner.go:130] >       "repoDigests": [
	I0318 13:18:22.816691 1141442 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0318 13:18:22.816701 1141442 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0318 13:18:22.816710 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.816717 1141442 command_runner.go:130] >       "size": "65291810",
	I0318 13:18:22.816726 1141442 command_runner.go:130] >       "uid": null,
	I0318 13:18:22.816738 1141442 command_runner.go:130] >       "username": "",
	I0318 13:18:22.816748 1141442 command_runner.go:130] >       "spec": null,
	I0318 13:18:22.816754 1141442 command_runner.go:130] >       "pinned": false
	I0318 13:18:22.816763 1141442 command_runner.go:130] >     },
	I0318 13:18:22.816769 1141442 command_runner.go:130] >     {
	I0318 13:18:22.816785 1141442 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0318 13:18:22.816792 1141442 command_runner.go:130] >       "repoTags": [
	I0318 13:18:22.816797 1141442 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0318 13:18:22.816803 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.816812 1141442 command_runner.go:130] >       "repoDigests": [
	I0318 13:18:22.816821 1141442 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0318 13:18:22.816831 1141442 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0318 13:18:22.816837 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.816841 1141442 command_runner.go:130] >       "size": "1363676",
	I0318 13:18:22.816845 1141442 command_runner.go:130] >       "uid": null,
	I0318 13:18:22.816851 1141442 command_runner.go:130] >       "username": "",
	I0318 13:18:22.816854 1141442 command_runner.go:130] >       "spec": null,
	I0318 13:18:22.816859 1141442 command_runner.go:130] >       "pinned": false
	I0318 13:18:22.816868 1141442 command_runner.go:130] >     },
	I0318 13:18:22.816873 1141442 command_runner.go:130] >     {
	I0318 13:18:22.816885 1141442 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0318 13:18:22.816895 1141442 command_runner.go:130] >       "repoTags": [
	I0318 13:18:22.816904 1141442 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0318 13:18:22.816913 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.816920 1141442 command_runner.go:130] >       "repoDigests": [
	I0318 13:18:22.816936 1141442 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0318 13:18:22.816957 1141442 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0318 13:18:22.816976 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.816980 1141442 command_runner.go:130] >       "size": "31470524",
	I0318 13:18:22.816984 1141442 command_runner.go:130] >       "uid": null,
	I0318 13:18:22.816988 1141442 command_runner.go:130] >       "username": "",
	I0318 13:18:22.816991 1141442 command_runner.go:130] >       "spec": null,
	I0318 13:18:22.816995 1141442 command_runner.go:130] >       "pinned": false
	I0318 13:18:22.816998 1141442 command_runner.go:130] >     },
	I0318 13:18:22.817001 1141442 command_runner.go:130] >     {
	I0318 13:18:22.817007 1141442 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0318 13:18:22.817011 1141442 command_runner.go:130] >       "repoTags": [
	I0318 13:18:22.817015 1141442 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0318 13:18:22.817018 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.817022 1141442 command_runner.go:130] >       "repoDigests": [
	I0318 13:18:22.817029 1141442 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0318 13:18:22.817043 1141442 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0318 13:18:22.817049 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.817053 1141442 command_runner.go:130] >       "size": "53621675",
	I0318 13:18:22.817056 1141442 command_runner.go:130] >       "uid": null,
	I0318 13:18:22.817060 1141442 command_runner.go:130] >       "username": "",
	I0318 13:18:22.817064 1141442 command_runner.go:130] >       "spec": null,
	I0318 13:18:22.817068 1141442 command_runner.go:130] >       "pinned": false
	I0318 13:18:22.817074 1141442 command_runner.go:130] >     },
	I0318 13:18:22.817080 1141442 command_runner.go:130] >     {
	I0318 13:18:22.817088 1141442 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0318 13:18:22.817093 1141442 command_runner.go:130] >       "repoTags": [
	I0318 13:18:22.817097 1141442 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0318 13:18:22.817101 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.817105 1141442 command_runner.go:130] >       "repoDigests": [
	I0318 13:18:22.817113 1141442 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0318 13:18:22.817120 1141442 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0318 13:18:22.817125 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.817129 1141442 command_runner.go:130] >       "size": "295456551",
	I0318 13:18:22.817135 1141442 command_runner.go:130] >       "uid": {
	I0318 13:18:22.817139 1141442 command_runner.go:130] >         "value": "0"
	I0318 13:18:22.817145 1141442 command_runner.go:130] >       },
	I0318 13:18:22.817149 1141442 command_runner.go:130] >       "username": "",
	I0318 13:18:22.817153 1141442 command_runner.go:130] >       "spec": null,
	I0318 13:18:22.817157 1141442 command_runner.go:130] >       "pinned": false
	I0318 13:18:22.817160 1141442 command_runner.go:130] >     },
	I0318 13:18:22.817163 1141442 command_runner.go:130] >     {
	I0318 13:18:22.817169 1141442 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0318 13:18:22.817175 1141442 command_runner.go:130] >       "repoTags": [
	I0318 13:18:22.817180 1141442 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0318 13:18:22.817183 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.817187 1141442 command_runner.go:130] >       "repoDigests": [
	I0318 13:18:22.817201 1141442 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0318 13:18:22.817210 1141442 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0318 13:18:22.817214 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.817218 1141442 command_runner.go:130] >       "size": "127226832",
	I0318 13:18:22.817221 1141442 command_runner.go:130] >       "uid": {
	I0318 13:18:22.817229 1141442 command_runner.go:130] >         "value": "0"
	I0318 13:18:22.817235 1141442 command_runner.go:130] >       },
	I0318 13:18:22.817239 1141442 command_runner.go:130] >       "username": "",
	I0318 13:18:22.817243 1141442 command_runner.go:130] >       "spec": null,
	I0318 13:18:22.817246 1141442 command_runner.go:130] >       "pinned": false
	I0318 13:18:22.817249 1141442 command_runner.go:130] >     },
	I0318 13:18:22.817252 1141442 command_runner.go:130] >     {
	I0318 13:18:22.817258 1141442 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0318 13:18:22.817265 1141442 command_runner.go:130] >       "repoTags": [
	I0318 13:18:22.817270 1141442 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0318 13:18:22.817277 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.817281 1141442 command_runner.go:130] >       "repoDigests": [
	I0318 13:18:22.817305 1141442 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0318 13:18:22.817316 1141442 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0318 13:18:22.817319 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.817323 1141442 command_runner.go:130] >       "size": "123261750",
	I0318 13:18:22.817329 1141442 command_runner.go:130] >       "uid": {
	I0318 13:18:22.817333 1141442 command_runner.go:130] >         "value": "0"
	I0318 13:18:22.817337 1141442 command_runner.go:130] >       },
	I0318 13:18:22.817341 1141442 command_runner.go:130] >       "username": "",
	I0318 13:18:22.817344 1141442 command_runner.go:130] >       "spec": null,
	I0318 13:18:22.817348 1141442 command_runner.go:130] >       "pinned": false
	I0318 13:18:22.817351 1141442 command_runner.go:130] >     },
	I0318 13:18:22.817354 1141442 command_runner.go:130] >     {
	I0318 13:18:22.817360 1141442 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0318 13:18:22.817365 1141442 command_runner.go:130] >       "repoTags": [
	I0318 13:18:22.817370 1141442 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0318 13:18:22.817374 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.817377 1141442 command_runner.go:130] >       "repoDigests": [
	I0318 13:18:22.817384 1141442 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0318 13:18:22.817391 1141442 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0318 13:18:22.817394 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.817398 1141442 command_runner.go:130] >       "size": "74749335",
	I0318 13:18:22.817401 1141442 command_runner.go:130] >       "uid": null,
	I0318 13:18:22.817405 1141442 command_runner.go:130] >       "username": "",
	I0318 13:18:22.817408 1141442 command_runner.go:130] >       "spec": null,
	I0318 13:18:22.817417 1141442 command_runner.go:130] >       "pinned": false
	I0318 13:18:22.817420 1141442 command_runner.go:130] >     },
	I0318 13:18:22.817423 1141442 command_runner.go:130] >     {
	I0318 13:18:22.817428 1141442 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0318 13:18:22.817432 1141442 command_runner.go:130] >       "repoTags": [
	I0318 13:18:22.817437 1141442 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0318 13:18:22.817440 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.817444 1141442 command_runner.go:130] >       "repoDigests": [
	I0318 13:18:22.817450 1141442 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0318 13:18:22.817457 1141442 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0318 13:18:22.817462 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.817466 1141442 command_runner.go:130] >       "size": "61551410",
	I0318 13:18:22.817470 1141442 command_runner.go:130] >       "uid": {
	I0318 13:18:22.817475 1141442 command_runner.go:130] >         "value": "0"
	I0318 13:18:22.817478 1141442 command_runner.go:130] >       },
	I0318 13:18:22.817482 1141442 command_runner.go:130] >       "username": "",
	I0318 13:18:22.817488 1141442 command_runner.go:130] >       "spec": null,
	I0318 13:18:22.817492 1141442 command_runner.go:130] >       "pinned": false
	I0318 13:18:22.817498 1141442 command_runner.go:130] >     },
	I0318 13:18:22.817501 1141442 command_runner.go:130] >     {
	I0318 13:18:22.817507 1141442 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0318 13:18:22.817513 1141442 command_runner.go:130] >       "repoTags": [
	I0318 13:18:22.817517 1141442 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0318 13:18:22.817520 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.817524 1141442 command_runner.go:130] >       "repoDigests": [
	I0318 13:18:22.817531 1141442 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0318 13:18:22.817538 1141442 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0318 13:18:22.817543 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.817547 1141442 command_runner.go:130] >       "size": "750414",
	I0318 13:18:22.817553 1141442 command_runner.go:130] >       "uid": {
	I0318 13:18:22.817557 1141442 command_runner.go:130] >         "value": "65535"
	I0318 13:18:22.817560 1141442 command_runner.go:130] >       },
	I0318 13:18:22.817564 1141442 command_runner.go:130] >       "username": "",
	I0318 13:18:22.817568 1141442 command_runner.go:130] >       "spec": null,
	I0318 13:18:22.817577 1141442 command_runner.go:130] >       "pinned": true
	I0318 13:18:22.817582 1141442 command_runner.go:130] >     }
	I0318 13:18:22.817590 1141442 command_runner.go:130] >   ]
	I0318 13:18:22.817596 1141442 command_runner.go:130] > }
	I0318 13:18:22.817794 1141442 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 13:18:22.817807 1141442 crio.go:415] Images already preloaded, skipping extraction
	I0318 13:18:22.817854 1141442 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:18:22.858020 1141442 command_runner.go:130] > {
	I0318 13:18:22.858044 1141442 command_runner.go:130] >   "images": [
	I0318 13:18:22.858048 1141442 command_runner.go:130] >     {
	I0318 13:18:22.858058 1141442 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0318 13:18:22.858063 1141442 command_runner.go:130] >       "repoTags": [
	I0318 13:18:22.858068 1141442 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0318 13:18:22.858072 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.858076 1141442 command_runner.go:130] >       "repoDigests": [
	I0318 13:18:22.858089 1141442 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0318 13:18:22.858112 1141442 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0318 13:18:22.858124 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.858130 1141442 command_runner.go:130] >       "size": "65258016",
	I0318 13:18:22.858135 1141442 command_runner.go:130] >       "uid": null,
	I0318 13:18:22.858142 1141442 command_runner.go:130] >       "username": "",
	I0318 13:18:22.858150 1141442 command_runner.go:130] >       "spec": null,
	I0318 13:18:22.858157 1141442 command_runner.go:130] >       "pinned": false
	I0318 13:18:22.858161 1141442 command_runner.go:130] >     },
	I0318 13:18:22.858167 1141442 command_runner.go:130] >     {
	I0318 13:18:22.858176 1141442 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0318 13:18:22.858185 1141442 command_runner.go:130] >       "repoTags": [
	I0318 13:18:22.858193 1141442 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0318 13:18:22.858202 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.858209 1141442 command_runner.go:130] >       "repoDigests": [
	I0318 13:18:22.858223 1141442 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0318 13:18:22.858232 1141442 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0318 13:18:22.858236 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.858240 1141442 command_runner.go:130] >       "size": "65291810",
	I0318 13:18:22.858248 1141442 command_runner.go:130] >       "uid": null,
	I0318 13:18:22.858262 1141442 command_runner.go:130] >       "username": "",
	I0318 13:18:22.858279 1141442 command_runner.go:130] >       "spec": null,
	I0318 13:18:22.858288 1141442 command_runner.go:130] >       "pinned": false
	I0318 13:18:22.858294 1141442 command_runner.go:130] >     },
	I0318 13:18:22.858301 1141442 command_runner.go:130] >     {
	I0318 13:18:22.858312 1141442 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0318 13:18:22.858322 1141442 command_runner.go:130] >       "repoTags": [
	I0318 13:18:22.858330 1141442 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0318 13:18:22.858338 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.858343 1141442 command_runner.go:130] >       "repoDigests": [
	I0318 13:18:22.858365 1141442 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0318 13:18:22.858380 1141442 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0318 13:18:22.858386 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.858397 1141442 command_runner.go:130] >       "size": "1363676",
	I0318 13:18:22.858404 1141442 command_runner.go:130] >       "uid": null,
	I0318 13:18:22.858414 1141442 command_runner.go:130] >       "username": "",
	I0318 13:18:22.858424 1141442 command_runner.go:130] >       "spec": null,
	I0318 13:18:22.858434 1141442 command_runner.go:130] >       "pinned": false
	I0318 13:18:22.858442 1141442 command_runner.go:130] >     },
	I0318 13:18:22.858448 1141442 command_runner.go:130] >     {
	I0318 13:18:22.858454 1141442 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0318 13:18:22.858464 1141442 command_runner.go:130] >       "repoTags": [
	I0318 13:18:22.858473 1141442 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0318 13:18:22.858482 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.858489 1141442 command_runner.go:130] >       "repoDigests": [
	I0318 13:18:22.858504 1141442 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0318 13:18:22.858529 1141442 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0318 13:18:22.858541 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.858548 1141442 command_runner.go:130] >       "size": "31470524",
	I0318 13:18:22.858554 1141442 command_runner.go:130] >       "uid": null,
	I0318 13:18:22.858560 1141442 command_runner.go:130] >       "username": "",
	I0318 13:18:22.858566 1141442 command_runner.go:130] >       "spec": null,
	I0318 13:18:22.858576 1141442 command_runner.go:130] >       "pinned": false
	I0318 13:18:22.858582 1141442 command_runner.go:130] >     },
	I0318 13:18:22.858591 1141442 command_runner.go:130] >     {
	I0318 13:18:22.858601 1141442 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0318 13:18:22.858610 1141442 command_runner.go:130] >       "repoTags": [
	I0318 13:18:22.858625 1141442 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0318 13:18:22.858634 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.858640 1141442 command_runner.go:130] >       "repoDigests": [
	I0318 13:18:22.858651 1141442 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0318 13:18:22.858666 1141442 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0318 13:18:22.858675 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.858682 1141442 command_runner.go:130] >       "size": "53621675",
	I0318 13:18:22.858693 1141442 command_runner.go:130] >       "uid": null,
	I0318 13:18:22.858702 1141442 command_runner.go:130] >       "username": "",
	I0318 13:18:22.858710 1141442 command_runner.go:130] >       "spec": null,
	I0318 13:18:22.858719 1141442 command_runner.go:130] >       "pinned": false
	I0318 13:18:22.858727 1141442 command_runner.go:130] >     },
	I0318 13:18:22.858733 1141442 command_runner.go:130] >     {
	I0318 13:18:22.858746 1141442 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0318 13:18:22.858753 1141442 command_runner.go:130] >       "repoTags": [
	I0318 13:18:22.858759 1141442 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0318 13:18:22.858768 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.858775 1141442 command_runner.go:130] >       "repoDigests": [
	I0318 13:18:22.858789 1141442 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0318 13:18:22.858804 1141442 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0318 13:18:22.858812 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.858820 1141442 command_runner.go:130] >       "size": "295456551",
	I0318 13:18:22.858829 1141442 command_runner.go:130] >       "uid": {
	I0318 13:18:22.858833 1141442 command_runner.go:130] >         "value": "0"
	I0318 13:18:22.858843 1141442 command_runner.go:130] >       },
	I0318 13:18:22.858850 1141442 command_runner.go:130] >       "username": "",
	I0318 13:18:22.858857 1141442 command_runner.go:130] >       "spec": null,
	I0318 13:18:22.858867 1141442 command_runner.go:130] >       "pinned": false
	I0318 13:18:22.858872 1141442 command_runner.go:130] >     },
	I0318 13:18:22.858882 1141442 command_runner.go:130] >     {
	I0318 13:18:22.858892 1141442 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0318 13:18:22.858901 1141442 command_runner.go:130] >       "repoTags": [
	I0318 13:18:22.858913 1141442 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0318 13:18:22.858919 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.858928 1141442 command_runner.go:130] >       "repoDigests": [
	I0318 13:18:22.858938 1141442 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0318 13:18:22.858961 1141442 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0318 13:18:22.858971 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.858978 1141442 command_runner.go:130] >       "size": "127226832",
	I0318 13:18:22.858987 1141442 command_runner.go:130] >       "uid": {
	I0318 13:18:22.858996 1141442 command_runner.go:130] >         "value": "0"
	I0318 13:18:22.859002 1141442 command_runner.go:130] >       },
	I0318 13:18:22.859011 1141442 command_runner.go:130] >       "username": "",
	I0318 13:18:22.859019 1141442 command_runner.go:130] >       "spec": null,
	I0318 13:18:22.859028 1141442 command_runner.go:130] >       "pinned": false
	I0318 13:18:22.859034 1141442 command_runner.go:130] >     },
	I0318 13:18:22.859041 1141442 command_runner.go:130] >     {
	I0318 13:18:22.859047 1141442 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0318 13:18:22.859056 1141442 command_runner.go:130] >       "repoTags": [
	I0318 13:18:22.859065 1141442 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0318 13:18:22.859075 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.859082 1141442 command_runner.go:130] >       "repoDigests": [
	I0318 13:18:22.859113 1141442 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0318 13:18:22.859129 1141442 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0318 13:18:22.859137 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.859143 1141442 command_runner.go:130] >       "size": "123261750",
	I0318 13:18:22.859150 1141442 command_runner.go:130] >       "uid": {
	I0318 13:18:22.859155 1141442 command_runner.go:130] >         "value": "0"
	I0318 13:18:22.859163 1141442 command_runner.go:130] >       },
	I0318 13:18:22.859170 1141442 command_runner.go:130] >       "username": "",
	I0318 13:18:22.859180 1141442 command_runner.go:130] >       "spec": null,
	I0318 13:18:22.859186 1141442 command_runner.go:130] >       "pinned": false
	I0318 13:18:22.859196 1141442 command_runner.go:130] >     },
	I0318 13:18:22.859202 1141442 command_runner.go:130] >     {
	I0318 13:18:22.859215 1141442 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0318 13:18:22.859224 1141442 command_runner.go:130] >       "repoTags": [
	I0318 13:18:22.859232 1141442 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0318 13:18:22.859241 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.859248 1141442 command_runner.go:130] >       "repoDigests": [
	I0318 13:18:22.859258 1141442 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0318 13:18:22.859274 1141442 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0318 13:18:22.859286 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.859299 1141442 command_runner.go:130] >       "size": "74749335",
	I0318 13:18:22.859309 1141442 command_runner.go:130] >       "uid": null,
	I0318 13:18:22.859316 1141442 command_runner.go:130] >       "username": "",
	I0318 13:18:22.859325 1141442 command_runner.go:130] >       "spec": null,
	I0318 13:18:22.859331 1141442 command_runner.go:130] >       "pinned": false
	I0318 13:18:22.859339 1141442 command_runner.go:130] >     },
	I0318 13:18:22.859345 1141442 command_runner.go:130] >     {
	I0318 13:18:22.859359 1141442 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0318 13:18:22.859363 1141442 command_runner.go:130] >       "repoTags": [
	I0318 13:18:22.859374 1141442 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0318 13:18:22.859380 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.859388 1141442 command_runner.go:130] >       "repoDigests": [
	I0318 13:18:22.859401 1141442 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0318 13:18:22.859416 1141442 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0318 13:18:22.859425 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.859431 1141442 command_runner.go:130] >       "size": "61551410",
	I0318 13:18:22.859440 1141442 command_runner.go:130] >       "uid": {
	I0318 13:18:22.859444 1141442 command_runner.go:130] >         "value": "0"
	I0318 13:18:22.859448 1141442 command_runner.go:130] >       },
	I0318 13:18:22.859454 1141442 command_runner.go:130] >       "username": "",
	I0318 13:18:22.859461 1141442 command_runner.go:130] >       "spec": null,
	I0318 13:18:22.859472 1141442 command_runner.go:130] >       "pinned": false
	I0318 13:18:22.859478 1141442 command_runner.go:130] >     },
	I0318 13:18:22.859487 1141442 command_runner.go:130] >     {
	I0318 13:18:22.859497 1141442 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0318 13:18:22.859507 1141442 command_runner.go:130] >       "repoTags": [
	I0318 13:18:22.859515 1141442 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0318 13:18:22.859523 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.859530 1141442 command_runner.go:130] >       "repoDigests": [
	I0318 13:18:22.859543 1141442 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0318 13:18:22.859552 1141442 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0318 13:18:22.859558 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.859569 1141442 command_runner.go:130] >       "size": "750414",
	I0318 13:18:22.859579 1141442 command_runner.go:130] >       "uid": {
	I0318 13:18:22.859586 1141442 command_runner.go:130] >         "value": "65535"
	I0318 13:18:22.859594 1141442 command_runner.go:130] >       },
	I0318 13:18:22.859607 1141442 command_runner.go:130] >       "username": "",
	I0318 13:18:22.859616 1141442 command_runner.go:130] >       "spec": null,
	I0318 13:18:22.859623 1141442 command_runner.go:130] >       "pinned": true
	I0318 13:18:22.859630 1141442 command_runner.go:130] >     }
	I0318 13:18:22.859634 1141442 command_runner.go:130] >   ]
	I0318 13:18:22.859637 1141442 command_runner.go:130] > }
	I0318 13:18:22.859778 1141442 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 13:18:22.859791 1141442 cache_images.go:84] Images are preloaded, skipping loading
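The preload check above parses the JSON emitted by `sudo crictl images --output json`; to skim the same list by hand, a one-liner like the following works (jq being available on the node is an assumption, not something the test relies on):

    sudo crictl images --output json | jq -r '.images[].repoTags[]'
    # docker.io/kindest/kindnetd:v20230809-80a64d96
    # ...
    # registry.k8s.io/pause:3.9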
	I0318 13:18:22.859798 1141442 kubeadm.go:928] updating node { 192.168.39.156 8443 v1.28.4 crio true true} ...
	I0318 13:18:22.859950 1141442 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-229365 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.156
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-229365 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
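The `kubelet [Unit] ...` fragment above is the systemd drop-in minikube generates for the kubelet service on this node; written out as a file it would look like the sketch below (only the contents come from the log, the drop-in path is an assumption for illustration):

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf   (path assumed)
    [Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-229365 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.156

    [Install]

The empty `ExecStart=` line first clears any start command inherited from the base unit before setting minikube's own.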
	I0318 13:18:22.860048 1141442 ssh_runner.go:195] Run: crio config
	I0318 13:18:22.903754 1141442 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0318 13:18:22.903779 1141442 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0318 13:18:22.903786 1141442 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0318 13:18:22.903790 1141442 command_runner.go:130] > #
	I0318 13:18:22.903811 1141442 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0318 13:18:22.903821 1141442 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0318 13:18:22.903838 1141442 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0318 13:18:22.903856 1141442 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0318 13:18:22.903862 1141442 command_runner.go:130] > # reload'.
	I0318 13:18:22.903891 1141442 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0318 13:18:22.903904 1141442 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0318 13:18:22.903916 1141442 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0318 13:18:22.903928 1141442 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0318 13:18:22.903936 1141442 command_runner.go:130] > [crio]
	I0318 13:18:22.903945 1141442 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0318 13:18:22.903955 1141442 command_runner.go:130] > # containers images, in this directory.
	I0318 13:18:22.903964 1141442 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0318 13:18:22.903983 1141442 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0318 13:18:22.903996 1141442 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0318 13:18:22.904009 1141442 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0318 13:18:22.904017 1141442 command_runner.go:130] > # imagestore = ""
	I0318 13:18:22.904027 1141442 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0318 13:18:22.904035 1141442 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0318 13:18:22.904042 1141442 command_runner.go:130] > storage_driver = "overlay"
	I0318 13:18:22.904050 1141442 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0318 13:18:22.904059 1141442 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0318 13:18:22.904067 1141442 command_runner.go:130] > storage_option = [
	I0318 13:18:22.904074 1141442 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0318 13:18:22.904082 1141442 command_runner.go:130] > ]
	I0318 13:18:22.904092 1141442 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0318 13:18:22.904111 1141442 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0318 13:18:22.904122 1141442 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0318 13:18:22.904130 1141442 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0318 13:18:22.904140 1141442 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0318 13:18:22.904148 1141442 command_runner.go:130] > # always happen on a node reboot
	I0318 13:18:22.904159 1141442 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0318 13:18:22.904176 1141442 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0318 13:18:22.904188 1141442 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0318 13:18:22.904195 1141442 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0318 13:18:22.904206 1141442 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0318 13:18:22.904219 1141442 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0318 13:18:22.904234 1141442 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0318 13:18:22.904241 1141442 command_runner.go:130] > # internal_wipe = true
	I0318 13:18:22.904257 1141442 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0318 13:18:22.904269 1141442 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0318 13:18:22.904285 1141442 command_runner.go:130] > # internal_repair = false
	I0318 13:18:22.904297 1141442 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0318 13:18:22.904311 1141442 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0318 13:18:22.904336 1141442 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0318 13:18:22.904349 1141442 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0318 13:18:22.904362 1141442 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0318 13:18:22.904369 1141442 command_runner.go:130] > [crio.api]
	I0318 13:18:22.904378 1141442 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0318 13:18:22.904388 1141442 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0318 13:18:22.904399 1141442 command_runner.go:130] > # IP address on which the stream server will listen.
	I0318 13:18:22.904410 1141442 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0318 13:18:22.904421 1141442 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0318 13:18:22.904431 1141442 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0318 13:18:22.904438 1141442 command_runner.go:130] > # stream_port = "0"
	I0318 13:18:22.904450 1141442 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0318 13:18:22.904457 1141442 command_runner.go:130] > # stream_enable_tls = false
	I0318 13:18:22.904467 1141442 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0318 13:18:22.904476 1141442 command_runner.go:130] > # stream_idle_timeout = ""
	I0318 13:18:22.904486 1141442 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0318 13:18:22.904498 1141442 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0318 13:18:22.904507 1141442 command_runner.go:130] > # minutes.
	I0318 13:18:22.904517 1141442 command_runner.go:130] > # stream_tls_cert = ""
	I0318 13:18:22.904532 1141442 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0318 13:18:22.904544 1141442 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0318 13:18:22.904554 1141442 command_runner.go:130] > # stream_tls_key = ""
	I0318 13:18:22.904562 1141442 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0318 13:18:22.904571 1141442 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0318 13:18:22.904588 1141442 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0318 13:18:22.904594 1141442 command_runner.go:130] > # stream_tls_ca = ""
	I0318 13:18:22.904601 1141442 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0318 13:18:22.904608 1141442 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0318 13:18:22.904615 1141442 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0318 13:18:22.904622 1141442 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0318 13:18:22.904628 1141442 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0318 13:18:22.904638 1141442 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0318 13:18:22.904647 1141442 command_runner.go:130] > [crio.runtime]
	I0318 13:18:22.904662 1141442 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0318 13:18:22.904675 1141442 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0318 13:18:22.904684 1141442 command_runner.go:130] > # "nofile=1024:2048"
	I0318 13:18:22.904693 1141442 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0318 13:18:22.904704 1141442 command_runner.go:130] > # default_ulimits = [
	I0318 13:18:22.904709 1141442 command_runner.go:130] > # ]
	I0318 13:18:22.904722 1141442 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0318 13:18:22.904729 1141442 command_runner.go:130] > # no_pivot = false
	I0318 13:18:22.904739 1141442 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0318 13:18:22.904750 1141442 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0318 13:18:22.904758 1141442 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0318 13:18:22.904763 1141442 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0318 13:18:22.904770 1141442 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0318 13:18:22.904776 1141442 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0318 13:18:22.904783 1141442 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0318 13:18:22.904787 1141442 command_runner.go:130] > # Cgroup setting for conmon
	I0318 13:18:22.904796 1141442 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0318 13:18:22.904800 1141442 command_runner.go:130] > conmon_cgroup = "pod"
	I0318 13:18:22.904807 1141442 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0318 13:18:22.904812 1141442 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0318 13:18:22.904821 1141442 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0318 13:18:22.904826 1141442 command_runner.go:130] > conmon_env = [
	I0318 13:18:22.904838 1141442 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0318 13:18:22.904851 1141442 command_runner.go:130] > ]
	I0318 13:18:22.904864 1141442 command_runner.go:130] > # Additional environment variables to set for all the
	I0318 13:18:22.904876 1141442 command_runner.go:130] > # containers. These are overridden if set in the
	I0318 13:18:22.904888 1141442 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0318 13:18:22.904897 1141442 command_runner.go:130] > # default_env = [
	I0318 13:18:22.904902 1141442 command_runner.go:130] > # ]
	I0318 13:18:22.904914 1141442 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0318 13:18:22.904927 1141442 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0318 13:18:22.904937 1141442 command_runner.go:130] > # selinux = false
	I0318 13:18:22.904947 1141442 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0318 13:18:22.905694 1141442 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0318 13:18:22.905721 1141442 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0318 13:18:22.905729 1141442 command_runner.go:130] > # seccomp_profile = ""
	I0318 13:18:22.905739 1141442 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0318 13:18:22.905756 1141442 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0318 13:18:22.905767 1141442 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0318 13:18:22.905774 1141442 command_runner.go:130] > # which might increase security.
	I0318 13:18:22.905787 1141442 command_runner.go:130] > # This option is currently deprecated,
	I0318 13:18:22.905809 1141442 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0318 13:18:22.905819 1141442 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0318 13:18:22.905835 1141442 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0318 13:18:22.905845 1141442 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0318 13:18:22.905862 1141442 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0318 13:18:22.905878 1141442 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0318 13:18:22.905887 1141442 command_runner.go:130] > # This option supports live configuration reload.
	I0318 13:18:22.905894 1141442 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0318 13:18:22.905909 1141442 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0318 13:18:22.905916 1141442 command_runner.go:130] > # the cgroup blockio controller.
	I0318 13:18:22.905922 1141442 command_runner.go:130] > # blockio_config_file = ""
	I0318 13:18:22.905932 1141442 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0318 13:18:22.905944 1141442 command_runner.go:130] > # blockio parameters.
	I0318 13:18:22.905950 1141442 command_runner.go:130] > # blockio_reload = false
	I0318 13:18:22.905961 1141442 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0318 13:18:22.905967 1141442 command_runner.go:130] > # irqbalance daemon.
	I0318 13:18:22.905982 1141442 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0318 13:18:22.905991 1141442 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0318 13:18:22.906001 1141442 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0318 13:18:22.906018 1141442 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0318 13:18:22.906031 1141442 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0318 13:18:22.906047 1141442 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0318 13:18:22.906054 1141442 command_runner.go:130] > # This option supports live configuration reload.
	I0318 13:18:22.906060 1141442 command_runner.go:130] > # rdt_config_file = ""
	I0318 13:18:22.906068 1141442 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0318 13:18:22.906080 1141442 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0318 13:18:22.906116 1141442 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0318 13:18:22.906122 1141442 command_runner.go:130] > # separate_pull_cgroup = ""
	I0318 13:18:22.906138 1141442 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0318 13:18:22.906150 1141442 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0318 13:18:22.906156 1141442 command_runner.go:130] > # will be added.
	I0318 13:18:22.906162 1141442 command_runner.go:130] > # default_capabilities = [
	I0318 13:18:22.906168 1141442 command_runner.go:130] > # 	"CHOWN",
	I0318 13:18:22.906178 1141442 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0318 13:18:22.906184 1141442 command_runner.go:130] > # 	"FSETID",
	I0318 13:18:22.906189 1141442 command_runner.go:130] > # 	"FOWNER",
	I0318 13:18:22.906200 1141442 command_runner.go:130] > # 	"SETGID",
	I0318 13:18:22.906206 1141442 command_runner.go:130] > # 	"SETUID",
	I0318 13:18:22.906211 1141442 command_runner.go:130] > # 	"SETPCAP",
	I0318 13:18:22.906223 1141442 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0318 13:18:22.906229 1141442 command_runner.go:130] > # 	"KILL",
	I0318 13:18:22.906233 1141442 command_runner.go:130] > # ]
	I0318 13:18:22.906248 1141442 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0318 13:18:22.906262 1141442 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0318 13:18:22.906277 1141442 command_runner.go:130] > # add_inheritable_capabilities = false
	I0318 13:18:22.906286 1141442 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0318 13:18:22.906299 1141442 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0318 13:18:22.906305 1141442 command_runner.go:130] > # default_sysctls = [
	I0318 13:18:22.906310 1141442 command_runner.go:130] > # ]
	I0318 13:18:22.906316 1141442 command_runner.go:130] > # List of devices on the host that a
	I0318 13:18:22.906330 1141442 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0318 13:18:22.906335 1141442 command_runner.go:130] > # allowed_devices = [
	I0318 13:18:22.906341 1141442 command_runner.go:130] > # 	"/dev/fuse",
	I0318 13:18:22.906346 1141442 command_runner.go:130] > # ]
	I0318 13:18:22.906354 1141442 command_runner.go:130] > # List of additional devices, specified as
	I0318 13:18:22.906369 1141442 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0318 13:18:22.906377 1141442 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0318 13:18:22.906385 1141442 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0318 13:18:22.906396 1141442 command_runner.go:130] > # additional_devices = [
	I0318 13:18:22.906401 1141442 command_runner.go:130] > # ]
	I0318 13:18:22.906409 1141442 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0318 13:18:22.906414 1141442 command_runner.go:130] > # cdi_spec_dirs = [
	I0318 13:18:22.906430 1141442 command_runner.go:130] > # 	"/etc/cdi",
	I0318 13:18:22.906439 1141442 command_runner.go:130] > # 	"/var/run/cdi",
	I0318 13:18:22.906445 1141442 command_runner.go:130] > # ]
	I0318 13:18:22.906458 1141442 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0318 13:18:22.906468 1141442 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0318 13:18:22.906519 1141442 command_runner.go:130] > # Defaults to false.
	I0318 13:18:22.906549 1141442 command_runner.go:130] > # device_ownership_from_security_context = false
	I0318 13:18:22.906564 1141442 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0318 13:18:22.906582 1141442 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0318 13:18:22.906588 1141442 command_runner.go:130] > # hooks_dir = [
	I0318 13:18:22.906597 1141442 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0318 13:18:22.906601 1141442 command_runner.go:130] > # ]
	I0318 13:18:22.906615 1141442 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0318 13:18:22.906623 1141442 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0318 13:18:22.906631 1141442 command_runner.go:130] > # its default mounts from the following two files:
	I0318 13:18:22.906635 1141442 command_runner.go:130] > #
	I0318 13:18:22.906649 1141442 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0318 13:18:22.906658 1141442 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0318 13:18:22.906666 1141442 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0318 13:18:22.906684 1141442 command_runner.go:130] > #
	I0318 13:18:22.906694 1141442 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0318 13:18:22.906706 1141442 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0318 13:18:22.906725 1141442 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0318 13:18:22.906737 1141442 command_runner.go:130] > #      only add mounts it finds in this file.
	I0318 13:18:22.906744 1141442 command_runner.go:130] > #
	I0318 13:18:22.906756 1141442 command_runner.go:130] > # default_mounts_file = ""
	I0318 13:18:22.906772 1141442 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0318 13:18:22.906791 1141442 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0318 13:18:22.906799 1141442 command_runner.go:130] > pids_limit = 1024
	I0318 13:18:22.906815 1141442 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0318 13:18:22.906828 1141442 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0318 13:18:22.906837 1141442 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0318 13:18:22.906862 1141442 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0318 13:18:22.906872 1141442 command_runner.go:130] > # log_size_max = -1
	I0318 13:18:22.906883 1141442 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0318 13:18:22.906897 1141442 command_runner.go:130] > # log_to_journald = false
	I0318 13:18:22.906907 1141442 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0318 13:18:22.906917 1141442 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0318 13:18:22.906930 1141442 command_runner.go:130] > # Path to directory for container attach sockets.
	I0318 13:18:22.906944 1141442 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0318 13:18:22.906956 1141442 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0318 13:18:22.906974 1141442 command_runner.go:130] > # bind_mount_prefix = ""
	I0318 13:18:22.906991 1141442 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0318 13:18:22.907000 1141442 command_runner.go:130] > # read_only = false
	I0318 13:18:22.907010 1141442 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0318 13:18:22.907024 1141442 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0318 13:18:22.907034 1141442 command_runner.go:130] > # live configuration reload.
	I0318 13:18:22.907040 1141442 command_runner.go:130] > # log_level = "info"
	I0318 13:18:22.907052 1141442 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0318 13:18:22.907068 1141442 command_runner.go:130] > # This option supports live configuration reload.
	I0318 13:18:22.907077 1141442 command_runner.go:130] > # log_filter = ""
	I0318 13:18:22.907086 1141442 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0318 13:18:22.907103 1141442 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0318 13:18:22.907112 1141442 command_runner.go:130] > # separated by comma.
	I0318 13:18:22.907124 1141442 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0318 13:18:22.907133 1141442 command_runner.go:130] > # uid_mappings = ""
	I0318 13:18:22.907147 1141442 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0318 13:18:22.907157 1141442 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0318 13:18:22.907163 1141442 command_runner.go:130] > # separated by comma.
	I0318 13:18:22.907183 1141442 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0318 13:18:22.907192 1141442 command_runner.go:130] > # gid_mappings = ""
	I0318 13:18:22.907205 1141442 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0318 13:18:22.907223 1141442 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0318 13:18:22.907299 1141442 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0318 13:18:22.907641 1141442 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0318 13:18:22.907653 1141442 command_runner.go:130] > # minimum_mappable_uid = -1
	I0318 13:18:22.907663 1141442 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0318 13:18:22.907673 1141442 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0318 13:18:22.907688 1141442 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0318 13:18:22.907706 1141442 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0318 13:18:22.907713 1141442 command_runner.go:130] > # minimum_mappable_gid = -1
	I0318 13:18:22.907724 1141442 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0318 13:18:22.907738 1141442 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0318 13:18:22.907748 1141442 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0318 13:18:22.907757 1141442 command_runner.go:130] > # ctr_stop_timeout = 30
	I0318 13:18:22.907773 1141442 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0318 13:18:22.907786 1141442 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0318 13:18:22.907797 1141442 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0318 13:18:22.907809 1141442 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0318 13:18:22.907817 1141442 command_runner.go:130] > drop_infra_ctr = false
	I0318 13:18:22.907828 1141442 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0318 13:18:22.907841 1141442 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0318 13:18:22.907858 1141442 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0318 13:18:22.907870 1141442 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0318 13:18:22.907885 1141442 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0318 13:18:22.907899 1141442 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0318 13:18:22.907913 1141442 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0318 13:18:22.907926 1141442 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0318 13:18:22.907936 1141442 command_runner.go:130] > # shared_cpuset = ""
	I0318 13:18:22.907950 1141442 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0318 13:18:22.907961 1141442 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0318 13:18:22.907972 1141442 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0318 13:18:22.907988 1141442 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0318 13:18:22.907997 1141442 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0318 13:18:22.908010 1141442 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0318 13:18:22.908024 1141442 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0318 13:18:22.908036 1141442 command_runner.go:130] > # enable_criu_support = false
	I0318 13:18:22.908048 1141442 command_runner.go:130] > # Enable/disable the generation of the container,
	I0318 13:18:22.908062 1141442 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0318 13:18:22.908074 1141442 command_runner.go:130] > # enable_pod_events = false
	I0318 13:18:22.908084 1141442 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0318 13:18:22.908094 1141442 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0318 13:18:22.908099 1141442 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0318 13:18:22.908106 1141442 command_runner.go:130] > # default_runtime = "runc"
	I0318 13:18:22.908111 1141442 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0318 13:18:22.908121 1141442 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0318 13:18:22.908131 1141442 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0318 13:18:22.908138 1141442 command_runner.go:130] > # creation as a file is not desired either.
	I0318 13:18:22.908147 1141442 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0318 13:18:22.908158 1141442 command_runner.go:130] > # the hostname is being managed dynamically.
	I0318 13:18:22.908173 1141442 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0318 13:18:22.908182 1141442 command_runner.go:130] > # ]
	I0318 13:18:22.908192 1141442 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0318 13:18:22.908206 1141442 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0318 13:18:22.908217 1141442 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0318 13:18:22.908223 1141442 command_runner.go:130] > # Each entry in the table should follow the format:
	I0318 13:18:22.908230 1141442 command_runner.go:130] > #
	I0318 13:18:22.908237 1141442 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0318 13:18:22.908253 1141442 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0318 13:18:22.908267 1141442 command_runner.go:130] > # runtime_type = "oci"
	I0318 13:18:22.908359 1141442 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0318 13:18:22.908377 1141442 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0318 13:18:22.908393 1141442 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0318 13:18:22.908410 1141442 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0318 13:18:22.908425 1141442 command_runner.go:130] > # monitor_env = []
	I0318 13:18:22.908439 1141442 command_runner.go:130] > # privileged_without_host_devices = false
	I0318 13:18:22.908456 1141442 command_runner.go:130] > # allowed_annotations = []
	I0318 13:18:22.908473 1141442 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0318 13:18:22.908487 1141442 command_runner.go:130] > # Where:
	I0318 13:18:22.908501 1141442 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0318 13:18:22.908523 1141442 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0318 13:18:22.908539 1141442 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0318 13:18:22.908552 1141442 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0318 13:18:22.908561 1141442 command_runner.go:130] > #   in $PATH.
	I0318 13:18:22.908572 1141442 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0318 13:18:22.908581 1141442 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0318 13:18:22.908589 1141442 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0318 13:18:22.908598 1141442 command_runner.go:130] > #   state.
	I0318 13:18:22.908609 1141442 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0318 13:18:22.908622 1141442 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0318 13:18:22.908636 1141442 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0318 13:18:22.908647 1141442 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0318 13:18:22.908660 1141442 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0318 13:18:22.908671 1141442 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0318 13:18:22.908680 1141442 command_runner.go:130] > #   The currently recognized values are:
	I0318 13:18:22.908693 1141442 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0318 13:18:22.908708 1141442 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0318 13:18:22.908722 1141442 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0318 13:18:22.908738 1141442 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0318 13:18:22.908754 1141442 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0318 13:18:22.908768 1141442 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0318 13:18:22.908778 1141442 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0318 13:18:22.908791 1141442 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0318 13:18:22.908805 1141442 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0318 13:18:22.908819 1141442 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0318 13:18:22.908830 1141442 command_runner.go:130] > #   deprecated option "conmon".
	I0318 13:18:22.908844 1141442 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0318 13:18:22.908856 1141442 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0318 13:18:22.908868 1141442 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0318 13:18:22.908876 1141442 command_runner.go:130] > #   should be moved to the container's cgroup
	I0318 13:18:22.908889 1141442 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0318 13:18:22.908901 1141442 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0318 13:18:22.908916 1141442 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0318 13:18:22.908928 1141442 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0318 13:18:22.908937 1141442 command_runner.go:130] > #
	I0318 13:18:22.908949 1141442 command_runner.go:130] > # Using the seccomp notifier feature:
	I0318 13:18:22.908956 1141442 command_runner.go:130] > #
	I0318 13:18:22.908967 1141442 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0318 13:18:22.908977 1141442 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0318 13:18:22.908985 1141442 command_runner.go:130] > #
	I0318 13:18:22.908999 1141442 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0318 13:18:22.909012 1141442 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0318 13:18:22.909021 1141442 command_runner.go:130] > #
	I0318 13:18:22.909034 1141442 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0318 13:18:22.909043 1141442 command_runner.go:130] > # feature.
	I0318 13:18:22.909051 1141442 command_runner.go:130] > #
	I0318 13:18:22.909062 1141442 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0318 13:18:22.909071 1141442 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0318 13:18:22.909084 1141442 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0318 13:18:22.909097 1141442 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0318 13:18:22.909111 1141442 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0318 13:18:22.909119 1141442 command_runner.go:130] > #
	I0318 13:18:22.909130 1141442 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0318 13:18:22.909143 1141442 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0318 13:18:22.909155 1141442 command_runner.go:130] > #
	I0318 13:18:22.909165 1141442 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0318 13:18:22.909176 1141442 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0318 13:18:22.909185 1141442 command_runner.go:130] > #
	I0318 13:18:22.909198 1141442 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0318 13:18:22.909211 1141442 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0318 13:18:22.909220 1141442 command_runner.go:130] > # limitation.
	I0318 13:18:22.909230 1141442 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0318 13:18:22.909240 1141442 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0318 13:18:22.909248 1141442 command_runner.go:130] > runtime_type = "oci"
	I0318 13:18:22.909255 1141442 command_runner.go:130] > runtime_root = "/run/runc"
	I0318 13:18:22.909262 1141442 command_runner.go:130] > runtime_config_path = ""
	I0318 13:18:22.909274 1141442 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0318 13:18:22.909284 1141442 command_runner.go:130] > monitor_cgroup = "pod"
	I0318 13:18:22.909294 1141442 command_runner.go:130] > monitor_exec_cgroup = ""
	I0318 13:18:22.909303 1141442 command_runner.go:130] > monitor_env = [
	I0318 13:18:22.909315 1141442 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0318 13:18:22.909323 1141442 command_runner.go:130] > ]
	I0318 13:18:22.909334 1141442 command_runner.go:130] > privileged_without_host_devices = false
	I0318 13:18:22.909343 1141442 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0318 13:18:22.909357 1141442 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0318 13:18:22.909370 1141442 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0318 13:18:22.909386 1141442 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0318 13:18:22.909401 1141442 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0318 13:18:22.909413 1141442 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0318 13:18:22.909431 1141442 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0318 13:18:22.909441 1141442 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0318 13:18:22.909453 1141442 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0318 13:18:22.909468 1141442 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0318 13:18:22.909485 1141442 command_runner.go:130] > # Example:
	I0318 13:18:22.909496 1141442 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0318 13:18:22.909507 1141442 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0318 13:18:22.909518 1141442 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0318 13:18:22.909529 1141442 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0318 13:18:22.909536 1141442 command_runner.go:130] > # cpuset = 0
	I0318 13:18:22.909541 1141442 command_runner.go:130] > # cpushares = "0-1"
	I0318 13:18:22.909545 1141442 command_runner.go:130] > # Where:
	I0318 13:18:22.909552 1141442 command_runner.go:130] > # The workload name is workload-type.
	I0318 13:18:22.909567 1141442 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0318 13:18:22.909577 1141442 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0318 13:18:22.909586 1141442 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0318 13:18:22.909599 1141442 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0318 13:18:22.909607 1141442 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0318 13:18:22.909615 1141442 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0318 13:18:22.909621 1141442 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0318 13:18:22.909625 1141442 command_runner.go:130] > # Default value is set to true
	I0318 13:18:22.909630 1141442 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0318 13:18:22.909642 1141442 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0318 13:18:22.909650 1141442 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0318 13:18:22.909658 1141442 command_runner.go:130] > # Default value is set to 'false'
	I0318 13:18:22.909664 1141442 command_runner.go:130] > # disable_hostport_mapping = false
	I0318 13:18:22.909674 1141442 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0318 13:18:22.909678 1141442 command_runner.go:130] > #
	I0318 13:18:22.909687 1141442 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0318 13:18:22.909697 1141442 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0318 13:18:22.909707 1141442 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0318 13:18:22.909715 1141442 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0318 13:18:22.909727 1141442 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0318 13:18:22.909736 1141442 command_runner.go:130] > [crio.image]
	I0318 13:18:22.909746 1141442 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0318 13:18:22.909756 1141442 command_runner.go:130] > # default_transport = "docker://"
	I0318 13:18:22.909769 1141442 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0318 13:18:22.909782 1141442 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0318 13:18:22.909791 1141442 command_runner.go:130] > # global_auth_file = ""
	I0318 13:18:22.909802 1141442 command_runner.go:130] > # The image used to instantiate infra containers.
	I0318 13:18:22.909816 1141442 command_runner.go:130] > # This option supports live configuration reload.
	I0318 13:18:22.909827 1141442 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0318 13:18:22.909841 1141442 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0318 13:18:22.909854 1141442 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0318 13:18:22.909865 1141442 command_runner.go:130] > # This option supports live configuration reload.
	I0318 13:18:22.909875 1141442 command_runner.go:130] > # pause_image_auth_file = ""
	I0318 13:18:22.909887 1141442 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0318 13:18:22.909899 1141442 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0318 13:18:22.909910 1141442 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0318 13:18:22.909925 1141442 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0318 13:18:22.909937 1141442 command_runner.go:130] > # pause_command = "/pause"
	I0318 13:18:22.909947 1141442 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0318 13:18:22.909960 1141442 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0318 13:18:22.909972 1141442 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0318 13:18:22.909984 1141442 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0318 13:18:22.909995 1141442 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0318 13:18:22.910005 1141442 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0318 13:18:22.910012 1141442 command_runner.go:130] > # pinned_images = [
	I0318 13:18:22.910021 1141442 command_runner.go:130] > # ]
	I0318 13:18:22.910035 1141442 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0318 13:18:22.910048 1141442 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0318 13:18:22.910061 1141442 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0318 13:18:22.910074 1141442 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0318 13:18:22.910084 1141442 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0318 13:18:22.910094 1141442 command_runner.go:130] > # signature_policy = ""
	I0318 13:18:22.910103 1141442 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0318 13:18:22.910115 1141442 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0318 13:18:22.910129 1141442 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0318 13:18:22.910142 1141442 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0318 13:18:22.910154 1141442 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0318 13:18:22.910164 1141442 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0318 13:18:22.910177 1141442 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0318 13:18:22.910188 1141442 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0318 13:18:22.910195 1141442 command_runner.go:130] > # changing them here.
	I0318 13:18:22.910201 1141442 command_runner.go:130] > # insecure_registries = [
	I0318 13:18:22.910209 1141442 command_runner.go:130] > # ]
	I0318 13:18:22.910232 1141442 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0318 13:18:22.910244 1141442 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0318 13:18:22.910253 1141442 command_runner.go:130] > # image_volumes = "mkdir"
	I0318 13:18:22.910264 1141442 command_runner.go:130] > # Temporary directory to use for storing big files
	I0318 13:18:22.910275 1141442 command_runner.go:130] > # big_files_temporary_dir = ""
	I0318 13:18:22.910287 1141442 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0318 13:18:22.910293 1141442 command_runner.go:130] > # CNI plugins.
	I0318 13:18:22.910298 1141442 command_runner.go:130] > [crio.network]
	I0318 13:18:22.910312 1141442 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0318 13:18:22.910324 1141442 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0318 13:18:22.910338 1141442 command_runner.go:130] > # cni_default_network = ""
	I0318 13:18:22.910355 1141442 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0318 13:18:22.910366 1141442 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0318 13:18:22.910378 1141442 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0318 13:18:22.910387 1141442 command_runner.go:130] > # plugin_dirs = [
	I0318 13:18:22.910394 1141442 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0318 13:18:22.910397 1141442 command_runner.go:130] > # ]
	I0318 13:18:22.910406 1141442 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0318 13:18:22.910415 1141442 command_runner.go:130] > [crio.metrics]
	I0318 13:18:22.910424 1141442 command_runner.go:130] > # Globally enable or disable metrics support.
	I0318 13:18:22.910434 1141442 command_runner.go:130] > enable_metrics = true
	I0318 13:18:22.910445 1141442 command_runner.go:130] > # Specify enabled metrics collectors.
	I0318 13:18:22.910456 1141442 command_runner.go:130] > # Per default all metrics are enabled.
	I0318 13:18:22.910469 1141442 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0318 13:18:22.910481 1141442 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0318 13:18:22.910508 1141442 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0318 13:18:22.910530 1141442 command_runner.go:130] > # metrics_collectors = [
	I0318 13:18:22.910543 1141442 command_runner.go:130] > # 	"operations",
	I0318 13:18:22.910554 1141442 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0318 13:18:22.910564 1141442 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0318 13:18:22.910574 1141442 command_runner.go:130] > # 	"operations_errors",
	I0318 13:18:22.910584 1141442 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0318 13:18:22.910592 1141442 command_runner.go:130] > # 	"image_pulls_by_name",
	I0318 13:18:22.910601 1141442 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0318 13:18:22.910612 1141442 command_runner.go:130] > # 	"image_pulls_failures",
	I0318 13:18:22.910622 1141442 command_runner.go:130] > # 	"image_pulls_successes",
	I0318 13:18:22.910638 1141442 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0318 13:18:22.910648 1141442 command_runner.go:130] > # 	"image_layer_reuse",
	I0318 13:18:22.910662 1141442 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0318 13:18:22.910670 1141442 command_runner.go:130] > # 	"containers_oom_total",
	I0318 13:18:22.910677 1141442 command_runner.go:130] > # 	"containers_oom",
	I0318 13:18:22.910681 1141442 command_runner.go:130] > # 	"processes_defunct",
	I0318 13:18:22.910690 1141442 command_runner.go:130] > # 	"operations_total",
	I0318 13:18:22.910700 1141442 command_runner.go:130] > # 	"operations_latency_seconds",
	I0318 13:18:22.910711 1141442 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0318 13:18:22.910722 1141442 command_runner.go:130] > # 	"operations_errors_total",
	I0318 13:18:22.910732 1141442 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0318 13:18:22.910742 1141442 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0318 13:18:22.910752 1141442 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0318 13:18:22.910761 1141442 command_runner.go:130] > # 	"image_pulls_success_total",
	I0318 13:18:22.910769 1141442 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0318 13:18:22.910773 1141442 command_runner.go:130] > # 	"containers_oom_count_total",
	I0318 13:18:22.910783 1141442 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0318 13:18:22.910795 1141442 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0318 13:18:22.910800 1141442 command_runner.go:130] > # ]
	I0318 13:18:22.910813 1141442 command_runner.go:130] > # The port on which the metrics server will listen.
	I0318 13:18:22.910826 1141442 command_runner.go:130] > # metrics_port = 9090
	I0318 13:18:22.910837 1141442 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0318 13:18:22.910847 1141442 command_runner.go:130] > # metrics_socket = ""
	I0318 13:18:22.910857 1141442 command_runner.go:130] > # The certificate for the secure metrics server.
	I0318 13:18:22.910867 1141442 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0318 13:18:22.910879 1141442 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0318 13:18:22.910890 1141442 command_runner.go:130] > # certificate on any modification event.
	I0318 13:18:22.910900 1141442 command_runner.go:130] > # metrics_cert = ""
	I0318 13:18:22.910911 1141442 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0318 13:18:22.910921 1141442 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0318 13:18:22.910931 1141442 command_runner.go:130] > # metrics_key = ""
	I0318 13:18:22.910943 1141442 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0318 13:18:22.910952 1141442 command_runner.go:130] > [crio.tracing]
	I0318 13:18:22.910960 1141442 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0318 13:18:22.910966 1141442 command_runner.go:130] > # enable_tracing = false
	I0318 13:18:22.910978 1141442 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0318 13:18:22.910995 1141442 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0318 13:18:22.911009 1141442 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0318 13:18:22.911019 1141442 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0318 13:18:22.911030 1141442 command_runner.go:130] > # CRI-O NRI configuration.
	I0318 13:18:22.911037 1141442 command_runner.go:130] > [crio.nri]
	I0318 13:18:22.911042 1141442 command_runner.go:130] > # Globally enable or disable NRI.
	I0318 13:18:22.911050 1141442 command_runner.go:130] > # enable_nri = false
	I0318 13:18:22.911060 1141442 command_runner.go:130] > # NRI socket to listen on.
	I0318 13:18:22.911071 1141442 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0318 13:18:22.911078 1141442 command_runner.go:130] > # NRI plugin directory to use.
	I0318 13:18:22.911089 1141442 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0318 13:18:22.911100 1141442 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0318 13:18:22.911110 1141442 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0318 13:18:22.911122 1141442 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0318 13:18:22.911132 1141442 command_runner.go:130] > # nri_disable_connections = false
	I0318 13:18:22.911140 1141442 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0318 13:18:22.911149 1141442 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0318 13:18:22.911160 1141442 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0318 13:18:22.911171 1141442 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0318 13:18:22.911181 1141442 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0318 13:18:22.911191 1141442 command_runner.go:130] > [crio.stats]
	I0318 13:18:22.911203 1141442 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0318 13:18:22.911214 1141442 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0318 13:18:22.911224 1141442 command_runner.go:130] > # stats_collection_period = 0
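	The crio.toml sections echoed above end with metrics enabled (enable_metrics = true) on the commented default metrics_port 9090. A minimal sketch, assuming the metrics server is plain HTTP and reachable on the node at 127.0.0.1:9090 (both assumptions, not confirmed by this log), of scraping that Prometheus-style endpoint from Go:

		package main

		import (
			"fmt"
			"io"
			"net/http"
			"time"
		)

		func main() {
			// With enable_metrics = true and the default metrics_port, CRI-O serves
			// Prometheus-style metrics over HTTP; 9090 mirrors the commented default above.
			client := &http.Client{Timeout: 5 * time.Second}
			resp, err := client.Get("http://127.0.0.1:9090/metrics")
			if err != nil {
				fmt.Println("metrics endpoint not reachable:", err)
				return
			}
			defer resp.Body.Close()

			body, err := io.ReadAll(resp.Body)
			if err != nil {
				fmt.Println("read error:", err)
				return
			}
			fmt.Printf("fetched %d bytes of CRI-O metrics\n", len(body))
		}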
	I0318 13:18:22.911266 1141442 command_runner.go:130] ! time="2024-03-18 13:18:22.872028976Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0318 13:18:22.911292 1141442 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0318 13:18:22.911429 1141442 cni.go:84] Creating CNI manager for ""
	I0318 13:18:22.911440 1141442 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0318 13:18:22.911449 1141442 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 13:18:22.911477 1141442 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.156 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-229365 NodeName:multinode-229365 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.156"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.156 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 13:18:22.911638 1141442 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.156
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-229365"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.156
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.156"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
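	The kubeadm config rendered above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that minikube later copies to /var/tmp/minikube/kubeadm.yaml.new on the node. A minimal sketch, assuming a hypothetical local copy named kubeadm.yaml and the gopkg.in/yaml.v3 package, of listing the apiVersion/kind of each document in such a stream:

		package main

		import (
			"fmt"
			"io"
			"os"

			"gopkg.in/yaml.v3"
		)

		func main() {
			// kubeadm.yaml is a hypothetical local copy of the stream shown above;
			// on the node itself it is written as /var/tmp/minikube/kubeadm.yaml.new.
			f, err := os.Open("kubeadm.yaml")
			if err != nil {
				fmt.Println("open:", err)
				return
			}
			defer f.Close()

			dec := yaml.NewDecoder(f)
			for {
				var doc struct {
					APIVersion string `yaml:"apiVersion"`
					Kind       string `yaml:"kind"`
				}
				if err := dec.Decode(&doc); err != nil {
					if err == io.EOF {
						break
					}
					fmt.Println("decode:", err)
					return
				}
				// Print one line per YAML document in the stream.
				fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
			}
		}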
	
	I0318 13:18:22.911710 1141442 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 13:18:22.923606 1141442 command_runner.go:130] > kubeadm
	I0318 13:18:22.923620 1141442 command_runner.go:130] > kubectl
	I0318 13:18:22.923624 1141442 command_runner.go:130] > kubelet
	I0318 13:18:22.924089 1141442 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 13:18:22.924142 1141442 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 13:18:22.934261 1141442 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0318 13:18:22.952947 1141442 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 13:18:22.975239 1141442 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0318 13:18:22.996284 1141442 ssh_runner.go:195] Run: grep 192.168.39.156	control-plane.minikube.internal$ /etc/hosts
	I0318 13:18:23.000627 1141442 command_runner.go:130] > 192.168.39.156	control-plane.minikube.internal
	I0318 13:18:23.000854 1141442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:18:23.147702 1141442 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:18:23.164928 1141442 certs.go:68] Setting up /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/multinode-229365 for IP: 192.168.39.156
	I0318 13:18:23.164952 1141442 certs.go:194] generating shared ca certs ...
	I0318 13:18:23.164968 1141442 certs.go:226] acquiring lock for ca certs: {Name:mka24dbce78b8549c3bcfe7842863cce2dcda207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:18:23.165162 1141442 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key
	I0318 13:18:23.165216 1141442 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key
	I0318 13:18:23.165229 1141442 certs.go:256] generating profile certs ...
	I0318 13:18:23.165332 1141442 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/multinode-229365/client.key
	I0318 13:18:23.165432 1141442 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/multinode-229365/apiserver.key.963b5288
	I0318 13:18:23.165486 1141442 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/multinode-229365/proxy-client.key
	I0318 13:18:23.165502 1141442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 13:18:23.165519 1141442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0318 13:18:23.165538 1141442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 13:18:23.165555 1141442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 13:18:23.165573 1141442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/multinode-229365/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0318 13:18:23.165589 1141442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/multinode-229365/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0318 13:18:23.165608 1141442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/multinode-229365/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0318 13:18:23.165627 1141442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/multinode-229365/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0318 13:18:23.165707 1141442 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem (1338 bytes)
	W0318 13:18:23.165749 1141442 certs.go:480] ignoring /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136_empty.pem, impossibly tiny 0 bytes
	I0318 13:18:23.165763 1141442 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 13:18:23.165794 1141442 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem (1078 bytes)
	I0318 13:18:23.165826 1141442 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem (1123 bytes)
	I0318 13:18:23.165867 1141442 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem (1679 bytes)
	I0318 13:18:23.165917 1141442 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:18:23.165957 1141442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> /usr/share/ca-certificates/11141362.pem
	I0318 13:18:23.165977 1141442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:18:23.166004 1141442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem -> /usr/share/ca-certificates/1114136.pem
	I0318 13:18:23.166669 1141442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 13:18:23.192381 1141442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 13:18:23.220040 1141442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 13:18:23.246356 1141442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 13:18:23.273294 1141442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/multinode-229365/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0318 13:18:23.299722 1141442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/multinode-229365/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 13:18:23.336303 1141442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/multinode-229365/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 13:18:23.363652 1141442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/multinode-229365/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 13:18:23.390593 1141442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /usr/share/ca-certificates/11141362.pem (1708 bytes)
	I0318 13:18:23.418102 1141442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 13:18:23.445242 1141442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem --> /usr/share/ca-certificates/1114136.pem (1338 bytes)
	I0318 13:18:23.472730 1141442 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 13:18:23.491380 1141442 ssh_runner.go:195] Run: openssl version
	I0318 13:18:23.497983 1141442 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0318 13:18:23.498074 1141442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11141362.pem && ln -fs /usr/share/ca-certificates/11141362.pem /etc/ssl/certs/11141362.pem"
	I0318 13:18:23.509687 1141442 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11141362.pem
	I0318 13:18:23.514864 1141442 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 18 12:26 /usr/share/ca-certificates/11141362.pem
	I0318 13:18:23.514896 1141442 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:26 /usr/share/ca-certificates/11141362.pem
	I0318 13:18:23.514938 1141442 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11141362.pem
	I0318 13:18:23.521217 1141442 command_runner.go:130] > 3ec20f2e
	I0318 13:18:23.521282 1141442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11141362.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 13:18:23.531306 1141442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 13:18:23.543008 1141442 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:18:23.548029 1141442 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 18 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:18:23.548119 1141442 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:18:23.548164 1141442 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:18:23.554112 1141442 command_runner.go:130] > b5213941
	I0318 13:18:23.554283 1141442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 13:18:23.565529 1141442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1114136.pem && ln -fs /usr/share/ca-certificates/1114136.pem /etc/ssl/certs/1114136.pem"
	I0318 13:18:23.577276 1141442 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1114136.pem
	I0318 13:18:23.582139 1141442 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 18 12:26 /usr/share/ca-certificates/1114136.pem
	I0318 13:18:23.582320 1141442 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:26 /usr/share/ca-certificates/1114136.pem
	I0318 13:18:23.582370 1141442 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1114136.pem
	I0318 13:18:23.588600 1141442 command_runner.go:130] > 51391683
	I0318 13:18:23.588659 1141442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1114136.pem /etc/ssl/certs/51391683.0"
	I0318 13:18:23.598604 1141442 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:18:23.603685 1141442 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:18:23.603707 1141442 command_runner.go:130] >   Size: 1164      	Blocks: 8          IO Block: 4096   regular file
	I0318 13:18:23.603713 1141442 command_runner.go:130] > Device: 253,1	Inode: 8385597     Links: 1
	I0318 13:18:23.603720 1141442 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0318 13:18:23.603731 1141442 command_runner.go:130] > Access: 2024-03-18 13:11:52.858908190 +0000
	I0318 13:18:23.603743 1141442 command_runner.go:130] > Modify: 2024-03-18 13:11:52.858908190 +0000
	I0318 13:18:23.603752 1141442 command_runner.go:130] > Change: 2024-03-18 13:11:52.858908190 +0000
	I0318 13:18:23.603763 1141442 command_runner.go:130] >  Birth: 2024-03-18 13:11:52.858908190 +0000
	I0318 13:18:23.603808 1141442 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 13:18:23.610063 1141442 command_runner.go:130] > Certificate will not expire
	I0318 13:18:23.610110 1141442 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 13:18:23.616035 1141442 command_runner.go:130] > Certificate will not expire
	I0318 13:18:23.616223 1141442 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 13:18:23.622125 1141442 command_runner.go:130] > Certificate will not expire
	I0318 13:18:23.622275 1141442 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 13:18:23.628581 1141442 command_runner.go:130] > Certificate will not expire
	I0318 13:18:23.628782 1141442 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 13:18:23.634703 1141442 command_runner.go:130] > Certificate will not expire
	I0318 13:18:23.635047 1141442 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 13:18:23.640862 1141442 command_runner.go:130] > Certificate will not expire
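	The repeated "openssl x509 -noout -in <cert> -checkend 86400" runs above ask whether each certificate expires within the next 86400 seconds (24 hours); "Certificate will not expire" means it does not. A minimal Go sketch of the same check, assuming a hypothetical locally readable PEM file cert.pem (on the node the checked files live under /var/lib/minikube/certs):

		package main

		import (
			"crypto/x509"
			"encoding/pem"
			"fmt"
			"os"
			"time"
		)

		func main() {
			// cert.pem is a hypothetical local path standing in for one of the
			// certificates checked above.
			data, err := os.ReadFile("cert.pem")
			if err != nil {
				fmt.Println("read:", err)
				return
			}
			block, _ := pem.Decode(data)
			if block == nil {
				fmt.Println("no PEM block found")
				return
			}
			cert, err := x509.ParseCertificate(block.Bytes)
			if err != nil {
				fmt.Println("parse:", err)
				return
			}
			// Equivalent of `openssl x509 -checkend 86400`: does the certificate's
			// NotAfter fall within the next 86400 seconds?
			if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
				fmt.Println("Certificate will expire")
			} else {
				fmt.Println("Certificate will not expire")
			}
		}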
	I0318 13:18:23.641101 1141442 kubeadm.go:391] StartCluster: {Name:multinode-229365 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.
4 ClusterName:multinode-229365 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.156 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.29 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.34 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fa
lse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:18:23.641260 1141442 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 13:18:23.641310 1141442 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:18:23.680206 1141442 command_runner.go:130] > b7d6113ae413de49195180cfe7d52eca2dc3e2cfe0e7c67300040abb9a92471a
	I0318 13:18:23.680248 1141442 command_runner.go:130] > 983530ca0390fdf5130f74c9b63f92d0fa66ed5b2294b626a39470023f12d2ab
	I0318 13:18:23.680257 1141442 command_runner.go:130] > 1ed0bf4243c867d1a2add9c16e6e71ce1ae77906b2c6321cbe11b1c33a9d196b
	I0318 13:18:23.680267 1141442 command_runner.go:130] > e592bd1c5e3a49aa85b970f22ac18459d8194ce47bc678edd833610ba2a2c25e
	I0318 13:18:23.680274 1141442 command_runner.go:130] > be9f7f6d2ab2c134a06b968b0872b8cb611247f92c6cb9290cc429fd8df53875
	I0318 13:18:23.680280 1141442 command_runner.go:130] > 2f3ec2a2b2ec33e092d68609e9f20300ee6f43760a0bc28aa75e7918936f8536
	I0318 13:18:23.680284 1141442 command_runner.go:130] > 19006263151a38c6b497fb97e6b2cb21eefd9842ec9e068c25547ce8d19daf26
	I0318 13:18:23.680291 1141442 command_runner.go:130] > 07436321da95fc350373a1cacd27efc68f68a043a287e0def7655c4e07ace1f1
	I0318 13:18:23.680319 1141442 cri.go:89] found id: "b7d6113ae413de49195180cfe7d52eca2dc3e2cfe0e7c67300040abb9a92471a"
	I0318 13:18:23.680341 1141442 cri.go:89] found id: "983530ca0390fdf5130f74c9b63f92d0fa66ed5b2294b626a39470023f12d2ab"
	I0318 13:18:23.680347 1141442 cri.go:89] found id: "1ed0bf4243c867d1a2add9c16e6e71ce1ae77906b2c6321cbe11b1c33a9d196b"
	I0318 13:18:23.680353 1141442 cri.go:89] found id: "e592bd1c5e3a49aa85b970f22ac18459d8194ce47bc678edd833610ba2a2c25e"
	I0318 13:18:23.680361 1141442 cri.go:89] found id: "be9f7f6d2ab2c134a06b968b0872b8cb611247f92c6cb9290cc429fd8df53875"
	I0318 13:18:23.680365 1141442 cri.go:89] found id: "2f3ec2a2b2ec33e092d68609e9f20300ee6f43760a0bc28aa75e7918936f8536"
	I0318 13:18:23.680368 1141442 cri.go:89] found id: "19006263151a38c6b497fb97e6b2cb21eefd9842ec9e068c25547ce8d19daf26"
	I0318 13:18:23.680371 1141442 cri.go:89] found id: "07436321da95fc350373a1cacd27efc68f68a043a287e0def7655c4e07ace1f1"
	I0318 13:18:23.680373 1141442 cri.go:89] found id: ""
	I0318 13:18:23.680416 1141442 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Mar 18 13:19:50 multinode-229365 crio[2872]: time="2024-03-18 13:19:50.516252577Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710767990516222841,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=034a19d4-3d84-41a0-831f-740daf706282 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:19:50 multinode-229365 crio[2872]: time="2024-03-18 13:19:50.517225938Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ea91902c-926c-4c84-b146-83a90b67e34f name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:19:50 multinode-229365 crio[2872]: time="2024-03-18 13:19:50.517288879Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ea91902c-926c-4c84-b146-83a90b67e34f name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:19:50 multinode-229365 crio[2872]: time="2024-03-18 13:19:50.517615865Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16c2068522aadd44d069b85237c1ecc8b4aa99c5f143694257bb68071e7967b9,PodSandboxId:52bcdb9819669cf60e69440a744d314a8bc62735238358f2d62b59c9020a78d3,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710767943364002170,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-cc5z6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62b1b7d-04c0-47fb-9ec9-6d6e34d11c4d,},Annotations:map[string]string{io.kubernetes.container.hash: d237e23b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b79bb5b5fff7b50b0330cb014a1df22e84215a782ada7bc3fa6966a0c064f000,PodSandboxId:aa84ed1648c5e9b1bb11ded414f0f616785af4a62fa8b96211146138b7f6f385,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710767909804414389,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xcffd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a92bfa0e-6f47-44a9-a32c-9628f567e5bc,},Annotations:map[string]string{io.kubernetes.container.hash: ae615961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7f87f713de60a9fb094193e213dd7121bd45eacef190e08fa49305b85efca22,PodSandboxId:3a6e62bea40f4ea24d7b82d069673eadf043c3f3b591d38308f2e459f1dbf1f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710767909726012946,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c6dnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c6e16db-16e4-468f-919c-df4c54cf0e94,},Annotations:map[string]string{io.kubernetes.container.hash: 18cee1b5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae20eb2a607f0ee926bf3b398451e16c7fa85914d2376affeb61811e29d664e9,PodSandboxId:a78d906e90628ab24920dce8b71512eaa1d236a238f3524aa3485e08904fa4bb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710767909601462018,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9702ef6-2066-470d-a8c9-d0857dc8b63a,},A
nnotations:map[string]string{io.kubernetes.container.hash: 598930d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:151dba6a079a7fdc667d950d8f470b7279ae09ee891e6b0aa1c49a5bfd50ad51,PodSandboxId:fefe9ea6b3d56f26b9ae0847168a70c2da186316d7f72cf42c0161ba4a77be38,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710767909552436280,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vdnsn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e762a2b-2d25-4f3e-8860-192c60a97ad8,},Annotations:map[string]string{io.k
ubernetes.container.hash: 338e6394,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84cd5e0c1d2fd1fecc58b212fc8ad4cfbcc77bc38679000d8a6752bc84f8db10,PodSandboxId:dbe69e40c879cd2c1e4d24f5cd826a5c498e0c68a30660925eb4e8eef2374cbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710767905920326537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47b9b389eeab8ea23a39be0a8c622392,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dceda195e3c4ef42a31a2a4418cb8dd7c5f9c0198cac9073992229ffde86404,PodSandboxId:66a24839549dc3421257e5b9f62ae1bc589abffe5a395f824ef9eeee21331c32,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710767905717294793,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 131ed49275b5405a33eedc6996906d41,},Annotations:map[string]string{io.ku
bernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:909de68b9ffeb63cdc62969865653b738c51939e0f04763d3dc44ecac47ca541,PodSandboxId:860e20b56d1446198f88c8ea833b4f21133e32a9b08861754033cb7e0ede6ba2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710767905744930362,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 326c3bfa26902a35a907a995f7624593,},Annotations:map[string]string{io.kubernetes.container.hash: 36c66f2d,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dc67fccd3fe04505303c6c10812d6cf0de31a255788d0152635133cf2e7b60d,PodSandboxId:1610ef2a2e41b3b61bb2b6ce7dd86964c4ce2ff3bc438c4c0a9ae15bab1c833c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710767905703402118,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e2564c3f5ce1cdf5c73a3d12c95511,},Annotations:map[string]string{io.kubernetes.container.hash: 9a114cb2,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:422b25901a04956d84fb11b8f915766e21ccd6584b13e2c464d9de34b34be634,PodSandboxId:bcf600d198b03627f34dabd31df428dedc966e5cb3e8e22976ed87a67eabcc46,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710767590513172477,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-cc5z6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62b1b7d-04c0-47fb-9ec9-6d6e34d11c4d,},Annotations:map[string]string{io.kubernetes.container.hash: d237e23b,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7d6113ae413de49195180cfe7d52eca2dc3e2cfe0e7c67300040abb9a92471a,PodSandboxId:54beb227a8ffc4885427135a60029659de226e2514fb90466196e2c6e1a6c85e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710767543166488184,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c6dnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c6e16db-16e4-468f-919c-df4c54cf0e94,},Annotations:map[string]string{io.kubernetes.container.hash: 18cee1b5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:983530ca0390fdf5130f74c9b63f92d0fa66ed5b2294b626a39470023f12d2ab,PodSandboxId:2b91079d0b985aa4dbab7e328b0acbe4b8f84743d72e541c17662e32579bba63,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710767541637434678,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: e9702ef6-2066-470d-a8c9-d0857dc8b63a,},Annotations:map[string]string{io.kubernetes.container.hash: 598930d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ed0bf4243c867d1a2add9c16e6e71ce1ae77906b2c6321cbe11b1c33a9d196b,PodSandboxId:f26b8c9a42276850fa34573edf63a613e7105a4b040a6bb10bf8e442a9f0069b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710767539787396490,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xcffd,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: a92bfa0e-6f47-44a9-a32c-9628f567e5bc,},Annotations:map[string]string{io.kubernetes.container.hash: ae615961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e592bd1c5e3a49aa85b970f22ac18459d8194ce47bc678edd833610ba2a2c25e,PodSandboxId:f1816dbdb10636f1bbbb75614ab65cc8ae0624719f14209f8009e2a06bf49d15,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710767535975994605,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vdnsn,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 6e762a2b-2d25-4f3e-8860-192c60a97ad8,},Annotations:map[string]string{io.kubernetes.container.hash: 338e6394,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be9f7f6d2ab2c134a06b968b0872b8cb611247f92c6cb9290cc429fd8df53875,PodSandboxId:74a85ae0bbe5a56166484ef57a28b7e92efbb5a04af720641f6533db7329743d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710767516231094554,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-229365,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 131ed49275b5405a33eedc6996906d41,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f3ec2a2b2ec33e092d68609e9f20300ee6f43760a0bc28aa75e7918936f8536,PodSandboxId:22b007523ac2586f8b4f20da04f957131182efb0693bdb7ce17fd4e112b6c960,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710767516219506119,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 326c3bfa26902a35
a907a995f7624593,},Annotations:map[string]string{io.kubernetes.container.hash: 36c66f2d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19006263151a38c6b497fb97e6b2cb21eefd9842ec9e068c25547ce8d19daf26,PodSandboxId:c1cafd55af3f245ad89e571170c0779b6e72a1d96be5720a6240d2dd3f1924c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710767516192368029,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47b9b389eeab8ea23a39be0a8c62239
2,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07436321da95fc350373a1cacd27efc68f68a043a287e0def7655c4e07ace1f1,PodSandboxId:c9f7b9a8977124742aaee2192217bb7b805f41fb9e9c363984b88bb2926c4c07,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710767516081144223,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e2564c3f5ce1cdf5c73a3d12c95511,},Annotations
:map[string]string{io.kubernetes.container.hash: 9a114cb2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ea91902c-926c-4c84-b146-83a90b67e34f name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:19:50 multinode-229365 crio[2872]: time="2024-03-18 13:19:50.570297841Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=57457f07-3150-4f92-b722-75f72dd89dcc name=/runtime.v1.RuntimeService/Version
	Mar 18 13:19:50 multinode-229365 crio[2872]: time="2024-03-18 13:19:50.570370176Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=57457f07-3150-4f92-b722-75f72dd89dcc name=/runtime.v1.RuntimeService/Version
	Mar 18 13:19:50 multinode-229365 crio[2872]: time="2024-03-18 13:19:50.571885970Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8a809e37-61f7-4d7b-99a5-08def623a50d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:19:50 multinode-229365 crio[2872]: time="2024-03-18 13:19:50.572305389Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710767990572283532,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8a809e37-61f7-4d7b-99a5-08def623a50d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:19:50 multinode-229365 crio[2872]: time="2024-03-18 13:19:50.572870450Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fe085b84-877e-4fd1-899f-f9ef0ebab9b1 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:19:50 multinode-229365 crio[2872]: time="2024-03-18 13:19:50.572957265Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fe085b84-877e-4fd1-899f-f9ef0ebab9b1 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:19:50 multinode-229365 crio[2872]: time="2024-03-18 13:19:50.573875914Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16c2068522aadd44d069b85237c1ecc8b4aa99c5f143694257bb68071e7967b9,PodSandboxId:52bcdb9819669cf60e69440a744d314a8bc62735238358f2d62b59c9020a78d3,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710767943364002170,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-cc5z6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62b1b7d-04c0-47fb-9ec9-6d6e34d11c4d,},Annotations:map[string]string{io.kubernetes.container.hash: d237e23b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b79bb5b5fff7b50b0330cb014a1df22e84215a782ada7bc3fa6966a0c064f000,PodSandboxId:aa84ed1648c5e9b1bb11ded414f0f616785af4a62fa8b96211146138b7f6f385,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710767909804414389,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xcffd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a92bfa0e-6f47-44a9-a32c-9628f567e5bc,},Annotations:map[string]string{io.kubernetes.container.hash: ae615961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7f87f713de60a9fb094193e213dd7121bd45eacef190e08fa49305b85efca22,PodSandboxId:3a6e62bea40f4ea24d7b82d069673eadf043c3f3b591d38308f2e459f1dbf1f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710767909726012946,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c6dnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c6e16db-16e4-468f-919c-df4c54cf0e94,},Annotations:map[string]string{io.kubernetes.container.hash: 18cee1b5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae20eb2a607f0ee926bf3b398451e16c7fa85914d2376affeb61811e29d664e9,PodSandboxId:a78d906e90628ab24920dce8b71512eaa1d236a238f3524aa3485e08904fa4bb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710767909601462018,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9702ef6-2066-470d-a8c9-d0857dc8b63a,},A
nnotations:map[string]string{io.kubernetes.container.hash: 598930d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:151dba6a079a7fdc667d950d8f470b7279ae09ee891e6b0aa1c49a5bfd50ad51,PodSandboxId:fefe9ea6b3d56f26b9ae0847168a70c2da186316d7f72cf42c0161ba4a77be38,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710767909552436280,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vdnsn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e762a2b-2d25-4f3e-8860-192c60a97ad8,},Annotations:map[string]string{io.k
ubernetes.container.hash: 338e6394,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84cd5e0c1d2fd1fecc58b212fc8ad4cfbcc77bc38679000d8a6752bc84f8db10,PodSandboxId:dbe69e40c879cd2c1e4d24f5cd826a5c498e0c68a30660925eb4e8eef2374cbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710767905920326537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47b9b389eeab8ea23a39be0a8c622392,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dceda195e3c4ef42a31a2a4418cb8dd7c5f9c0198cac9073992229ffde86404,PodSandboxId:66a24839549dc3421257e5b9f62ae1bc589abffe5a395f824ef9eeee21331c32,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710767905717294793,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 131ed49275b5405a33eedc6996906d41,},Annotations:map[string]string{io.ku
bernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:909de68b9ffeb63cdc62969865653b738c51939e0f04763d3dc44ecac47ca541,PodSandboxId:860e20b56d1446198f88c8ea833b4f21133e32a9b08861754033cb7e0ede6ba2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710767905744930362,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 326c3bfa26902a35a907a995f7624593,},Annotations:map[string]string{io.kubernetes.container.hash: 36c66f2d,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dc67fccd3fe04505303c6c10812d6cf0de31a255788d0152635133cf2e7b60d,PodSandboxId:1610ef2a2e41b3b61bb2b6ce7dd86964c4ce2ff3bc438c4c0a9ae15bab1c833c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710767905703402118,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e2564c3f5ce1cdf5c73a3d12c95511,},Annotations:map[string]string{io.kubernetes.container.hash: 9a114cb2,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:422b25901a04956d84fb11b8f915766e21ccd6584b13e2c464d9de34b34be634,PodSandboxId:bcf600d198b03627f34dabd31df428dedc966e5cb3e8e22976ed87a67eabcc46,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710767590513172477,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-cc5z6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62b1b7d-04c0-47fb-9ec9-6d6e34d11c4d,},Annotations:map[string]string{io.kubernetes.container.hash: d237e23b,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7d6113ae413de49195180cfe7d52eca2dc3e2cfe0e7c67300040abb9a92471a,PodSandboxId:54beb227a8ffc4885427135a60029659de226e2514fb90466196e2c6e1a6c85e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710767543166488184,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c6dnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c6e16db-16e4-468f-919c-df4c54cf0e94,},Annotations:map[string]string{io.kubernetes.container.hash: 18cee1b5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:983530ca0390fdf5130f74c9b63f92d0fa66ed5b2294b626a39470023f12d2ab,PodSandboxId:2b91079d0b985aa4dbab7e328b0acbe4b8f84743d72e541c17662e32579bba63,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710767541637434678,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: e9702ef6-2066-470d-a8c9-d0857dc8b63a,},Annotations:map[string]string{io.kubernetes.container.hash: 598930d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ed0bf4243c867d1a2add9c16e6e71ce1ae77906b2c6321cbe11b1c33a9d196b,PodSandboxId:f26b8c9a42276850fa34573edf63a613e7105a4b040a6bb10bf8e442a9f0069b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710767539787396490,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xcffd,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: a92bfa0e-6f47-44a9-a32c-9628f567e5bc,},Annotations:map[string]string{io.kubernetes.container.hash: ae615961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e592bd1c5e3a49aa85b970f22ac18459d8194ce47bc678edd833610ba2a2c25e,PodSandboxId:f1816dbdb10636f1bbbb75614ab65cc8ae0624719f14209f8009e2a06bf49d15,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710767535975994605,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vdnsn,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 6e762a2b-2d25-4f3e-8860-192c60a97ad8,},Annotations:map[string]string{io.kubernetes.container.hash: 338e6394,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be9f7f6d2ab2c134a06b968b0872b8cb611247f92c6cb9290cc429fd8df53875,PodSandboxId:74a85ae0bbe5a56166484ef57a28b7e92efbb5a04af720641f6533db7329743d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710767516231094554,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-229365,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 131ed49275b5405a33eedc6996906d41,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f3ec2a2b2ec33e092d68609e9f20300ee6f43760a0bc28aa75e7918936f8536,PodSandboxId:22b007523ac2586f8b4f20da04f957131182efb0693bdb7ce17fd4e112b6c960,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710767516219506119,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 326c3bfa26902a35
a907a995f7624593,},Annotations:map[string]string{io.kubernetes.container.hash: 36c66f2d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19006263151a38c6b497fb97e6b2cb21eefd9842ec9e068c25547ce8d19daf26,PodSandboxId:c1cafd55af3f245ad89e571170c0779b6e72a1d96be5720a6240d2dd3f1924c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710767516192368029,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47b9b389eeab8ea23a39be0a8c62239
2,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07436321da95fc350373a1cacd27efc68f68a043a287e0def7655c4e07ace1f1,PodSandboxId:c9f7b9a8977124742aaee2192217bb7b805f41fb9e9c363984b88bb2926c4c07,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710767516081144223,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e2564c3f5ce1cdf5c73a3d12c95511,},Annotations
:map[string]string{io.kubernetes.container.hash: 9a114cb2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fe085b84-877e-4fd1-899f-f9ef0ebab9b1 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:19:50 multinode-229365 crio[2872]: time="2024-03-18 13:19:50.622765731Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=65dacd78-c7e8-4fa2-8142-53cce3462aa1 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:19:50 multinode-229365 crio[2872]: time="2024-03-18 13:19:50.622903600Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=65dacd78-c7e8-4fa2-8142-53cce3462aa1 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:19:50 multinode-229365 crio[2872]: time="2024-03-18 13:19:50.624014664Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9e117ea8-ecb6-4431-a3d7-bb8019bb9676 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:19:50 multinode-229365 crio[2872]: time="2024-03-18 13:19:50.624404878Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710767990624384679,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9e117ea8-ecb6-4431-a3d7-bb8019bb9676 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:19:50 multinode-229365 crio[2872]: time="2024-03-18 13:19:50.624930505Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e09eb837-4b30-4ee1-9bc7-009cf2d6058f name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:19:50 multinode-229365 crio[2872]: time="2024-03-18 13:19:50.625019783Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e09eb837-4b30-4ee1-9bc7-009cf2d6058f name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:19:50 multinode-229365 crio[2872]: time="2024-03-18 13:19:50.625355702Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16c2068522aadd44d069b85237c1ecc8b4aa99c5f143694257bb68071e7967b9,PodSandboxId:52bcdb9819669cf60e69440a744d314a8bc62735238358f2d62b59c9020a78d3,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710767943364002170,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-cc5z6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62b1b7d-04c0-47fb-9ec9-6d6e34d11c4d,},Annotations:map[string]string{io.kubernetes.container.hash: d237e23b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b79bb5b5fff7b50b0330cb014a1df22e84215a782ada7bc3fa6966a0c064f000,PodSandboxId:aa84ed1648c5e9b1bb11ded414f0f616785af4a62fa8b96211146138b7f6f385,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710767909804414389,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xcffd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a92bfa0e-6f47-44a9-a32c-9628f567e5bc,},Annotations:map[string]string{io.kubernetes.container.hash: ae615961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7f87f713de60a9fb094193e213dd7121bd45eacef190e08fa49305b85efca22,PodSandboxId:3a6e62bea40f4ea24d7b82d069673eadf043c3f3b591d38308f2e459f1dbf1f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710767909726012946,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c6dnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c6e16db-16e4-468f-919c-df4c54cf0e94,},Annotations:map[string]string{io.kubernetes.container.hash: 18cee1b5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae20eb2a607f0ee926bf3b398451e16c7fa85914d2376affeb61811e29d664e9,PodSandboxId:a78d906e90628ab24920dce8b71512eaa1d236a238f3524aa3485e08904fa4bb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710767909601462018,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9702ef6-2066-470d-a8c9-d0857dc8b63a,},A
nnotations:map[string]string{io.kubernetes.container.hash: 598930d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:151dba6a079a7fdc667d950d8f470b7279ae09ee891e6b0aa1c49a5bfd50ad51,PodSandboxId:fefe9ea6b3d56f26b9ae0847168a70c2da186316d7f72cf42c0161ba4a77be38,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710767909552436280,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vdnsn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e762a2b-2d25-4f3e-8860-192c60a97ad8,},Annotations:map[string]string{io.k
ubernetes.container.hash: 338e6394,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84cd5e0c1d2fd1fecc58b212fc8ad4cfbcc77bc38679000d8a6752bc84f8db10,PodSandboxId:dbe69e40c879cd2c1e4d24f5cd826a5c498e0c68a30660925eb4e8eef2374cbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710767905920326537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47b9b389eeab8ea23a39be0a8c622392,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dceda195e3c4ef42a31a2a4418cb8dd7c5f9c0198cac9073992229ffde86404,PodSandboxId:66a24839549dc3421257e5b9f62ae1bc589abffe5a395f824ef9eeee21331c32,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710767905717294793,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 131ed49275b5405a33eedc6996906d41,},Annotations:map[string]string{io.ku
bernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:909de68b9ffeb63cdc62969865653b738c51939e0f04763d3dc44ecac47ca541,PodSandboxId:860e20b56d1446198f88c8ea833b4f21133e32a9b08861754033cb7e0ede6ba2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710767905744930362,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 326c3bfa26902a35a907a995f7624593,},Annotations:map[string]string{io.kubernetes.container.hash: 36c66f2d,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dc67fccd3fe04505303c6c10812d6cf0de31a255788d0152635133cf2e7b60d,PodSandboxId:1610ef2a2e41b3b61bb2b6ce7dd86964c4ce2ff3bc438c4c0a9ae15bab1c833c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710767905703402118,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e2564c3f5ce1cdf5c73a3d12c95511,},Annotations:map[string]string{io.kubernetes.container.hash: 9a114cb2,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:422b25901a04956d84fb11b8f915766e21ccd6584b13e2c464d9de34b34be634,PodSandboxId:bcf600d198b03627f34dabd31df428dedc966e5cb3e8e22976ed87a67eabcc46,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710767590513172477,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-cc5z6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62b1b7d-04c0-47fb-9ec9-6d6e34d11c4d,},Annotations:map[string]string{io.kubernetes.container.hash: d237e23b,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7d6113ae413de49195180cfe7d52eca2dc3e2cfe0e7c67300040abb9a92471a,PodSandboxId:54beb227a8ffc4885427135a60029659de226e2514fb90466196e2c6e1a6c85e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710767543166488184,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c6dnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c6e16db-16e4-468f-919c-df4c54cf0e94,},Annotations:map[string]string{io.kubernetes.container.hash: 18cee1b5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:983530ca0390fdf5130f74c9b63f92d0fa66ed5b2294b626a39470023f12d2ab,PodSandboxId:2b91079d0b985aa4dbab7e328b0acbe4b8f84743d72e541c17662e32579bba63,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710767541637434678,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: e9702ef6-2066-470d-a8c9-d0857dc8b63a,},Annotations:map[string]string{io.kubernetes.container.hash: 598930d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ed0bf4243c867d1a2add9c16e6e71ce1ae77906b2c6321cbe11b1c33a9d196b,PodSandboxId:f26b8c9a42276850fa34573edf63a613e7105a4b040a6bb10bf8e442a9f0069b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710767539787396490,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xcffd,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: a92bfa0e-6f47-44a9-a32c-9628f567e5bc,},Annotations:map[string]string{io.kubernetes.container.hash: ae615961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e592bd1c5e3a49aa85b970f22ac18459d8194ce47bc678edd833610ba2a2c25e,PodSandboxId:f1816dbdb10636f1bbbb75614ab65cc8ae0624719f14209f8009e2a06bf49d15,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710767535975994605,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vdnsn,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 6e762a2b-2d25-4f3e-8860-192c60a97ad8,},Annotations:map[string]string{io.kubernetes.container.hash: 338e6394,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be9f7f6d2ab2c134a06b968b0872b8cb611247f92c6cb9290cc429fd8df53875,PodSandboxId:74a85ae0bbe5a56166484ef57a28b7e92efbb5a04af720641f6533db7329743d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710767516231094554,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-229365,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 131ed49275b5405a33eedc6996906d41,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f3ec2a2b2ec33e092d68609e9f20300ee6f43760a0bc28aa75e7918936f8536,PodSandboxId:22b007523ac2586f8b4f20da04f957131182efb0693bdb7ce17fd4e112b6c960,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710767516219506119,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 326c3bfa26902a35
a907a995f7624593,},Annotations:map[string]string{io.kubernetes.container.hash: 36c66f2d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19006263151a38c6b497fb97e6b2cb21eefd9842ec9e068c25547ce8d19daf26,PodSandboxId:c1cafd55af3f245ad89e571170c0779b6e72a1d96be5720a6240d2dd3f1924c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710767516192368029,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47b9b389eeab8ea23a39be0a8c62239
2,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07436321da95fc350373a1cacd27efc68f68a043a287e0def7655c4e07ace1f1,PodSandboxId:c9f7b9a8977124742aaee2192217bb7b805f41fb9e9c363984b88bb2926c4c07,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710767516081144223,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e2564c3f5ce1cdf5c73a3d12c95511,},Annotations
:map[string]string{io.kubernetes.container.hash: 9a114cb2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e09eb837-4b30-4ee1-9bc7-009cf2d6058f name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:19:50 multinode-229365 crio[2872]: time="2024-03-18 13:19:50.678927328Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=60cbbeb6-d3ac-4994-b843-6ba4e37aa0ce name=/runtime.v1.RuntimeService/Version
	Mar 18 13:19:50 multinode-229365 crio[2872]: time="2024-03-18 13:19:50.679038069Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=60cbbeb6-d3ac-4994-b843-6ba4e37aa0ce name=/runtime.v1.RuntimeService/Version
	Mar 18 13:19:50 multinode-229365 crio[2872]: time="2024-03-18 13:19:50.680417236Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=758c9bc3-904e-4c89-9e41-199bc3df3c72 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:19:50 multinode-229365 crio[2872]: time="2024-03-18 13:19:50.681061140Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710767990681035374,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=758c9bc3-904e-4c89-9e41-199bc3df3c72 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:19:50 multinode-229365 crio[2872]: time="2024-03-18 13:19:50.682175149Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9340a3f1-ec4c-4e9d-afd8-975fbb8e1b8c name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:19:50 multinode-229365 crio[2872]: time="2024-03-18 13:19:50.682519451Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9340a3f1-ec4c-4e9d-afd8-975fbb8e1b8c name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:19:50 multinode-229365 crio[2872]: time="2024-03-18 13:19:50.683735416Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16c2068522aadd44d069b85237c1ecc8b4aa99c5f143694257bb68071e7967b9,PodSandboxId:52bcdb9819669cf60e69440a744d314a8bc62735238358f2d62b59c9020a78d3,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710767943364002170,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-cc5z6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62b1b7d-04c0-47fb-9ec9-6d6e34d11c4d,},Annotations:map[string]string{io.kubernetes.container.hash: d237e23b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b79bb5b5fff7b50b0330cb014a1df22e84215a782ada7bc3fa6966a0c064f000,PodSandboxId:aa84ed1648c5e9b1bb11ded414f0f616785af4a62fa8b96211146138b7f6f385,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710767909804414389,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xcffd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a92bfa0e-6f47-44a9-a32c-9628f567e5bc,},Annotations:map[string]string{io.kubernetes.container.hash: ae615961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7f87f713de60a9fb094193e213dd7121bd45eacef190e08fa49305b85efca22,PodSandboxId:3a6e62bea40f4ea24d7b82d069673eadf043c3f3b591d38308f2e459f1dbf1f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710767909726012946,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c6dnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c6e16db-16e4-468f-919c-df4c54cf0e94,},Annotations:map[string]string{io.kubernetes.container.hash: 18cee1b5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae20eb2a607f0ee926bf3b398451e16c7fa85914d2376affeb61811e29d664e9,PodSandboxId:a78d906e90628ab24920dce8b71512eaa1d236a238f3524aa3485e08904fa4bb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710767909601462018,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9702ef6-2066-470d-a8c9-d0857dc8b63a,},A
nnotations:map[string]string{io.kubernetes.container.hash: 598930d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:151dba6a079a7fdc667d950d8f470b7279ae09ee891e6b0aa1c49a5bfd50ad51,PodSandboxId:fefe9ea6b3d56f26b9ae0847168a70c2da186316d7f72cf42c0161ba4a77be38,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710767909552436280,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vdnsn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e762a2b-2d25-4f3e-8860-192c60a97ad8,},Annotations:map[string]string{io.k
ubernetes.container.hash: 338e6394,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84cd5e0c1d2fd1fecc58b212fc8ad4cfbcc77bc38679000d8a6752bc84f8db10,PodSandboxId:dbe69e40c879cd2c1e4d24f5cd826a5c498e0c68a30660925eb4e8eef2374cbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710767905920326537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47b9b389eeab8ea23a39be0a8c622392,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dceda195e3c4ef42a31a2a4418cb8dd7c5f9c0198cac9073992229ffde86404,PodSandboxId:66a24839549dc3421257e5b9f62ae1bc589abffe5a395f824ef9eeee21331c32,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710767905717294793,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 131ed49275b5405a33eedc6996906d41,},Annotations:map[string]string{io.ku
bernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:909de68b9ffeb63cdc62969865653b738c51939e0f04763d3dc44ecac47ca541,PodSandboxId:860e20b56d1446198f88c8ea833b4f21133e32a9b08861754033cb7e0ede6ba2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710767905744930362,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 326c3bfa26902a35a907a995f7624593,},Annotations:map[string]string{io.kubernetes.container.hash: 36c66f2d,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dc67fccd3fe04505303c6c10812d6cf0de31a255788d0152635133cf2e7b60d,PodSandboxId:1610ef2a2e41b3b61bb2b6ce7dd86964c4ce2ff3bc438c4c0a9ae15bab1c833c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710767905703402118,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e2564c3f5ce1cdf5c73a3d12c95511,},Annotations:map[string]string{io.kubernetes.container.hash: 9a114cb2,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:422b25901a04956d84fb11b8f915766e21ccd6584b13e2c464d9de34b34be634,PodSandboxId:bcf600d198b03627f34dabd31df428dedc966e5cb3e8e22976ed87a67eabcc46,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710767590513172477,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-cc5z6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62b1b7d-04c0-47fb-9ec9-6d6e34d11c4d,},Annotations:map[string]string{io.kubernetes.container.hash: d237e23b,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7d6113ae413de49195180cfe7d52eca2dc3e2cfe0e7c67300040abb9a92471a,PodSandboxId:54beb227a8ffc4885427135a60029659de226e2514fb90466196e2c6e1a6c85e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710767543166488184,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c6dnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c6e16db-16e4-468f-919c-df4c54cf0e94,},Annotations:map[string]string{io.kubernetes.container.hash: 18cee1b5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:983530ca0390fdf5130f74c9b63f92d0fa66ed5b2294b626a39470023f12d2ab,PodSandboxId:2b91079d0b985aa4dbab7e328b0acbe4b8f84743d72e541c17662e32579bba63,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710767541637434678,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: e9702ef6-2066-470d-a8c9-d0857dc8b63a,},Annotations:map[string]string{io.kubernetes.container.hash: 598930d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ed0bf4243c867d1a2add9c16e6e71ce1ae77906b2c6321cbe11b1c33a9d196b,PodSandboxId:f26b8c9a42276850fa34573edf63a613e7105a4b040a6bb10bf8e442a9f0069b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710767539787396490,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xcffd,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: a92bfa0e-6f47-44a9-a32c-9628f567e5bc,},Annotations:map[string]string{io.kubernetes.container.hash: ae615961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e592bd1c5e3a49aa85b970f22ac18459d8194ce47bc678edd833610ba2a2c25e,PodSandboxId:f1816dbdb10636f1bbbb75614ab65cc8ae0624719f14209f8009e2a06bf49d15,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710767535975994605,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vdnsn,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 6e762a2b-2d25-4f3e-8860-192c60a97ad8,},Annotations:map[string]string{io.kubernetes.container.hash: 338e6394,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be9f7f6d2ab2c134a06b968b0872b8cb611247f92c6cb9290cc429fd8df53875,PodSandboxId:74a85ae0bbe5a56166484ef57a28b7e92efbb5a04af720641f6533db7329743d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710767516231094554,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-229365,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 131ed49275b5405a33eedc6996906d41,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f3ec2a2b2ec33e092d68609e9f20300ee6f43760a0bc28aa75e7918936f8536,PodSandboxId:22b007523ac2586f8b4f20da04f957131182efb0693bdb7ce17fd4e112b6c960,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710767516219506119,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 326c3bfa26902a35
a907a995f7624593,},Annotations:map[string]string{io.kubernetes.container.hash: 36c66f2d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19006263151a38c6b497fb97e6b2cb21eefd9842ec9e068c25547ce8d19daf26,PodSandboxId:c1cafd55af3f245ad89e571170c0779b6e72a1d96be5720a6240d2dd3f1924c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710767516192368029,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47b9b389eeab8ea23a39be0a8c62239
2,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07436321da95fc350373a1cacd27efc68f68a043a287e0def7655c4e07ace1f1,PodSandboxId:c9f7b9a8977124742aaee2192217bb7b805f41fb9e9c363984b88bb2926c4c07,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710767516081144223,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e2564c3f5ce1cdf5c73a3d12c95511,},Annotations
:map[string]string{io.kubernetes.container.hash: 9a114cb2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9340a3f1-ec4c-4e9d-afd8-975fbb8e1b8c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	16c2068522aad       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      47 seconds ago       Running             busybox                   1                   52bcdb9819669       busybox-5b5d89c9d6-cc5z6
	b79bb5b5fff7b       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               1                   aa84ed1648c5e       kindnet-xcffd
	a7f87f713de60       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      About a minute ago   Running             coredns                   1                   3a6e62bea40f4       coredns-5dd5756b68-c6dnv
	ae20eb2a607f0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   a78d906e90628       storage-provisioner
	151dba6a079a7       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      About a minute ago   Running             kube-proxy                1                   fefe9ea6b3d56       kube-proxy-vdnsn
	84cd5e0c1d2fd       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      About a minute ago   Running             kube-scheduler            1                   dbe69e40c879c       kube-scheduler-multinode-229365
	909de68b9ffeb       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      1                   860e20b56d144       etcd-multinode-229365
	2dceda195e3c4       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      About a minute ago   Running             kube-controller-manager   1                   66a24839549dc       kube-controller-manager-multinode-229365
	8dc67fccd3fe0       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      About a minute ago   Running             kube-apiserver            1                   1610ef2a2e41b       kube-apiserver-multinode-229365
	422b25901a049       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   6 minutes ago        Exited              busybox                   0                   bcf600d198b03       busybox-5b5d89c9d6-cc5z6
	b7d6113ae413d       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      7 minutes ago        Exited              coredns                   0                   54beb227a8ffc       coredns-5dd5756b68-c6dnv
	983530ca0390f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   2b91079d0b985       storage-provisioner
	1ed0bf4243c86       docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988    7 minutes ago        Exited              kindnet-cni               0                   f26b8c9a42276       kindnet-xcffd
	e592bd1c5e3a4       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      7 minutes ago        Exited              kube-proxy                0                   f1816dbdb1063       kube-proxy-vdnsn
	be9f7f6d2ab2c       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      7 minutes ago        Exited              kube-controller-manager   0                   74a85ae0bbe5a       kube-controller-manager-multinode-229365
	2f3ec2a2b2ec3       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      7 minutes ago        Exited              etcd                      0                   22b007523ac25       etcd-multinode-229365
	19006263151a3       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      7 minutes ago        Exited              kube-scheduler            0                   c1cafd55af3f2       kube-scheduler-multinode-229365
	07436321da95f       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      7 minutes ago        Exited              kube-apiserver            0                   c9f7b9a897712       kube-apiserver-multinode-229365
	
	
	==> coredns [a7f87f713de60a9fb094193e213dd7121bd45eacef190e08fa49305b85efca22] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:39283 - 1170 "HINFO IN 8233788883777066174.2102812111472599324. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015882894s
	
	
	==> coredns [b7d6113ae413de49195180cfe7d52eca2dc3e2cfe0e7c67300040abb9a92471a] <==
	[INFO] 10.244.1.2:47054 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001851632s
	[INFO] 10.244.1.2:43447 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000151811s
	[INFO] 10.244.1.2:48611 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105634s
	[INFO] 10.244.1.2:53643 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001681384s
	[INFO] 10.244.1.2:58877 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000140424s
	[INFO] 10.244.1.2:46078 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000123455s
	[INFO] 10.244.1.2:35238 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00013228s
	[INFO] 10.244.0.3:35340 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102391s
	[INFO] 10.244.0.3:46548 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010052s
	[INFO] 10.244.0.3:41780 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000072991s
	[INFO] 10.244.0.3:35378 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000270011s
	[INFO] 10.244.1.2:52811 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000175213s
	[INFO] 10.244.1.2:50087 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00015151s
	[INFO] 10.244.1.2:48793 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000169883s
	[INFO] 10.244.1.2:45004 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000074448s
	[INFO] 10.244.0.3:51339 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114263s
	[INFO] 10.244.0.3:37530 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000129803s
	[INFO] 10.244.0.3:49083 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000133048s
	[INFO] 10.244.0.3:36434 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000093106s
	[INFO] 10.244.1.2:42947 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117709s
	[INFO] 10.244.1.2:52006 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000097564s
	[INFO] 10.244.1.2:49270 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000094592s
	[INFO] 10.244.1.2:44667 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000247057s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-229365
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-229365
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	                    minikube.k8s.io/name=multinode-229365
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T13_12_03_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 13:11:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-229365
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 13:19:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 13:18:28 +0000   Mon, 18 Mar 2024 13:11:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 13:18:28 +0000   Mon, 18 Mar 2024 13:11:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 13:18:28 +0000   Mon, 18 Mar 2024 13:11:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 13:18:28 +0000   Mon, 18 Mar 2024 13:12:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.156
	  Hostname:    multinode-229365
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 14c11cba73134b11abfeb410fbee10f1
	  System UUID:                14c11cba-7313-4b11-abfe-b410fbee10f1
	  Boot ID:                    6c85392c-0c28-4837-8562-81688e187c36
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-cc5z6                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m44s
	  kube-system                 coredns-5dd5756b68-c6dnv                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m37s
	  kube-system                 etcd-multinode-229365                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m49s
	  kube-system                 kindnet-xcffd                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m36s
	  kube-system                 kube-apiserver-multinode-229365             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m49s
	  kube-system                 kube-controller-manager-multinode-229365    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m49s
	  kube-system                 kube-proxy-vdnsn                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m36s
	  kube-system                 kube-scheduler-multinode-229365             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m49s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 7m34s              kube-proxy       
	  Normal  Starting                 81s                kube-proxy       
	  Normal  NodeAllocatableEnforced  7m49s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m49s              kubelet          Node multinode-229365 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m49s              kubelet          Node multinode-229365 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m49s              kubelet          Node multinode-229365 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m49s              kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m37s              node-controller  Node multinode-229365 event: Registered Node multinode-229365 in Controller
	  Normal  NodeReady                7m30s              kubelet          Node multinode-229365 status is now: NodeReady
	  Normal  Starting                 87s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  86s (x8 over 86s)  kubelet          Node multinode-229365 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    86s (x8 over 86s)  kubelet          Node multinode-229365 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     86s (x7 over 86s)  kubelet          Node multinode-229365 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  86s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           70s                node-controller  Node multinode-229365 event: Registered Node multinode-229365 in Controller
	
	
	Name:               multinode-229365-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-229365-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	                    minikube.k8s.io/name=multinode-229365
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T13_19_09_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 13:19:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-229365-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 13:19:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 13:19:39 +0000   Mon, 18 Mar 2024 13:19:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 13:19:39 +0000   Mon, 18 Mar 2024 13:19:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 13:19:39 +0000   Mon, 18 Mar 2024 13:19:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 13:19:39 +0000   Mon, 18 Mar 2024 13:19:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.29
	  Hostname:    multinode-229365-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 cebf78e8a3c8406ea493310af8f889fb
	  System UUID:                cebf78e8-a3c8-406e-a493-310af8f889fb
	  Boot ID:                    e8100713-4525-4820-8b50-52b8c858acd6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-q6bt8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 kindnet-jmf7p               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m55s
	  kube-system                 kube-proxy-ll5m7            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 6m51s                  kube-proxy  
	  Normal  Starting                 39s                    kube-proxy  
	  Normal  NodeHasSufficientMemory  6m55s (x5 over 6m57s)  kubelet     Node multinode-229365-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m55s (x5 over 6m57s)  kubelet     Node multinode-229365-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m55s (x5 over 6m57s)  kubelet     Node multinode-229365-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                6m47s                  kubelet     Node multinode-229365-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  43s (x5 over 44s)      kubelet     Node multinode-229365-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s (x5 over 44s)      kubelet     Node multinode-229365-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s (x5 over 44s)      kubelet     Node multinode-229365-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                35s                    kubelet     Node multinode-229365-m02 status is now: NodeReady
	
	
	Name:               multinode-229365-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-229365-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	                    minikube.k8s.io/name=multinode-229365
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T13_19_40_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 13:19:39 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-229365-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 13:19:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 13:19:47 +0000   Mon, 18 Mar 2024 13:19:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 13:19:47 +0000   Mon, 18 Mar 2024 13:19:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 13:19:47 +0000   Mon, 18 Mar 2024 13:19:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 13:19:47 +0000   Mon, 18 Mar 2024 13:19:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.34
	  Hostname:    multinode-229365-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 c4161eef7af546c98f748431d9ce5f01
	  System UUID:                c4161eef-7af5-46c9-8f74-8431d9ce5f01
	  Boot ID:                    1917998c-8898-4749-a959-77cf4eb21d01
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-w5prk       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m4s
	  kube-system                 kube-proxy-kcrqn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 5m12s                  kube-proxy  
	  Normal  Starting                 5m58s                  kube-proxy  
	  Normal  Starting                 8s                     kube-proxy  
	  Normal  NodeHasNoDiskPressure    6m4s (x5 over 6m6s)    kubelet     Node multinode-229365-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m4s (x5 over 6m6s)    kubelet     Node multinode-229365-m03 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  6m4s (x5 over 6m6s)    kubelet     Node multinode-229365-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m54s                  kubelet     Node multinode-229365-m03 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    5m19s (x5 over 5m20s)  kubelet     Node multinode-229365-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m19s (x5 over 5m20s)  kubelet     Node multinode-229365-m03 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  5m19s (x5 over 5m20s)  kubelet     Node multinode-229365-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m8s                   kubelet     Node multinode-229365-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  12s (x5 over 13s)      kubelet     Node multinode-229365-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12s (x5 over 13s)      kubelet     Node multinode-229365-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12s (x5 over 13s)      kubelet     Node multinode-229365-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4s                     kubelet     Node multinode-229365-m03 status is now: NodeReady
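	
	The Capacity/Allocatable figures, Conditions, and per-pod request/limit tables above are standard node-status fields as printed by kubectl describe nodes. A minimal client-go sketch, assuming only a reachable cluster and the default kubeconfig path (both assumptions, not taken from this report), prints each node's Ready condition and allocatable CPU/memory:
	
	package main
	
	import (
		"context"
		"fmt"
		"path/filepath"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/homedir"
	)
	
	func main() {
		// Assumption: kubeconfig lives at the default location ~/.kube/config.
		kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
		config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			// The Ready condition is the same one shown in the Conditions tables above.
			ready := corev1.ConditionUnknown
			for _, c := range n.Status.Conditions {
				if c.Type == corev1.NodeReady {
					ready = c.Status
				}
			}
			fmt.Printf("%s Ready=%s cpu=%s memory=%s\n",
				n.Name, ready,
				n.Status.Allocatable.Cpu().String(),
				n.Status.Allocatable.Memory().String())
		}
	}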
	
	
	==> dmesg <==
	[  +0.065342] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.206634] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.143052] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.264670] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +5.316178] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +0.061646] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.062123] systemd-fstab-generator[939]: Ignoring "noauto" option for root device
	[  +1.103138] kauditd_printk_skb: 57 callbacks suppressed
	[Mar18 13:12] systemd-fstab-generator[1270]: Ignoring "noauto" option for root device
	[  +0.086304] kauditd_printk_skb: 30 callbacks suppressed
	[ +12.756200] systemd-fstab-generator[1459]: Ignoring "noauto" option for root device
	[  +0.127187] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.404043] kauditd_printk_skb: 56 callbacks suppressed
	[Mar18 13:13] kauditd_printk_skb: 18 callbacks suppressed
	[Mar18 13:18] systemd-fstab-generator[2794]: Ignoring "noauto" option for root device
	[  +0.174834] systemd-fstab-generator[2806]: Ignoring "noauto" option for root device
	[  +0.180142] systemd-fstab-generator[2820]: Ignoring "noauto" option for root device
	[  +0.142220] systemd-fstab-generator[2832]: Ignoring "noauto" option for root device
	[  +0.265692] systemd-fstab-generator[2856]: Ignoring "noauto" option for root device
	[  +0.781929] systemd-fstab-generator[2957]: Ignoring "noauto" option for root device
	[  +1.712960] systemd-fstab-generator[3079]: Ignoring "noauto" option for root device
	[  +4.646688] kauditd_printk_skb: 184 callbacks suppressed
	[ +12.039916] kauditd_printk_skb: 32 callbacks suppressed
	[  +0.755945] systemd-fstab-generator[3892]: Ignoring "noauto" option for root device
	[Mar18 13:19] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [2f3ec2a2b2ec33e092d68609e9f20300ee6f43760a0bc28aa75e7918936f8536] <==
	{"level":"warn","ts":"2024-03-18T13:13:45.999888Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T13:13:45.673255Z","time spent":"326.499636ms","remote":"127.0.0.1:55712","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1318,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/certificatesigningrequests/csr-p4tb6\" mod_revision:569 > success:<request_put:<key:\"/registry/certificatesigningrequests/csr-p4tb6\" value_size:1264 >> failure:<request_range:<key:\"/registry/certificatesigningrequests/csr-p4tb6\" > >"}
	{"level":"info","ts":"2024-03-18T13:13:46.000129Z","caller":"traceutil/trace.go:171","msg":"trace[1483217109] linearizableReadLoop","detail":"{readStateIndex:603; appliedIndex:602; }","duration":"325.991506ms","start":"2024-03-18T13:13:45.674118Z","end":"2024-03-18T13:13:46.00011Z","steps":["trace[1483217109] 'read index received'  (duration: 116.88235ms)","trace[1483217109] 'applied index is now lower than readState.Index'  (duration: 209.108013ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-18T13:13:46.000194Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"326.086657ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/certificatesigningrequests/csr-p4tb6\" ","response":"range_response_count:1 size:1333"}
	{"level":"info","ts":"2024-03-18T13:13:46.000212Z","caller":"traceutil/trace.go:171","msg":"trace[2145937851] range","detail":"{range_begin:/registry/certificatesigningrequests/csr-p4tb6; range_end:; response_count:1; response_revision:570; }","duration":"326.120015ms","start":"2024-03-18T13:13:45.674084Z","end":"2024-03-18T13:13:46.000204Z","steps":["trace[2145937851] 'agreement among raft nodes before linearized reading'  (duration: 326.069376ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T13:13:46.000234Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T13:13:45.674069Z","time spent":"326.159518ms","remote":"127.0.0.1:55712","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":1356,"request content":"key:\"/registry/certificatesigningrequests/csr-p4tb6\" "}
	{"level":"warn","ts":"2024-03-18T13:13:46.365631Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"209.375904ms","expected-duration":"100ms","prefix":"","request":"header:<ID:646985977988628741 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/certificatesigningrequests/csr-p4tb6\" mod_revision:570 > success:<request_put:<key:\"/registry/certificatesigningrequests/csr-p4tb6\" value_size:2297 >> failure:<request_range:<key:\"/registry/certificatesigningrequests/csr-p4tb6\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-18T13:13:46.365751Z","caller":"traceutil/trace.go:171","msg":"trace[1007431203] linearizableReadLoop","detail":"{readStateIndex:605; appliedIndex:604; }","duration":"232.004809ms","start":"2024-03-18T13:13:46.133736Z","end":"2024-03-18T13:13:46.365741Z","steps":["trace[1007431203] 'read index received'  (duration: 22.314723ms)","trace[1007431203] 'applied index is now lower than readState.Index'  (duration: 209.688969ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-18T13:13:46.365908Z","caller":"traceutil/trace.go:171","msg":"trace[1024785790] transaction","detail":"{read_only:false; response_revision:571; number_of_response:1; }","duration":"356.238544ms","start":"2024-03-18T13:13:46.009661Z","end":"2024-03-18T13:13:46.365899Z","steps":["trace[1024785790] 'process raft request'  (duration: 146.546257ms)","trace[1024785790] 'compare'  (duration: 209.042824ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-18T13:13:46.365981Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T13:13:46.009637Z","time spent":"356.310619ms","remote":"127.0.0.1:55712","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2351,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/certificatesigningrequests/csr-p4tb6\" mod_revision:570 > success:<request_put:<key:\"/registry/certificatesigningrequests/csr-p4tb6\" value_size:2297 >> failure:<request_range:<key:\"/registry/certificatesigningrequests/csr-p4tb6\" > >"}
	{"level":"warn","ts":"2024-03-18T13:13:46.366021Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"231.141342ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-18T13:13:46.366094Z","caller":"traceutil/trace.go:171","msg":"trace[1209698970] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:571; }","duration":"231.219268ms","start":"2024-03-18T13:13:46.134866Z","end":"2024-03-18T13:13:46.366085Z","steps":["trace[1209698970] 'agreement among raft nodes before linearized reading'  (duration: 231.114262ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T13:13:46.36623Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"232.555484ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices/\" range_end:\"/registry/apiregistration.k8s.io/apiservices0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-03-18T13:13:46.366278Z","caller":"traceutil/trace.go:171","msg":"trace[550660470] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/; range_end:/registry/apiregistration.k8s.io/apiservices0; response_count:0; response_revision:571; }","duration":"232.606295ms","start":"2024-03-18T13:13:46.133665Z","end":"2024-03-18T13:13:46.366272Z","steps":["trace[550660470] 'agreement among raft nodes before linearized reading'  (duration: 232.5385ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T13:14:02.626858Z","caller":"traceutil/trace.go:171","msg":"trace[1187718306] transaction","detail":"{read_only:false; response_revision:630; number_of_response:1; }","duration":"119.572765ms","start":"2024-03-18T13:14:02.507209Z","end":"2024-03-18T13:14:02.626782Z","steps":["trace[1187718306] 'process raft request'  (duration: 119.410239ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T13:16:49.969402Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-03-18T13:16:49.969618Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"multinode-229365","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.156:2380"],"advertise-client-urls":["https://192.168.39.156:2379"]}
	{"level":"warn","ts":"2024-03-18T13:16:49.969782Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-18T13:16:49.969972Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	WARNING: 2024/03/18 13:16:49 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-03-18T13:16:50.050911Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.156:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-18T13:16:50.050976Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.156:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-18T13:16:50.051038Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"45ea9d8f303c08fa","current-leader-member-id":"45ea9d8f303c08fa"}
	{"level":"info","ts":"2024-03-18T13:16:50.053637Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.156:2380"}
	{"level":"info","ts":"2024-03-18T13:16:50.053848Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.156:2380"}
	{"level":"info","ts":"2024-03-18T13:16:50.053915Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"multinode-229365","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.156:2380"],"advertise-client-urls":["https://192.168.39.156:2379"]}
	
	
	==> etcd [909de68b9ffeb63cdc62969865653b738c51939e0f04763d3dc44ecac47ca541] <==
	{"level":"info","ts":"2024-03-18T13:18:26.129621Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-18T13:18:26.129771Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-03-18T13:18:26.148426Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-18T13:18:26.157199Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-18T13:18:26.157222Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-18T13:18:26.157511Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.156:2380"}
	{"level":"info","ts":"2024-03-18T13:18:26.157554Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.156:2380"}
	{"level":"info","ts":"2024-03-18T13:18:26.162191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"45ea9d8f303c08fa switched to configuration voters=(5038012371482446074)"}
	{"level":"info","ts":"2024-03-18T13:18:26.167296Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d1f5bcbb1e4f2572","local-member-id":"45ea9d8f303c08fa","added-peer-id":"45ea9d8f303c08fa","added-peer-peer-urls":["https://192.168.39.156:2380"]}
	{"level":"info","ts":"2024-03-18T13:18:26.16747Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d1f5bcbb1e4f2572","local-member-id":"45ea9d8f303c08fa","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T13:18:26.167534Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T13:18:27.182968Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"45ea9d8f303c08fa is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-18T13:18:27.183048Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"45ea9d8f303c08fa became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-18T13:18:27.183085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"45ea9d8f303c08fa received MsgPreVoteResp from 45ea9d8f303c08fa at term 2"}
	{"level":"info","ts":"2024-03-18T13:18:27.183098Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"45ea9d8f303c08fa became candidate at term 3"}
	{"level":"info","ts":"2024-03-18T13:18:27.183103Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"45ea9d8f303c08fa received MsgVoteResp from 45ea9d8f303c08fa at term 3"}
	{"level":"info","ts":"2024-03-18T13:18:27.183113Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"45ea9d8f303c08fa became leader at term 3"}
	{"level":"info","ts":"2024-03-18T13:18:27.18319Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 45ea9d8f303c08fa elected leader 45ea9d8f303c08fa at term 3"}
	{"level":"info","ts":"2024-03-18T13:18:27.185877Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"45ea9d8f303c08fa","local-member-attributes":"{Name:multinode-229365 ClientURLs:[https://192.168.39.156:2379]}","request-path":"/0/members/45ea9d8f303c08fa/attributes","cluster-id":"d1f5bcbb1e4f2572","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-18T13:18:27.186044Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T13:18:27.186204Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T13:18:27.187473Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-18T13:18:27.187656Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-18T13:18:27.187698Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-18T13:18:27.220665Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.156:2379"}
	
	
	==> kernel <==
	 13:19:51 up 8 min,  0 users,  load average: 0.58, 0.39, 0.20
	Linux multinode-229365 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1ed0bf4243c867d1a2add9c16e6e71ce1ae77906b2c6321cbe11b1c33a9d196b] <==
	I0318 13:16:00.892681       1 main.go:250] Node multinode-229365-m03 has CIDR [10.244.3.0/24] 
	I0318 13:16:10.901593       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0318 13:16:10.901883       1 main.go:227] handling current node
	I0318 13:16:10.901956       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0318 13:16:10.901992       1 main.go:250] Node multinode-229365-m02 has CIDR [10.244.1.0/24] 
	I0318 13:16:10.902136       1 main.go:223] Handling node with IPs: map[192.168.39.34:{}]
	I0318 13:16:10.902168       1 main.go:250] Node multinode-229365-m03 has CIDR [10.244.3.0/24] 
	I0318 13:16:20.915610       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0318 13:16:20.915660       1 main.go:227] handling current node
	I0318 13:16:20.915670       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0318 13:16:20.915676       1 main.go:250] Node multinode-229365-m02 has CIDR [10.244.1.0/24] 
	I0318 13:16:20.915870       1 main.go:223] Handling node with IPs: map[192.168.39.34:{}]
	I0318 13:16:20.915878       1 main.go:250] Node multinode-229365-m03 has CIDR [10.244.3.0/24] 
	I0318 13:16:30.923875       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0318 13:16:30.923927       1 main.go:227] handling current node
	I0318 13:16:30.923937       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0318 13:16:30.923943       1 main.go:250] Node multinode-229365-m02 has CIDR [10.244.1.0/24] 
	I0318 13:16:30.924134       1 main.go:223] Handling node with IPs: map[192.168.39.34:{}]
	I0318 13:16:30.924168       1 main.go:250] Node multinode-229365-m03 has CIDR [10.244.3.0/24] 
	I0318 13:16:40.941447       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0318 13:16:40.941507       1 main.go:227] handling current node
	I0318 13:16:40.941522       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0318 13:16:40.941529       1 main.go:250] Node multinode-229365-m02 has CIDR [10.244.1.0/24] 
	I0318 13:16:40.941672       1 main.go:223] Handling node with IPs: map[192.168.39.34:{}]
	I0318 13:16:40.941703       1 main.go:250] Node multinode-229365-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [b79bb5b5fff7b50b0330cb014a1df22e84215a782ada7bc3fa6966a0c064f000] <==
	I0318 13:19:10.613591       1 main.go:250] Node multinode-229365-m03 has CIDR [10.244.3.0/24] 
	I0318 13:19:20.627540       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0318 13:19:20.627604       1 main.go:227] handling current node
	I0318 13:19:20.627711       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0318 13:19:20.627892       1 main.go:250] Node multinode-229365-m02 has CIDR [10.244.1.0/24] 
	I0318 13:19:20.628283       1 main.go:223] Handling node with IPs: map[192.168.39.34:{}]
	I0318 13:19:20.628318       1 main.go:250] Node multinode-229365-m03 has CIDR [10.244.3.0/24] 
	I0318 13:19:30.635708       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0318 13:19:30.635947       1 main.go:227] handling current node
	I0318 13:19:30.635987       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0318 13:19:30.636010       1 main.go:250] Node multinode-229365-m02 has CIDR [10.244.1.0/24] 
	I0318 13:19:30.636177       1 main.go:223] Handling node with IPs: map[192.168.39.34:{}]
	I0318 13:19:30.636199       1 main.go:250] Node multinode-229365-m03 has CIDR [10.244.3.0/24] 
	I0318 13:19:40.656379       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0318 13:19:40.656648       1 main.go:227] handling current node
	I0318 13:19:40.656664       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0318 13:19:40.656670       1 main.go:250] Node multinode-229365-m02 has CIDR [10.244.1.0/24] 
	I0318 13:19:40.657028       1 main.go:223] Handling node with IPs: map[192.168.39.34:{}]
	I0318 13:19:40.657071       1 main.go:250] Node multinode-229365-m03 has CIDR [10.244.2.0/24] 
	I0318 13:19:50.663841       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0318 13:19:50.663894       1 main.go:227] handling current node
	I0318 13:19:50.663919       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0318 13:19:50.663925       1 main.go:250] Node multinode-229365-m02 has CIDR [10.244.1.0/24] 
	I0318 13:19:50.664034       1 main.go:223] Handling node with IPs: map[192.168.39.34:{}]
	I0318 13:19:50.664068       1 main.go:250] Node multinode-229365-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [07436321da95fc350373a1cacd27efc68f68a043a287e0def7655c4e07ace1f1] <==
	I0318 13:12:00.647409       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0318 13:12:00.687928       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0318 13:12:00.755528       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0318 13:12:00.763250       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.156]
	I0318 13:12:00.764271       1 controller.go:624] quota admission added evaluator for: endpoints
	I0318 13:12:00.768734       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0318 13:12:01.165684       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0318 13:12:02.372547       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0318 13:12:02.389966       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0318 13:12:02.409374       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0318 13:12:14.697602       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0318 13:12:15.162368       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0318 13:16:49.959846       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	E0318 13:16:49.990066       1 watcher.go:249] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0318 13:16:49.990169       1 watcher.go:249] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0318 13:16:49.990236       1 watcher.go:249] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0318 13:16:49.997033       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 13:16:49.997551       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 13:16:49.998012       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 13:16:50.007442       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 13:16:50.007898       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 13:16:50.008042       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 13:16:50.008746       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 13:16:50.009434       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 13:16:50.009538       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [8dc67fccd3fe04505303c6c10812d6cf0de31a255788d0152635133cf2e7b60d] <==
	I0318 13:18:28.577692       1 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller
	I0318 13:18:28.662454       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 13:18:28.662564       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 13:18:28.698380       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0318 13:18:28.753735       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0318 13:18:28.753904       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0318 13:18:28.757195       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0318 13:18:28.757517       1 aggregator.go:166] initial CRD sync complete...
	I0318 13:18:28.757558       1 autoregister_controller.go:141] Starting autoregister controller
	I0318 13:18:28.757565       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0318 13:18:28.757570       1 cache.go:39] Caches are synced for autoregister controller
	E0318 13:18:28.762093       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0318 13:18:28.765429       1 shared_informer.go:318] Caches are synced for configmaps
	I0318 13:18:28.766757       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0318 13:18:28.766845       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0318 13:18:28.777632       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0318 13:18:28.777783       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0318 13:18:29.582705       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0318 13:18:31.429282       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0318 13:18:31.557111       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0318 13:18:31.566323       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0318 13:18:31.642313       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0318 13:18:31.654675       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0318 13:18:41.407519       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0318 13:18:41.411989       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [2dceda195e3c4ef42a31a2a4418cb8dd7c5f9c0198cac9073992229ffde86404] <==
	I0318 13:19:03.233055       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="66.066518ms"
	I0318 13:19:03.242890       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="9.797401ms"
	I0318 13:19:03.265191       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="22.265072ms"
	I0318 13:19:03.265298       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="49.893µs"
	I0318 13:19:08.939409       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-229365-m02\" does not exist"
	I0318 13:19:08.941320       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-pjdnm" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-pjdnm"
	I0318 13:19:08.952076       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-229365-m02" podCIDRs=["10.244.1.0/24"]
	I0318 13:19:09.439071       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="66.605µs"
	I0318 13:19:09.451554       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="65.106µs"
	I0318 13:19:09.465114       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="70.375µs"
	I0318 13:19:09.498009       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="211.065µs"
	I0318 13:19:09.507933       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="166.388µs"
	I0318 13:19:09.512318       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="45.377µs"
	I0318 13:19:11.904865       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="78.001µs"
	I0318 13:19:16.641095       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-229365-m02"
	I0318 13:19:16.658704       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="38.499µs"
	I0318 13:19:16.674567       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="51.643µs"
	I0318 13:19:19.626379       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="6.368823ms"
	I0318 13:19:19.627020       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="49.574µs"
	I0318 13:19:21.365652       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-q6bt8" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-q6bt8"
	I0318 13:19:37.234157       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-229365-m02"
	I0318 13:19:39.751533       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-229365-m03\" does not exist"
	I0318 13:19:39.753993       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-229365-m02"
	I0318 13:19:39.768683       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-229365-m03" podCIDRs=["10.244.2.0/24"]
	I0318 13:19:47.428294       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-229365-m02"
	
	
	==> kube-controller-manager [be9f7f6d2ab2c134a06b968b0872b8cb611247f92c6cb9290cc429fd8df53875] <==
	I0318 13:13:11.705540       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="6.22142ms"
	I0318 13:13:11.707234       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="265.163µs"
	I0318 13:13:47.203542       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-229365-m02"
	I0318 13:13:47.204730       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-229365-m03\" does not exist"
	I0318 13:13:47.233563       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-w5prk"
	I0318 13:13:47.237262       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-kcrqn"
	I0318 13:13:47.240984       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-229365-m03" podCIDRs=["10.244.2.0/24"]
	I0318 13:13:49.815456       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-229365-m03"
	I0318 13:13:49.815709       1 event.go:307] "Event occurred" object="multinode-229365-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-229365-m03 event: Registered Node multinode-229365-m03 in Controller"
	I0318 13:13:57.742907       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-229365-m02"
	I0318 13:14:30.096076       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-229365-m02"
	I0318 13:14:32.885602       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-229365-m03\" does not exist"
	I0318 13:14:32.886947       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-229365-m02"
	I0318 13:14:32.907741       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-229365-m03" podCIDRs=["10.244.3.0/24"]
	I0318 13:14:43.322499       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-229365-m02"
	I0318 13:15:24.875440       1 event.go:307] "Event occurred" object="multinode-229365-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-229365-m03 status is now: NodeNotReady"
	I0318 13:15:24.876502       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-229365-m02"
	I0318 13:15:24.892477       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-kcrqn" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:15:24.909055       1 event.go:307] "Event occurred" object="kube-system/kindnet-w5prk" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:15:29.921328       1 event.go:307] "Event occurred" object="multinode-229365-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-229365-m02 status is now: NodeNotReady"
	I0318 13:15:29.939086       1 event.go:307] "Event occurred" object="kube-system/kindnet-jmf7p" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:15:29.963725       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-ll5m7" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:15:29.976875       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-pjdnm" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:15:29.983573       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="7.126664ms"
	I0318 13:15:29.984170       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="137.633µs"
	
	
	==> kube-proxy [151dba6a079a7fdc667d950d8f470b7279ae09ee891e6b0aa1c49a5bfd50ad51] <==
	I0318 13:18:29.898944       1 server_others.go:69] "Using iptables proxy"
	I0318 13:18:29.940379       1 node.go:141] Successfully retrieved node IP: 192.168.39.156
	I0318 13:18:30.022457       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 13:18:30.022599       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 13:18:30.026852       1 server_others.go:152] "Using iptables Proxier"
	I0318 13:18:30.027144       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 13:18:30.027555       1 server.go:846] "Version info" version="v1.28.4"
	I0318 13:18:30.027984       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:18:30.029583       1 config.go:188] "Starting service config controller"
	I0318 13:18:30.029878       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 13:18:30.030409       1 config.go:97] "Starting endpoint slice config controller"
	I0318 13:18:30.030522       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 13:18:30.032499       1 config.go:315] "Starting node config controller"
	I0318 13:18:30.033909       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 13:18:30.131017       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 13:18:30.131095       1 shared_informer.go:318] Caches are synced for service config
	I0318 13:18:30.135307       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [e592bd1c5e3a49aa85b970f22ac18459d8194ce47bc678edd833610ba2a2c25e] <==
	I0318 13:12:16.505084       1 server_others.go:69] "Using iptables proxy"
	I0318 13:12:16.531106       1 node.go:141] Successfully retrieved node IP: 192.168.39.156
	I0318 13:12:16.585366       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 13:12:16.585437       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 13:12:16.588343       1 server_others.go:152] "Using iptables Proxier"
	I0318 13:12:16.589205       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 13:12:16.589478       1 server.go:846] "Version info" version="v1.28.4"
	I0318 13:12:16.589515       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:12:16.591399       1 config.go:188] "Starting service config controller"
	I0318 13:12:16.592167       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 13:12:16.592264       1 config.go:315] "Starting node config controller"
	I0318 13:12:16.592302       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 13:12:16.593136       1 config.go:97] "Starting endpoint slice config controller"
	I0318 13:12:16.597060       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 13:12:16.693392       1 shared_informer.go:318] Caches are synced for node config
	I0318 13:12:16.693414       1 shared_informer.go:318] Caches are synced for service config
	I0318 13:12:16.697716       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [19006263151a38c6b497fb97e6b2cb21eefd9842ec9e068c25547ce8d19daf26] <==
	E0318 13:11:59.189920       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0318 13:11:59.187211       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0318 13:11:59.190220       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0318 13:12:00.035127       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0318 13:12:00.035328       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0318 13:12:00.043734       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0318 13:12:00.043902       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0318 13:12:00.270346       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0318 13:12:00.270724       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0318 13:12:00.296754       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0318 13:12:00.296906       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0318 13:12:00.325967       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0318 13:12:00.325992       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0318 13:12:00.339157       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0318 13:12:00.339346       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0318 13:12:00.351777       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0318 13:12:00.352312       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0318 13:12:00.352189       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0318 13:12:00.352545       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0318 13:12:00.365021       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0318 13:12:00.365080       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0318 13:12:02.178668       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 13:16:49.977285       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0318 13:16:49.983085       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0318 13:16:49.983513       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [84cd5e0c1d2fd1fecc58b212fc8ad4cfbcc77bc38679000d8a6752bc84f8db10] <==
	I0318 13:18:26.889385       1 serving.go:348] Generated self-signed cert in-memory
	W0318 13:18:28.654433       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0318 13:18:28.654519       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0318 13:18:28.654549       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0318 13:18:28.654573       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0318 13:18:28.705286       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0318 13:18:28.705333       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:18:28.707005       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0318 13:18:28.707146       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 13:18:28.707526       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0318 13:18:28.707629       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 13:18:28.807901       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 18 13:18:28 multinode-229365 kubelet[3086]: I0318 13:18:28.991179    3086 topology_manager.go:215] "Topology Admit Handler" podUID="6e762a2b-2d25-4f3e-8860-192c60a97ad8" podNamespace="kube-system" podName="kube-proxy-vdnsn"
	Mar 18 13:18:28 multinode-229365 kubelet[3086]: I0318 13:18:28.991361    3086 topology_manager.go:215] "Topology Admit Handler" podUID="e9702ef6-2066-470d-a8c9-d0857dc8b63a" podNamespace="kube-system" podName="storage-provisioner"
	Mar 18 13:18:28 multinode-229365 kubelet[3086]: I0318 13:18:28.991552    3086 topology_manager.go:215] "Topology Admit Handler" podUID="e62b1b7d-04c0-47fb-9ec9-6d6e34d11c4d" podNamespace="default" podName="busybox-5b5d89c9d6-cc5z6"
	Mar 18 13:18:29 multinode-229365 kubelet[3086]: I0318 13:18:29.005176    3086 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Mar 18 13:18:29 multinode-229365 kubelet[3086]: I0318 13:18:29.093033    3086 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a92bfa0e-6f47-44a9-a32c-9628f567e5bc-xtables-lock\") pod \"kindnet-xcffd\" (UID: \"a92bfa0e-6f47-44a9-a32c-9628f567e5bc\") " pod="kube-system/kindnet-xcffd"
	Mar 18 13:18:29 multinode-229365 kubelet[3086]: I0318 13:18:29.093405    3086 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a92bfa0e-6f47-44a9-a32c-9628f567e5bc-lib-modules\") pod \"kindnet-xcffd\" (UID: \"a92bfa0e-6f47-44a9-a32c-9628f567e5bc\") " pod="kube-system/kindnet-xcffd"
	Mar 18 13:18:29 multinode-229365 kubelet[3086]: I0318 13:18:29.094382    3086 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e762a2b-2d25-4f3e-8860-192c60a97ad8-xtables-lock\") pod \"kube-proxy-vdnsn\" (UID: \"6e762a2b-2d25-4f3e-8860-192c60a97ad8\") " pod="kube-system/kube-proxy-vdnsn"
	Mar 18 13:18:29 multinode-229365 kubelet[3086]: I0318 13:18:29.094763    3086 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a92bfa0e-6f47-44a9-a32c-9628f567e5bc-cni-cfg\") pod \"kindnet-xcffd\" (UID: \"a92bfa0e-6f47-44a9-a32c-9628f567e5bc\") " pod="kube-system/kindnet-xcffd"
	Mar 18 13:18:29 multinode-229365 kubelet[3086]: I0318 13:18:29.094974    3086 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e9702ef6-2066-470d-a8c9-d0857dc8b63a-tmp\") pod \"storage-provisioner\" (UID: \"e9702ef6-2066-470d-a8c9-d0857dc8b63a\") " pod="kube-system/storage-provisioner"
	Mar 18 13:18:29 multinode-229365 kubelet[3086]: I0318 13:18:29.096057    3086 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6e762a2b-2d25-4f3e-8860-192c60a97ad8-lib-modules\") pod \"kube-proxy-vdnsn\" (UID: \"6e762a2b-2d25-4f3e-8860-192c60a97ad8\") " pod="kube-system/kube-proxy-vdnsn"
	Mar 18 13:18:29 multinode-229365 kubelet[3086]: E0318 13:18:29.186172    3086 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-multinode-229365\" already exists" pod="kube-system/kube-apiserver-multinode-229365"
	Mar 18 13:19:25 multinode-229365 kubelet[3086]: E0318 13:19:25.108712    3086 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pode62b1b7d-04c0-47fb-9ec9-6d6e34d11c4d/crio-bcf600d198b03627f34dabd31df428dedc966e5cb3e8e22976ed87a67eabcc46: Error finding container bcf600d198b03627f34dabd31df428dedc966e5cb3e8e22976ed87a67eabcc46: Status 404 returned error can't find the container with id bcf600d198b03627f34dabd31df428dedc966e5cb3e8e22976ed87a67eabcc46
	Mar 18 13:19:25 multinode-229365 kubelet[3086]: E0318 13:19:25.109217    3086 manager.go:1106] Failed to create existing container: /kubepods/poda92bfa0e-6f47-44a9-a32c-9628f567e5bc/crio-f26b8c9a42276850fa34573edf63a613e7105a4b040a6bb10bf8e442a9f0069b: Error finding container f26b8c9a42276850fa34573edf63a613e7105a4b040a6bb10bf8e442a9f0069b: Status 404 returned error can't find the container with id f26b8c9a42276850fa34573edf63a613e7105a4b040a6bb10bf8e442a9f0069b
	Mar 18 13:19:25 multinode-229365 kubelet[3086]: E0318 13:19:25.109625    3086 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pode9702ef6-2066-470d-a8c9-d0857dc8b63a/crio-2b91079d0b985aa4dbab7e328b0acbe4b8f84743d72e541c17662e32579bba63: Error finding container 2b91079d0b985aa4dbab7e328b0acbe4b8f84743d72e541c17662e32579bba63: Status 404 returned error can't find the container with id 2b91079d0b985aa4dbab7e328b0acbe4b8f84743d72e541c17662e32579bba63
	Mar 18 13:19:25 multinode-229365 kubelet[3086]: E0318 13:19:25.109985    3086 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod47b9b389eeab8ea23a39be0a8c622392/crio-c1cafd55af3f245ad89e571170c0779b6e72a1d96be5720a6240d2dd3f1924c5: Error finding container c1cafd55af3f245ad89e571170c0779b6e72a1d96be5720a6240d2dd3f1924c5: Status 404 returned error can't find the container with id c1cafd55af3f245ad89e571170c0779b6e72a1d96be5720a6240d2dd3f1924c5
	Mar 18 13:19:25 multinode-229365 kubelet[3086]: E0318 13:19:25.110401    3086 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod66e2564c3f5ce1cdf5c73a3d12c95511/crio-c9f7b9a8977124742aaee2192217bb7b805f41fb9e9c363984b88bb2926c4c07: Error finding container c9f7b9a8977124742aaee2192217bb7b805f41fb9e9c363984b88bb2926c4c07: Status 404 returned error can't find the container with id c9f7b9a8977124742aaee2192217bb7b805f41fb9e9c363984b88bb2926c4c07
	Mar 18 13:19:25 multinode-229365 kubelet[3086]: E0318 13:19:25.110757    3086 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod3c6e16db-16e4-468f-919c-df4c54cf0e94/crio-54beb227a8ffc4885427135a60029659de226e2514fb90466196e2c6e1a6c85e: Error finding container 54beb227a8ffc4885427135a60029659de226e2514fb90466196e2c6e1a6c85e: Status 404 returned error can't find the container with id 54beb227a8ffc4885427135a60029659de226e2514fb90466196e2c6e1a6c85e
	Mar 18 13:19:25 multinode-229365 kubelet[3086]: E0318 13:19:25.111239    3086 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pod6e762a2b-2d25-4f3e-8860-192c60a97ad8/crio-f1816dbdb10636f1bbbb75614ab65cc8ae0624719f14209f8009e2a06bf49d15: Error finding container f1816dbdb10636f1bbbb75614ab65cc8ae0624719f14209f8009e2a06bf49d15: Status 404 returned error can't find the container with id f1816dbdb10636f1bbbb75614ab65cc8ae0624719f14209f8009e2a06bf49d15
	Mar 18 13:19:25 multinode-229365 kubelet[3086]: E0318 13:19:25.111672    3086 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod131ed49275b5405a33eedc6996906d41/crio-74a85ae0bbe5a56166484ef57a28b7e92efbb5a04af720641f6533db7329743d: Error finding container 74a85ae0bbe5a56166484ef57a28b7e92efbb5a04af720641f6533db7329743d: Status 404 returned error can't find the container with id 74a85ae0bbe5a56166484ef57a28b7e92efbb5a04af720641f6533db7329743d
	Mar 18 13:19:25 multinode-229365 kubelet[3086]: E0318 13:19:25.112138    3086 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod326c3bfa26902a35a907a995f7624593/crio-22b007523ac2586f8b4f20da04f957131182efb0693bdb7ce17fd4e112b6c960: Error finding container 22b007523ac2586f8b4f20da04f957131182efb0693bdb7ce17fd4e112b6c960: Status 404 returned error can't find the container with id 22b007523ac2586f8b4f20da04f957131182efb0693bdb7ce17fd4e112b6c960
	Mar 18 13:19:25 multinode-229365 kubelet[3086]: E0318 13:19:25.114737    3086 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 13:19:25 multinode-229365 kubelet[3086]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 13:19:25 multinode-229365 kubelet[3086]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 13:19:25 multinode-229365 kubelet[3086]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 13:19:25 multinode-229365 kubelet[3086]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 13:19:50.215960 1142723 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18429-1106816/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-229365 -n multinode-229365
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-229365 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (306.03s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229365 stop
E0318 13:21:24.906641 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/functional-377562/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-229365 stop: exit status 82 (2m0.478755615s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-229365-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-229365 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229365 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-229365 status: exit status 3 (18.865706953s)

                                                
                                                
-- stdout --
	multinode-229365
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-229365-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 13:22:13.996681 1143265 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.29:22: connect: no route to host
	E0318 13:22:13.996718 1143265 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.29:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-229365 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-229365 -n multinode-229365
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229365 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-229365 logs -n 25: (1.680473798s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-229365 ssh -n                                                                 | multinode-229365 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | multinode-229365-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-229365 cp multinode-229365-m02:/home/docker/cp-test.txt                       | multinode-229365 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | multinode-229365:/home/docker/cp-test_multinode-229365-m02_multinode-229365.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-229365 ssh -n                                                                 | multinode-229365 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | multinode-229365-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-229365 ssh -n multinode-229365 sudo cat                                       | multinode-229365 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | /home/docker/cp-test_multinode-229365-m02_multinode-229365.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-229365 cp multinode-229365-m02:/home/docker/cp-test.txt                       | multinode-229365 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | multinode-229365-m03:/home/docker/cp-test_multinode-229365-m02_multinode-229365-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-229365 ssh -n                                                                 | multinode-229365 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | multinode-229365-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-229365 ssh -n multinode-229365-m03 sudo cat                                   | multinode-229365 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | /home/docker/cp-test_multinode-229365-m02_multinode-229365-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-229365 cp testdata/cp-test.txt                                                | multinode-229365 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | multinode-229365-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-229365 ssh -n                                                                 | multinode-229365 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | multinode-229365-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-229365 cp multinode-229365-m03:/home/docker/cp-test.txt                       | multinode-229365 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile690292982/001/cp-test_multinode-229365-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-229365 ssh -n                                                                 | multinode-229365 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | multinode-229365-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-229365 cp multinode-229365-m03:/home/docker/cp-test.txt                       | multinode-229365 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | multinode-229365:/home/docker/cp-test_multinode-229365-m03_multinode-229365.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-229365 ssh -n                                                                 | multinode-229365 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | multinode-229365-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-229365 ssh -n multinode-229365 sudo cat                                       | multinode-229365 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | /home/docker/cp-test_multinode-229365-m03_multinode-229365.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-229365 cp multinode-229365-m03:/home/docker/cp-test.txt                       | multinode-229365 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | multinode-229365-m02:/home/docker/cp-test_multinode-229365-m03_multinode-229365-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-229365 ssh -n                                                                 | multinode-229365 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | multinode-229365-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-229365 ssh -n multinode-229365-m02 sudo cat                                   | multinode-229365 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | /home/docker/cp-test_multinode-229365-m03_multinode-229365-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-229365 node stop m03                                                          | multinode-229365 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	| node    | multinode-229365 node start                                                             | multinode-229365 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-229365                                                                | multinode-229365 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC |                     |
	| stop    | -p multinode-229365                                                                     | multinode-229365 | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC |                     |
	| start   | -p multinode-229365                                                                     | multinode-229365 | jenkins | v1.32.0 | 18 Mar 24 13:16 UTC | 18 Mar 24 13:19 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-229365                                                                | multinode-229365 | jenkins | v1.32.0 | 18 Mar 24 13:19 UTC |                     |
	| node    | multinode-229365 node delete                                                            | multinode-229365 | jenkins | v1.32.0 | 18 Mar 24 13:19 UTC | 18 Mar 24 13:19 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-229365 stop                                                                   | multinode-229365 | jenkins | v1.32.0 | 18 Mar 24 13:19 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 13:16:48
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 13:16:48.995649 1141442 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:16:48.995763 1141442 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:16:48.995771 1141442 out.go:304] Setting ErrFile to fd 2...
	I0318 13:16:48.995775 1141442 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:16:48.995962 1141442 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
	I0318 13:16:48.996547 1141442 out.go:298] Setting JSON to false
	I0318 13:16:48.997544 1141442 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":17956,"bootTime":1710749853,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 13:16:48.997612 1141442 start.go:139] virtualization: kvm guest
	I0318 13:16:49.000387 1141442 out.go:177] * [multinode-229365] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 13:16:49.002016 1141442 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 13:16:49.002056 1141442 notify.go:220] Checking for updates...
	I0318 13:16:49.003467 1141442 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:16:49.005122 1141442 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 13:16:49.006459 1141442 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18429-1106816/.minikube
	I0318 13:16:49.007708 1141442 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 13:16:49.008971 1141442 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:16:49.010594 1141442 config.go:182] Loaded profile config "multinode-229365": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:16:49.010741 1141442 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:16:49.011209 1141442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:16:49.011257 1141442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:16:49.026401 1141442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41259
	I0318 13:16:49.026861 1141442 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:16:49.027386 1141442 main.go:141] libmachine: Using API Version  1
	I0318 13:16:49.027406 1141442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:16:49.027794 1141442 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:16:49.028022 1141442 main.go:141] libmachine: (multinode-229365) Calling .DriverName
	I0318 13:16:49.062220 1141442 out.go:177] * Using the kvm2 driver based on existing profile
	I0318 13:16:49.063405 1141442 start.go:297] selected driver: kvm2
	I0318 13:16:49.063418 1141442 start.go:901] validating driver "kvm2" against &{Name:multinode-229365 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-229365 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.156 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.29 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.34 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:16:49.063551 1141442 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:16:49.063898 1141442 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:16:49.063976 1141442 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18429-1106816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 13:16:49.078514 1141442 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 13:16:49.079470 1141442 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:16:49.079557 1141442 cni.go:84] Creating CNI manager for ""
	I0318 13:16:49.079575 1141442 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0318 13:16:49.079647 1141442 start.go:340] cluster config:
	{Name:multinode-229365 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-229365 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.156 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.29 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.34 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:16:49.079836 1141442 iso.go:125] acquiring lock: {Name:mke5f9989ad60de6f54f25c411af7da9f3932a4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:16:49.081765 1141442 out.go:177] * Starting "multinode-229365" primary control-plane node in "multinode-229365" cluster
	I0318 13:16:49.082943 1141442 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 13:16:49.082977 1141442 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0318 13:16:49.082984 1141442 cache.go:56] Caching tarball of preloaded images
	I0318 13:16:49.083054 1141442 preload.go:173] Found /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 13:16:49.083065 1141442 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0318 13:16:49.083177 1141442 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/multinode-229365/config.json ...
	I0318 13:16:49.083357 1141442 start.go:360] acquireMachinesLock for multinode-229365: {Name:mk0b1a2e71faf079d0c16c4e1393bdff17be3dfd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:16:49.083404 1141442 start.go:364] duration metric: took 29.291µs to acquireMachinesLock for "multinode-229365"
	I0318 13:16:49.083419 1141442 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:16:49.083427 1141442 fix.go:54] fixHost starting: 
	I0318 13:16:49.083689 1141442 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:16:49.083721 1141442 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:16:49.097816 1141442 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45765
	I0318 13:16:49.098259 1141442 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:16:49.098792 1141442 main.go:141] libmachine: Using API Version  1
	I0318 13:16:49.098811 1141442 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:16:49.099147 1141442 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:16:49.099405 1141442 main.go:141] libmachine: (multinode-229365) Calling .DriverName
	I0318 13:16:49.099567 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetState
	I0318 13:16:49.101304 1141442 fix.go:112] recreateIfNeeded on multinode-229365: state=Running err=<nil>
	W0318 13:16:49.101322 1141442 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:16:49.103862 1141442 out.go:177] * Updating the running kvm2 "multinode-229365" VM ...
	I0318 13:16:49.105236 1141442 machine.go:94] provisionDockerMachine start ...
	I0318 13:16:49.105262 1141442 main.go:141] libmachine: (multinode-229365) Calling .DriverName
	I0318 13:16:49.105474 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHHostname
	I0318 13:16:49.107935 1141442 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:16:49.108318 1141442 main.go:141] libmachine: (multinode-229365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:cf:2f", ip: ""} in network mk-multinode-229365: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:32 +0000 UTC Type:0 Mac:52:54:00:f0:cf:2f Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:multinode-229365 Clientid:01:52:54:00:f0:cf:2f}
	I0318 13:16:49.108371 1141442 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined IP address 192.168.39.156 and MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:16:49.108523 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHPort
	I0318 13:16:49.108687 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHKeyPath
	I0318 13:16:49.108836 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHKeyPath
	I0318 13:16:49.108988 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHUsername
	I0318 13:16:49.109152 1141442 main.go:141] libmachine: Using SSH client type: native
	I0318 13:16:49.109348 1141442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I0318 13:16:49.109360 1141442 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 13:16:49.230253 1141442 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-229365
	
	I0318 13:16:49.230284 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetMachineName
	I0318 13:16:49.230543 1141442 buildroot.go:166] provisioning hostname "multinode-229365"
	I0318 13:16:49.230574 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetMachineName
	I0318 13:16:49.230753 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHHostname
	I0318 13:16:49.233213 1141442 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:16:49.233646 1141442 main.go:141] libmachine: (multinode-229365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:cf:2f", ip: ""} in network mk-multinode-229365: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:32 +0000 UTC Type:0 Mac:52:54:00:f0:cf:2f Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:multinode-229365 Clientid:01:52:54:00:f0:cf:2f}
	I0318 13:16:49.233674 1141442 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined IP address 192.168.39.156 and MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:16:49.233832 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHPort
	I0318 13:16:49.234023 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHKeyPath
	I0318 13:16:49.234185 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHKeyPath
	I0318 13:16:49.234340 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHUsername
	I0318 13:16:49.234526 1141442 main.go:141] libmachine: Using SSH client type: native
	I0318 13:16:49.234708 1141442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I0318 13:16:49.234722 1141442 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-229365 && echo "multinode-229365" | sudo tee /etc/hostname
	I0318 13:16:49.368280 1141442 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-229365
	
	I0318 13:16:49.368307 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHHostname
	I0318 13:16:49.371006 1141442 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:16:49.371328 1141442 main.go:141] libmachine: (multinode-229365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:cf:2f", ip: ""} in network mk-multinode-229365: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:32 +0000 UTC Type:0 Mac:52:54:00:f0:cf:2f Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:multinode-229365 Clientid:01:52:54:00:f0:cf:2f}
	I0318 13:16:49.371370 1141442 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined IP address 192.168.39.156 and MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:16:49.371519 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHPort
	I0318 13:16:49.371732 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHKeyPath
	I0318 13:16:49.371916 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHKeyPath
	I0318 13:16:49.372056 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHUsername
	I0318 13:16:49.372214 1141442 main.go:141] libmachine: Using SSH client type: native
	I0318 13:16:49.372415 1141442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I0318 13:16:49.372433 1141442 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-229365' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-229365/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-229365' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 13:16:49.486087 1141442 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:16:49.486120 1141442 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18429-1106816/.minikube CaCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18429-1106816/.minikube}
	I0318 13:16:49.486137 1141442 buildroot.go:174] setting up certificates
	I0318 13:16:49.486147 1141442 provision.go:84] configureAuth start
	I0318 13:16:49.486157 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetMachineName
	I0318 13:16:49.486442 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetIP
	I0318 13:16:49.489153 1141442 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:16:49.489536 1141442 main.go:141] libmachine: (multinode-229365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:cf:2f", ip: ""} in network mk-multinode-229365: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:32 +0000 UTC Type:0 Mac:52:54:00:f0:cf:2f Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:multinode-229365 Clientid:01:52:54:00:f0:cf:2f}
	I0318 13:16:49.489558 1141442 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined IP address 192.168.39.156 and MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:16:49.489680 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHHostname
	I0318 13:16:49.491759 1141442 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:16:49.492122 1141442 main.go:141] libmachine: (multinode-229365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:cf:2f", ip: ""} in network mk-multinode-229365: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:32 +0000 UTC Type:0 Mac:52:54:00:f0:cf:2f Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:multinode-229365 Clientid:01:52:54:00:f0:cf:2f}
	I0318 13:16:49.492157 1141442 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined IP address 192.168.39.156 and MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:16:49.492271 1141442 provision.go:143] copyHostCerts
	I0318 13:16:49.492309 1141442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 13:16:49.492368 1141442 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem, removing ...
	I0318 13:16:49.492381 1141442 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 13:16:49.492453 1141442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem (1078 bytes)
	I0318 13:16:49.492545 1141442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 13:16:49.492574 1141442 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem, removing ...
	I0318 13:16:49.492581 1141442 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 13:16:49.492608 1141442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem (1123 bytes)
	I0318 13:16:49.492664 1141442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 13:16:49.492680 1141442 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem, removing ...
	I0318 13:16:49.492686 1141442 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 13:16:49.492725 1141442 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem (1679 bytes)
	I0318 13:16:49.492786 1141442 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem org=jenkins.multinode-229365 san=[127.0.0.1 192.168.39.156 localhost minikube multinode-229365]
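	Note: the server certificate generated in the previous step embeds the SAN list shown in the log (127.0.0.1, 192.168.39.156, localhost, minikube, multinode-229365). A minimal way to confirm that on the Jenkins host, assuming openssl is installed and the path above is unchanged, is:
	  openssl x509 -in /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'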
	I0318 13:16:49.636859 1141442 provision.go:177] copyRemoteCerts
	I0318 13:16:49.636933 1141442 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 13:16:49.636960 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHHostname
	I0318 13:16:49.639431 1141442 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:16:49.639822 1141442 main.go:141] libmachine: (multinode-229365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:cf:2f", ip: ""} in network mk-multinode-229365: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:32 +0000 UTC Type:0 Mac:52:54:00:f0:cf:2f Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:multinode-229365 Clientid:01:52:54:00:f0:cf:2f}
	I0318 13:16:49.639854 1141442 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined IP address 192.168.39.156 and MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:16:49.639993 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHPort
	I0318 13:16:49.640176 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHKeyPath
	I0318 13:16:49.640346 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHUsername
	I0318 13:16:49.640526 1141442 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/multinode-229365/id_rsa Username:docker}
	I0318 13:16:49.727540 1141442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0318 13:16:49.727622 1141442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 13:16:49.757017 1141442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0318 13:16:49.757083 1141442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0318 13:16:49.785812 1141442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0318 13:16:49.785889 1141442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 13:16:49.826057 1141442 provision.go:87] duration metric: took 339.898381ms to configureAuth
	I0318 13:16:49.826088 1141442 buildroot.go:189] setting minikube options for container-runtime
	I0318 13:16:49.826345 1141442 config.go:182] Loaded profile config "multinode-229365": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:16:49.826468 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHHostname
	I0318 13:16:49.829111 1141442 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:16:49.829483 1141442 main.go:141] libmachine: (multinode-229365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:cf:2f", ip: ""} in network mk-multinode-229365: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:32 +0000 UTC Type:0 Mac:52:54:00:f0:cf:2f Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:multinode-229365 Clientid:01:52:54:00:f0:cf:2f}
	I0318 13:16:49.829515 1141442 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined IP address 192.168.39.156 and MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:16:49.829676 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHPort
	I0318 13:16:49.829856 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHKeyPath
	I0318 13:16:49.830034 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHKeyPath
	I0318 13:16:49.830156 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHUsername
	I0318 13:16:49.830295 1141442 main.go:141] libmachine: Using SSH client type: native
	I0318 13:16:49.830508 1141442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I0318 13:16:49.830532 1141442 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 13:18:20.795134 1141442 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 13:18:20.795167 1141442 machine.go:97] duration metric: took 1m31.689914979s to provisionDockerMachine
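	Note: the SSH command above writes the container-runtime options file and then restarts CRI-O, which is where most of the 1m31s duration metric just recorded is spent (13:16:49 to 13:18:20). Based on the output echoed back in the log, /etc/sysconfig/crio.minikube on the guest should contain:
	  CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '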
	I0318 13:18:20.795184 1141442 start.go:293] postStartSetup for "multinode-229365" (driver="kvm2")
	I0318 13:18:20.795201 1141442 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 13:18:20.795227 1141442 main.go:141] libmachine: (multinode-229365) Calling .DriverName
	I0318 13:18:20.795643 1141442 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 13:18:20.795686 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHHostname
	I0318 13:18:20.799253 1141442 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:18:20.799607 1141442 main.go:141] libmachine: (multinode-229365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:cf:2f", ip: ""} in network mk-multinode-229365: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:32 +0000 UTC Type:0 Mac:52:54:00:f0:cf:2f Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:multinode-229365 Clientid:01:52:54:00:f0:cf:2f}
	I0318 13:18:20.799641 1141442 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined IP address 192.168.39.156 and MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:18:20.799825 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHPort
	I0318 13:18:20.800004 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHKeyPath
	I0318 13:18:20.800154 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHUsername
	I0318 13:18:20.800274 1141442 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/multinode-229365/id_rsa Username:docker}
	I0318 13:18:20.890032 1141442 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 13:18:20.894791 1141442 command_runner.go:130] > NAME=Buildroot
	I0318 13:18:20.894806 1141442 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0318 13:18:20.894810 1141442 command_runner.go:130] > ID=buildroot
	I0318 13:18:20.894815 1141442 command_runner.go:130] > VERSION_ID=2023.02.9
	I0318 13:18:20.894819 1141442 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0318 13:18:20.894858 1141442 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 13:18:20.894868 1141442 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/addons for local assets ...
	I0318 13:18:20.894923 1141442 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/files for local assets ...
	I0318 13:18:20.895012 1141442 filesync.go:149] local asset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> 11141362.pem in /etc/ssl/certs
	I0318 13:18:20.895027 1141442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> /etc/ssl/certs/11141362.pem
	I0318 13:18:20.895107 1141442 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 13:18:20.906001 1141442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:18:20.942130 1141442 start.go:296] duration metric: took 146.927864ms for postStartSetup
	I0318 13:18:20.942229 1141442 fix.go:56] duration metric: took 1m31.858795919s for fixHost
	I0318 13:18:20.942264 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHHostname
	I0318 13:18:20.945363 1141442 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:18:20.945810 1141442 main.go:141] libmachine: (multinode-229365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:cf:2f", ip: ""} in network mk-multinode-229365: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:32 +0000 UTC Type:0 Mac:52:54:00:f0:cf:2f Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:multinode-229365 Clientid:01:52:54:00:f0:cf:2f}
	I0318 13:18:20.945844 1141442 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined IP address 192.168.39.156 and MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:18:20.945999 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHPort
	I0318 13:18:20.946219 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHKeyPath
	I0318 13:18:20.946403 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHKeyPath
	I0318 13:18:20.946552 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHUsername
	I0318 13:18:20.946721 1141442 main.go:141] libmachine: Using SSH client type: native
	I0318 13:18:20.946906 1141442 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.156 22 <nil> <nil>}
	I0318 13:18:20.946917 1141442 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 13:18:21.065411 1141442 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710767901.042165387
	
	I0318 13:18:21.065436 1141442 fix.go:216] guest clock: 1710767901.042165387
	I0318 13:18:21.065444 1141442 fix.go:229] Guest: 2024-03-18 13:18:21.042165387 +0000 UTC Remote: 2024-03-18 13:18:20.942240728 +0000 UTC m=+91.997198087 (delta=99.924659ms)
	I0318 13:18:21.065478 1141442 fix.go:200] guest clock delta is within tolerance: 99.924659ms
	I0318 13:18:21.065486 1141442 start.go:83] releasing machines lock for "multinode-229365", held for 1m31.982073828s
	I0318 13:18:21.065508 1141442 main.go:141] libmachine: (multinode-229365) Calling .DriverName
	I0318 13:18:21.065795 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetIP
	I0318 13:18:21.068395 1141442 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:18:21.068780 1141442 main.go:141] libmachine: (multinode-229365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:cf:2f", ip: ""} in network mk-multinode-229365: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:32 +0000 UTC Type:0 Mac:52:54:00:f0:cf:2f Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:multinode-229365 Clientid:01:52:54:00:f0:cf:2f}
	I0318 13:18:21.068803 1141442 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined IP address 192.168.39.156 and MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:18:21.069024 1141442 main.go:141] libmachine: (multinode-229365) Calling .DriverName
	I0318 13:18:21.069614 1141442 main.go:141] libmachine: (multinode-229365) Calling .DriverName
	I0318 13:18:21.069811 1141442 main.go:141] libmachine: (multinode-229365) Calling .DriverName
	I0318 13:18:21.069933 1141442 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 13:18:21.069989 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHHostname
	I0318 13:18:21.070012 1141442 ssh_runner.go:195] Run: cat /version.json
	I0318 13:18:21.070034 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHHostname
	I0318 13:18:21.072498 1141442 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:18:21.072704 1141442 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:18:21.072869 1141442 main.go:141] libmachine: (multinode-229365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:cf:2f", ip: ""} in network mk-multinode-229365: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:32 +0000 UTC Type:0 Mac:52:54:00:f0:cf:2f Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:multinode-229365 Clientid:01:52:54:00:f0:cf:2f}
	I0318 13:18:21.072908 1141442 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined IP address 192.168.39.156 and MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:18:21.073059 1141442 main.go:141] libmachine: (multinode-229365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:cf:2f", ip: ""} in network mk-multinode-229365: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:32 +0000 UTC Type:0 Mac:52:54:00:f0:cf:2f Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:multinode-229365 Clientid:01:52:54:00:f0:cf:2f}
	I0318 13:18:21.073077 1141442 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined IP address 192.168.39.156 and MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:18:21.073093 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHPort
	I0318 13:18:21.073282 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHKeyPath
	I0318 13:18:21.073284 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHPort
	I0318 13:18:21.073499 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHUsername
	I0318 13:18:21.073503 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHKeyPath
	I0318 13:18:21.073674 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetSSHUsername
	I0318 13:18:21.073671 1141442 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/multinode-229365/id_rsa Username:docker}
	I0318 13:18:21.073822 1141442 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/multinode-229365/id_rsa Username:docker}
	I0318 13:18:21.154411 1141442 command_runner.go:130] > {"iso_version": "v1.32.1-1710520390-17991", "kicbase_version": "v0.0.42-1710284843-18375", "minikube_version": "v1.32.0", "commit": "3dd306d082737a9ddf335108b42c9fcb2ad84298"}
	I0318 13:18:21.154669 1141442 ssh_runner.go:195] Run: systemctl --version
	I0318 13:18:21.177544 1141442 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0318 13:18:21.177584 1141442 command_runner.go:130] > systemd 252 (252)
	I0318 13:18:21.177620 1141442 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0318 13:18:21.177682 1141442 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 13:18:21.345230 1141442 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0318 13:18:21.354111 1141442 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0318 13:18:21.354531 1141442 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 13:18:21.354578 1141442 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 13:18:21.364920 1141442 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0318 13:18:21.364938 1141442 start.go:494] detecting cgroup driver to use...
	I0318 13:18:21.365009 1141442 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 13:18:21.381721 1141442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:18:21.397132 1141442 docker.go:217] disabling cri-docker service (if available) ...
	I0318 13:18:21.397176 1141442 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 13:18:21.413016 1141442 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 13:18:21.428564 1141442 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 13:18:21.621619 1141442 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 13:18:21.768005 1141442 docker.go:233] disabling docker service ...
	I0318 13:18:21.768076 1141442 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 13:18:21.785019 1141442 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 13:18:21.799512 1141442 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 13:18:21.947888 1141442 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 13:18:22.095600 1141442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 13:18:22.110755 1141442 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:18:22.133051 1141442 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0318 13:18:22.133103 1141442 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 13:18:22.133163 1141442 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:18:22.145241 1141442 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 13:18:22.145302 1141442 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:18:22.156819 1141442 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:18:22.168163 1141442 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
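	Note: the three sed invocations above pin the pause image, switch the cgroup manager, and re-add the conmon cgroup setting. Assuming each pattern matched, the relevant lines of /etc/crio/crio.conf.d/02-crio.conf on the guest should afterwards read:
	  pause_image = "registry.k8s.io/pause:3.9"
	  cgroup_manager = "cgroupfs"
	  conmon_cgroup = "pod"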
	I0318 13:18:22.179356 1141442 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 13:18:22.190845 1141442 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 13:18:22.200812 1141442 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0318 13:18:22.201045 1141442 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 13:18:22.211270 1141442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:18:22.358796 1141442 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 13:18:22.631120 1141442 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 13:18:22.631186 1141442 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 13:18:22.636745 1141442 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0318 13:18:22.636766 1141442 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0318 13:18:22.636773 1141442 command_runner.go:130] > Device: 0,22	Inode: 1329        Links: 1
	I0318 13:18:22.636779 1141442 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0318 13:18:22.636784 1141442 command_runner.go:130] > Access: 2024-03-18 13:18:22.496511143 +0000
	I0318 13:18:22.636790 1141442 command_runner.go:130] > Modify: 2024-03-18 13:18:22.496511143 +0000
	I0318 13:18:22.636805 1141442 command_runner.go:130] > Change: 2024-03-18 13:18:22.496511143 +0000
	I0318 13:18:22.636819 1141442 command_runner.go:130] >  Birth: -
	I0318 13:18:22.637137 1141442 start.go:562] Will wait 60s for crictl version
	I0318 13:18:22.637188 1141442 ssh_runner.go:195] Run: which crictl
	I0318 13:18:22.641508 1141442 command_runner.go:130] > /usr/bin/crictl
	I0318 13:18:22.641646 1141442 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 13:18:22.682085 1141442 command_runner.go:130] > Version:  0.1.0
	I0318 13:18:22.682114 1141442 command_runner.go:130] > RuntimeName:  cri-o
	I0318 13:18:22.682121 1141442 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0318 13:18:22.682130 1141442 command_runner.go:130] > RuntimeApiVersion:  v1
	I0318 13:18:22.683255 1141442 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
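	Note: the version query above goes through the endpoint written to /etc/crictl.yaml a few lines earlier. An equivalent manual check with the socket path passed explicitly (a sketch, assuming the default CRI-O socket location) would be:
	  sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version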
	I0318 13:18:22.683337 1141442 ssh_runner.go:195] Run: crio --version
	I0318 13:18:22.713325 1141442 command_runner.go:130] > crio version 1.29.1
	I0318 13:18:22.713347 1141442 command_runner.go:130] > Version:        1.29.1
	I0318 13:18:22.713355 1141442 command_runner.go:130] > GitCommit:      unknown
	I0318 13:18:22.713362 1141442 command_runner.go:130] > GitCommitDate:  unknown
	I0318 13:18:22.713374 1141442 command_runner.go:130] > GitTreeState:   clean
	I0318 13:18:22.713383 1141442 command_runner.go:130] > BuildDate:      2024-03-15T21:54:37Z
	I0318 13:18:22.713388 1141442 command_runner.go:130] > GoVersion:      go1.21.6
	I0318 13:18:22.713394 1141442 command_runner.go:130] > Compiler:       gc
	I0318 13:18:22.713400 1141442 command_runner.go:130] > Platform:       linux/amd64
	I0318 13:18:22.713411 1141442 command_runner.go:130] > Linkmode:       dynamic
	I0318 13:18:22.713420 1141442 command_runner.go:130] > BuildTags:      
	I0318 13:18:22.713429 1141442 command_runner.go:130] >   containers_image_ostree_stub
	I0318 13:18:22.713439 1141442 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0318 13:18:22.713449 1141442 command_runner.go:130] >   btrfs_noversion
	I0318 13:18:22.713460 1141442 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0318 13:18:22.713469 1141442 command_runner.go:130] >   libdm_no_deferred_remove
	I0318 13:18:22.713477 1141442 command_runner.go:130] >   seccomp
	I0318 13:18:22.713486 1141442 command_runner.go:130] > LDFlags:          unknown
	I0318 13:18:22.713496 1141442 command_runner.go:130] > SeccompEnabled:   true
	I0318 13:18:22.713504 1141442 command_runner.go:130] > AppArmorEnabled:  false
	I0318 13:18:22.713589 1141442 ssh_runner.go:195] Run: crio --version
	I0318 13:18:22.744042 1141442 command_runner.go:130] > crio version 1.29.1
	I0318 13:18:22.744067 1141442 command_runner.go:130] > Version:        1.29.1
	I0318 13:18:22.744076 1141442 command_runner.go:130] > GitCommit:      unknown
	I0318 13:18:22.744083 1141442 command_runner.go:130] > GitCommitDate:  unknown
	I0318 13:18:22.744088 1141442 command_runner.go:130] > GitTreeState:   clean
	I0318 13:18:22.744097 1141442 command_runner.go:130] > BuildDate:      2024-03-15T21:54:37Z
	I0318 13:18:22.744102 1141442 command_runner.go:130] > GoVersion:      go1.21.6
	I0318 13:18:22.744108 1141442 command_runner.go:130] > Compiler:       gc
	I0318 13:18:22.744115 1141442 command_runner.go:130] > Platform:       linux/amd64
	I0318 13:18:22.744122 1141442 command_runner.go:130] > Linkmode:       dynamic
	I0318 13:18:22.744133 1141442 command_runner.go:130] > BuildTags:      
	I0318 13:18:22.744141 1141442 command_runner.go:130] >   containers_image_ostree_stub
	I0318 13:18:22.744150 1141442 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0318 13:18:22.744175 1141442 command_runner.go:130] >   btrfs_noversion
	I0318 13:18:22.744187 1141442 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0318 13:18:22.744194 1141442 command_runner.go:130] >   libdm_no_deferred_remove
	I0318 13:18:22.744201 1141442 command_runner.go:130] >   seccomp
	I0318 13:18:22.744212 1141442 command_runner.go:130] > LDFlags:          unknown
	I0318 13:18:22.744221 1141442 command_runner.go:130] > SeccompEnabled:   true
	I0318 13:18:22.744229 1141442 command_runner.go:130] > AppArmorEnabled:  false
	I0318 13:18:22.747434 1141442 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 13:18:22.749136 1141442 main.go:141] libmachine: (multinode-229365) Calling .GetIP
	I0318 13:18:22.751727 1141442 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:18:22.752109 1141442 main.go:141] libmachine: (multinode-229365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:cf:2f", ip: ""} in network mk-multinode-229365: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:32 +0000 UTC Type:0 Mac:52:54:00:f0:cf:2f Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:multinode-229365 Clientid:01:52:54:00:f0:cf:2f}
	I0318 13:18:22.752142 1141442 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined IP address 192.168.39.156 and MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:18:22.752297 1141442 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0318 13:18:22.757295 1141442 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0318 13:18:22.757477 1141442 kubeadm.go:877] updating cluster {Name:multinode-229365 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-229365 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.156 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.29 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.34 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 13:18:22.757667 1141442 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 13:18:22.757744 1141442 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:18:22.816488 1141442 command_runner.go:130] > {
	I0318 13:18:22.816513 1141442 command_runner.go:130] >   "images": [
	I0318 13:18:22.816517 1141442 command_runner.go:130] >     {
	I0318 13:18:22.816534 1141442 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0318 13:18:22.816542 1141442 command_runner.go:130] >       "repoTags": [
	I0318 13:18:22.816555 1141442 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0318 13:18:22.816560 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.816567 1141442 command_runner.go:130] >       "repoDigests": [
	I0318 13:18:22.816580 1141442 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0318 13:18:22.816593 1141442 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0318 13:18:22.816597 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.816601 1141442 command_runner.go:130] >       "size": "65258016",
	I0318 13:18:22.816611 1141442 command_runner.go:130] >       "uid": null,
	I0318 13:18:22.816618 1141442 command_runner.go:130] >       "username": "",
	I0318 13:18:22.816629 1141442 command_runner.go:130] >       "spec": null,
	I0318 13:18:22.816640 1141442 command_runner.go:130] >       "pinned": false
	I0318 13:18:22.816648 1141442 command_runner.go:130] >     },
	I0318 13:18:22.816653 1141442 command_runner.go:130] >     {
	I0318 13:18:22.816663 1141442 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0318 13:18:22.816667 1141442 command_runner.go:130] >       "repoTags": [
	I0318 13:18:22.816672 1141442 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0318 13:18:22.816678 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.816682 1141442 command_runner.go:130] >       "repoDigests": [
	I0318 13:18:22.816691 1141442 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0318 13:18:22.816701 1141442 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0318 13:18:22.816710 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.816717 1141442 command_runner.go:130] >       "size": "65291810",
	I0318 13:18:22.816726 1141442 command_runner.go:130] >       "uid": null,
	I0318 13:18:22.816738 1141442 command_runner.go:130] >       "username": "",
	I0318 13:18:22.816748 1141442 command_runner.go:130] >       "spec": null,
	I0318 13:18:22.816754 1141442 command_runner.go:130] >       "pinned": false
	I0318 13:18:22.816763 1141442 command_runner.go:130] >     },
	I0318 13:18:22.816769 1141442 command_runner.go:130] >     {
	I0318 13:18:22.816785 1141442 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0318 13:18:22.816792 1141442 command_runner.go:130] >       "repoTags": [
	I0318 13:18:22.816797 1141442 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0318 13:18:22.816803 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.816812 1141442 command_runner.go:130] >       "repoDigests": [
	I0318 13:18:22.816821 1141442 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0318 13:18:22.816831 1141442 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0318 13:18:22.816837 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.816841 1141442 command_runner.go:130] >       "size": "1363676",
	I0318 13:18:22.816845 1141442 command_runner.go:130] >       "uid": null,
	I0318 13:18:22.816851 1141442 command_runner.go:130] >       "username": "",
	I0318 13:18:22.816854 1141442 command_runner.go:130] >       "spec": null,
	I0318 13:18:22.816859 1141442 command_runner.go:130] >       "pinned": false
	I0318 13:18:22.816868 1141442 command_runner.go:130] >     },
	I0318 13:18:22.816873 1141442 command_runner.go:130] >     {
	I0318 13:18:22.816885 1141442 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0318 13:18:22.816895 1141442 command_runner.go:130] >       "repoTags": [
	I0318 13:18:22.816904 1141442 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0318 13:18:22.816913 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.816920 1141442 command_runner.go:130] >       "repoDigests": [
	I0318 13:18:22.816936 1141442 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0318 13:18:22.816957 1141442 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0318 13:18:22.816976 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.816980 1141442 command_runner.go:130] >       "size": "31470524",
	I0318 13:18:22.816984 1141442 command_runner.go:130] >       "uid": null,
	I0318 13:18:22.816988 1141442 command_runner.go:130] >       "username": "",
	I0318 13:18:22.816991 1141442 command_runner.go:130] >       "spec": null,
	I0318 13:18:22.816995 1141442 command_runner.go:130] >       "pinned": false
	I0318 13:18:22.816998 1141442 command_runner.go:130] >     },
	I0318 13:18:22.817001 1141442 command_runner.go:130] >     {
	I0318 13:18:22.817007 1141442 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0318 13:18:22.817011 1141442 command_runner.go:130] >       "repoTags": [
	I0318 13:18:22.817015 1141442 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0318 13:18:22.817018 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.817022 1141442 command_runner.go:130] >       "repoDigests": [
	I0318 13:18:22.817029 1141442 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0318 13:18:22.817043 1141442 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0318 13:18:22.817049 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.817053 1141442 command_runner.go:130] >       "size": "53621675",
	I0318 13:18:22.817056 1141442 command_runner.go:130] >       "uid": null,
	I0318 13:18:22.817060 1141442 command_runner.go:130] >       "username": "",
	I0318 13:18:22.817064 1141442 command_runner.go:130] >       "spec": null,
	I0318 13:18:22.817068 1141442 command_runner.go:130] >       "pinned": false
	I0318 13:18:22.817074 1141442 command_runner.go:130] >     },
	I0318 13:18:22.817080 1141442 command_runner.go:130] >     {
	I0318 13:18:22.817088 1141442 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0318 13:18:22.817093 1141442 command_runner.go:130] >       "repoTags": [
	I0318 13:18:22.817097 1141442 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0318 13:18:22.817101 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.817105 1141442 command_runner.go:130] >       "repoDigests": [
	I0318 13:18:22.817113 1141442 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0318 13:18:22.817120 1141442 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0318 13:18:22.817125 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.817129 1141442 command_runner.go:130] >       "size": "295456551",
	I0318 13:18:22.817135 1141442 command_runner.go:130] >       "uid": {
	I0318 13:18:22.817139 1141442 command_runner.go:130] >         "value": "0"
	I0318 13:18:22.817145 1141442 command_runner.go:130] >       },
	I0318 13:18:22.817149 1141442 command_runner.go:130] >       "username": "",
	I0318 13:18:22.817153 1141442 command_runner.go:130] >       "spec": null,
	I0318 13:18:22.817157 1141442 command_runner.go:130] >       "pinned": false
	I0318 13:18:22.817160 1141442 command_runner.go:130] >     },
	I0318 13:18:22.817163 1141442 command_runner.go:130] >     {
	I0318 13:18:22.817169 1141442 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0318 13:18:22.817175 1141442 command_runner.go:130] >       "repoTags": [
	I0318 13:18:22.817180 1141442 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0318 13:18:22.817183 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.817187 1141442 command_runner.go:130] >       "repoDigests": [
	I0318 13:18:22.817201 1141442 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0318 13:18:22.817210 1141442 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0318 13:18:22.817214 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.817218 1141442 command_runner.go:130] >       "size": "127226832",
	I0318 13:18:22.817221 1141442 command_runner.go:130] >       "uid": {
	I0318 13:18:22.817229 1141442 command_runner.go:130] >         "value": "0"
	I0318 13:18:22.817235 1141442 command_runner.go:130] >       },
	I0318 13:18:22.817239 1141442 command_runner.go:130] >       "username": "",
	I0318 13:18:22.817243 1141442 command_runner.go:130] >       "spec": null,
	I0318 13:18:22.817246 1141442 command_runner.go:130] >       "pinned": false
	I0318 13:18:22.817249 1141442 command_runner.go:130] >     },
	I0318 13:18:22.817252 1141442 command_runner.go:130] >     {
	I0318 13:18:22.817258 1141442 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0318 13:18:22.817265 1141442 command_runner.go:130] >       "repoTags": [
	I0318 13:18:22.817270 1141442 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0318 13:18:22.817277 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.817281 1141442 command_runner.go:130] >       "repoDigests": [
	I0318 13:18:22.817305 1141442 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0318 13:18:22.817316 1141442 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0318 13:18:22.817319 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.817323 1141442 command_runner.go:130] >       "size": "123261750",
	I0318 13:18:22.817329 1141442 command_runner.go:130] >       "uid": {
	I0318 13:18:22.817333 1141442 command_runner.go:130] >         "value": "0"
	I0318 13:18:22.817337 1141442 command_runner.go:130] >       },
	I0318 13:18:22.817341 1141442 command_runner.go:130] >       "username": "",
	I0318 13:18:22.817344 1141442 command_runner.go:130] >       "spec": null,
	I0318 13:18:22.817348 1141442 command_runner.go:130] >       "pinned": false
	I0318 13:18:22.817351 1141442 command_runner.go:130] >     },
	I0318 13:18:22.817354 1141442 command_runner.go:130] >     {
	I0318 13:18:22.817360 1141442 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0318 13:18:22.817365 1141442 command_runner.go:130] >       "repoTags": [
	I0318 13:18:22.817370 1141442 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0318 13:18:22.817374 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.817377 1141442 command_runner.go:130] >       "repoDigests": [
	I0318 13:18:22.817384 1141442 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0318 13:18:22.817391 1141442 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0318 13:18:22.817394 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.817398 1141442 command_runner.go:130] >       "size": "74749335",
	I0318 13:18:22.817401 1141442 command_runner.go:130] >       "uid": null,
	I0318 13:18:22.817405 1141442 command_runner.go:130] >       "username": "",
	I0318 13:18:22.817408 1141442 command_runner.go:130] >       "spec": null,
	I0318 13:18:22.817417 1141442 command_runner.go:130] >       "pinned": false
	I0318 13:18:22.817420 1141442 command_runner.go:130] >     },
	I0318 13:18:22.817423 1141442 command_runner.go:130] >     {
	I0318 13:18:22.817428 1141442 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0318 13:18:22.817432 1141442 command_runner.go:130] >       "repoTags": [
	I0318 13:18:22.817437 1141442 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0318 13:18:22.817440 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.817444 1141442 command_runner.go:130] >       "repoDigests": [
	I0318 13:18:22.817450 1141442 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0318 13:18:22.817457 1141442 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0318 13:18:22.817462 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.817466 1141442 command_runner.go:130] >       "size": "61551410",
	I0318 13:18:22.817470 1141442 command_runner.go:130] >       "uid": {
	I0318 13:18:22.817475 1141442 command_runner.go:130] >         "value": "0"
	I0318 13:18:22.817478 1141442 command_runner.go:130] >       },
	I0318 13:18:22.817482 1141442 command_runner.go:130] >       "username": "",
	I0318 13:18:22.817488 1141442 command_runner.go:130] >       "spec": null,
	I0318 13:18:22.817492 1141442 command_runner.go:130] >       "pinned": false
	I0318 13:18:22.817498 1141442 command_runner.go:130] >     },
	I0318 13:18:22.817501 1141442 command_runner.go:130] >     {
	I0318 13:18:22.817507 1141442 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0318 13:18:22.817513 1141442 command_runner.go:130] >       "repoTags": [
	I0318 13:18:22.817517 1141442 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0318 13:18:22.817520 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.817524 1141442 command_runner.go:130] >       "repoDigests": [
	I0318 13:18:22.817531 1141442 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0318 13:18:22.817538 1141442 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0318 13:18:22.817543 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.817547 1141442 command_runner.go:130] >       "size": "750414",
	I0318 13:18:22.817553 1141442 command_runner.go:130] >       "uid": {
	I0318 13:18:22.817557 1141442 command_runner.go:130] >         "value": "65535"
	I0318 13:18:22.817560 1141442 command_runner.go:130] >       },
	I0318 13:18:22.817564 1141442 command_runner.go:130] >       "username": "",
	I0318 13:18:22.817568 1141442 command_runner.go:130] >       "spec": null,
	I0318 13:18:22.817577 1141442 command_runner.go:130] >       "pinned": true
	I0318 13:18:22.817582 1141442 command_runner.go:130] >     }
	I0318 13:18:22.817590 1141442 command_runner.go:130] >   ]
	I0318 13:18:22.817596 1141442 command_runner.go:130] > }
	I0318 13:18:22.817794 1141442 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 13:18:22.817807 1141442 crio.go:415] Images already preloaded, skipping extraction
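	Note: the preload check above parses the JSON returned by "sudo crictl images --output json" and compares the repoTags against the expected image list for Kubernetes v1.28.4. A quick way to reproduce that view on the node, assuming jq is available there, is:
	  sudo crictl images --output json | jq -r '.images[].repoTags[]'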
	I0318 13:18:22.817854 1141442 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:18:22.858020 1141442 command_runner.go:130] > {
	I0318 13:18:22.858044 1141442 command_runner.go:130] >   "images": [
	I0318 13:18:22.858048 1141442 command_runner.go:130] >     {
	I0318 13:18:22.858058 1141442 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0318 13:18:22.858063 1141442 command_runner.go:130] >       "repoTags": [
	I0318 13:18:22.858068 1141442 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0318 13:18:22.858072 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.858076 1141442 command_runner.go:130] >       "repoDigests": [
	I0318 13:18:22.858089 1141442 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0318 13:18:22.858112 1141442 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0318 13:18:22.858124 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.858130 1141442 command_runner.go:130] >       "size": "65258016",
	I0318 13:18:22.858135 1141442 command_runner.go:130] >       "uid": null,
	I0318 13:18:22.858142 1141442 command_runner.go:130] >       "username": "",
	I0318 13:18:22.858150 1141442 command_runner.go:130] >       "spec": null,
	I0318 13:18:22.858157 1141442 command_runner.go:130] >       "pinned": false
	I0318 13:18:22.858161 1141442 command_runner.go:130] >     },
	I0318 13:18:22.858167 1141442 command_runner.go:130] >     {
	I0318 13:18:22.858176 1141442 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0318 13:18:22.858185 1141442 command_runner.go:130] >       "repoTags": [
	I0318 13:18:22.858193 1141442 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0318 13:18:22.858202 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.858209 1141442 command_runner.go:130] >       "repoDigests": [
	I0318 13:18:22.858223 1141442 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0318 13:18:22.858232 1141442 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0318 13:18:22.858236 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.858240 1141442 command_runner.go:130] >       "size": "65291810",
	I0318 13:18:22.858248 1141442 command_runner.go:130] >       "uid": null,
	I0318 13:18:22.858262 1141442 command_runner.go:130] >       "username": "",
	I0318 13:18:22.858279 1141442 command_runner.go:130] >       "spec": null,
	I0318 13:18:22.858288 1141442 command_runner.go:130] >       "pinned": false
	I0318 13:18:22.858294 1141442 command_runner.go:130] >     },
	I0318 13:18:22.858301 1141442 command_runner.go:130] >     {
	I0318 13:18:22.858312 1141442 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0318 13:18:22.858322 1141442 command_runner.go:130] >       "repoTags": [
	I0318 13:18:22.858330 1141442 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0318 13:18:22.858338 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.858343 1141442 command_runner.go:130] >       "repoDigests": [
	I0318 13:18:22.858365 1141442 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0318 13:18:22.858380 1141442 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0318 13:18:22.858386 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.858397 1141442 command_runner.go:130] >       "size": "1363676",
	I0318 13:18:22.858404 1141442 command_runner.go:130] >       "uid": null,
	I0318 13:18:22.858414 1141442 command_runner.go:130] >       "username": "",
	I0318 13:18:22.858424 1141442 command_runner.go:130] >       "spec": null,
	I0318 13:18:22.858434 1141442 command_runner.go:130] >       "pinned": false
	I0318 13:18:22.858442 1141442 command_runner.go:130] >     },
	I0318 13:18:22.858448 1141442 command_runner.go:130] >     {
	I0318 13:18:22.858454 1141442 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0318 13:18:22.858464 1141442 command_runner.go:130] >       "repoTags": [
	I0318 13:18:22.858473 1141442 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0318 13:18:22.858482 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.858489 1141442 command_runner.go:130] >       "repoDigests": [
	I0318 13:18:22.858504 1141442 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0318 13:18:22.858529 1141442 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0318 13:18:22.858541 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.858548 1141442 command_runner.go:130] >       "size": "31470524",
	I0318 13:18:22.858554 1141442 command_runner.go:130] >       "uid": null,
	I0318 13:18:22.858560 1141442 command_runner.go:130] >       "username": "",
	I0318 13:18:22.858566 1141442 command_runner.go:130] >       "spec": null,
	I0318 13:18:22.858576 1141442 command_runner.go:130] >       "pinned": false
	I0318 13:18:22.858582 1141442 command_runner.go:130] >     },
	I0318 13:18:22.858591 1141442 command_runner.go:130] >     {
	I0318 13:18:22.858601 1141442 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0318 13:18:22.858610 1141442 command_runner.go:130] >       "repoTags": [
	I0318 13:18:22.858625 1141442 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0318 13:18:22.858634 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.858640 1141442 command_runner.go:130] >       "repoDigests": [
	I0318 13:18:22.858651 1141442 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0318 13:18:22.858666 1141442 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0318 13:18:22.858675 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.858682 1141442 command_runner.go:130] >       "size": "53621675",
	I0318 13:18:22.858693 1141442 command_runner.go:130] >       "uid": null,
	I0318 13:18:22.858702 1141442 command_runner.go:130] >       "username": "",
	I0318 13:18:22.858710 1141442 command_runner.go:130] >       "spec": null,
	I0318 13:18:22.858719 1141442 command_runner.go:130] >       "pinned": false
	I0318 13:18:22.858727 1141442 command_runner.go:130] >     },
	I0318 13:18:22.858733 1141442 command_runner.go:130] >     {
	I0318 13:18:22.858746 1141442 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0318 13:18:22.858753 1141442 command_runner.go:130] >       "repoTags": [
	I0318 13:18:22.858759 1141442 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0318 13:18:22.858768 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.858775 1141442 command_runner.go:130] >       "repoDigests": [
	I0318 13:18:22.858789 1141442 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0318 13:18:22.858804 1141442 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0318 13:18:22.858812 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.858820 1141442 command_runner.go:130] >       "size": "295456551",
	I0318 13:18:22.858829 1141442 command_runner.go:130] >       "uid": {
	I0318 13:18:22.858833 1141442 command_runner.go:130] >         "value": "0"
	I0318 13:18:22.858843 1141442 command_runner.go:130] >       },
	I0318 13:18:22.858850 1141442 command_runner.go:130] >       "username": "",
	I0318 13:18:22.858857 1141442 command_runner.go:130] >       "spec": null,
	I0318 13:18:22.858867 1141442 command_runner.go:130] >       "pinned": false
	I0318 13:18:22.858872 1141442 command_runner.go:130] >     },
	I0318 13:18:22.858882 1141442 command_runner.go:130] >     {
	I0318 13:18:22.858892 1141442 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0318 13:18:22.858901 1141442 command_runner.go:130] >       "repoTags": [
	I0318 13:18:22.858913 1141442 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0318 13:18:22.858919 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.858928 1141442 command_runner.go:130] >       "repoDigests": [
	I0318 13:18:22.858938 1141442 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0318 13:18:22.858961 1141442 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0318 13:18:22.858971 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.858978 1141442 command_runner.go:130] >       "size": "127226832",
	I0318 13:18:22.858987 1141442 command_runner.go:130] >       "uid": {
	I0318 13:18:22.858996 1141442 command_runner.go:130] >         "value": "0"
	I0318 13:18:22.859002 1141442 command_runner.go:130] >       },
	I0318 13:18:22.859011 1141442 command_runner.go:130] >       "username": "",
	I0318 13:18:22.859019 1141442 command_runner.go:130] >       "spec": null,
	I0318 13:18:22.859028 1141442 command_runner.go:130] >       "pinned": false
	I0318 13:18:22.859034 1141442 command_runner.go:130] >     },
	I0318 13:18:22.859041 1141442 command_runner.go:130] >     {
	I0318 13:18:22.859047 1141442 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0318 13:18:22.859056 1141442 command_runner.go:130] >       "repoTags": [
	I0318 13:18:22.859065 1141442 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0318 13:18:22.859075 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.859082 1141442 command_runner.go:130] >       "repoDigests": [
	I0318 13:18:22.859113 1141442 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0318 13:18:22.859129 1141442 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0318 13:18:22.859137 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.859143 1141442 command_runner.go:130] >       "size": "123261750",
	I0318 13:18:22.859150 1141442 command_runner.go:130] >       "uid": {
	I0318 13:18:22.859155 1141442 command_runner.go:130] >         "value": "0"
	I0318 13:18:22.859163 1141442 command_runner.go:130] >       },
	I0318 13:18:22.859170 1141442 command_runner.go:130] >       "username": "",
	I0318 13:18:22.859180 1141442 command_runner.go:130] >       "spec": null,
	I0318 13:18:22.859186 1141442 command_runner.go:130] >       "pinned": false
	I0318 13:18:22.859196 1141442 command_runner.go:130] >     },
	I0318 13:18:22.859202 1141442 command_runner.go:130] >     {
	I0318 13:18:22.859215 1141442 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0318 13:18:22.859224 1141442 command_runner.go:130] >       "repoTags": [
	I0318 13:18:22.859232 1141442 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0318 13:18:22.859241 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.859248 1141442 command_runner.go:130] >       "repoDigests": [
	I0318 13:18:22.859258 1141442 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0318 13:18:22.859274 1141442 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0318 13:18:22.859286 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.859299 1141442 command_runner.go:130] >       "size": "74749335",
	I0318 13:18:22.859309 1141442 command_runner.go:130] >       "uid": null,
	I0318 13:18:22.859316 1141442 command_runner.go:130] >       "username": "",
	I0318 13:18:22.859325 1141442 command_runner.go:130] >       "spec": null,
	I0318 13:18:22.859331 1141442 command_runner.go:130] >       "pinned": false
	I0318 13:18:22.859339 1141442 command_runner.go:130] >     },
	I0318 13:18:22.859345 1141442 command_runner.go:130] >     {
	I0318 13:18:22.859359 1141442 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0318 13:18:22.859363 1141442 command_runner.go:130] >       "repoTags": [
	I0318 13:18:22.859374 1141442 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0318 13:18:22.859380 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.859388 1141442 command_runner.go:130] >       "repoDigests": [
	I0318 13:18:22.859401 1141442 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0318 13:18:22.859416 1141442 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0318 13:18:22.859425 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.859431 1141442 command_runner.go:130] >       "size": "61551410",
	I0318 13:18:22.859440 1141442 command_runner.go:130] >       "uid": {
	I0318 13:18:22.859444 1141442 command_runner.go:130] >         "value": "0"
	I0318 13:18:22.859448 1141442 command_runner.go:130] >       },
	I0318 13:18:22.859454 1141442 command_runner.go:130] >       "username": "",
	I0318 13:18:22.859461 1141442 command_runner.go:130] >       "spec": null,
	I0318 13:18:22.859472 1141442 command_runner.go:130] >       "pinned": false
	I0318 13:18:22.859478 1141442 command_runner.go:130] >     },
	I0318 13:18:22.859487 1141442 command_runner.go:130] >     {
	I0318 13:18:22.859497 1141442 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0318 13:18:22.859507 1141442 command_runner.go:130] >       "repoTags": [
	I0318 13:18:22.859515 1141442 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0318 13:18:22.859523 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.859530 1141442 command_runner.go:130] >       "repoDigests": [
	I0318 13:18:22.859543 1141442 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0318 13:18:22.859552 1141442 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0318 13:18:22.859558 1141442 command_runner.go:130] >       ],
	I0318 13:18:22.859569 1141442 command_runner.go:130] >       "size": "750414",
	I0318 13:18:22.859579 1141442 command_runner.go:130] >       "uid": {
	I0318 13:18:22.859586 1141442 command_runner.go:130] >         "value": "65535"
	I0318 13:18:22.859594 1141442 command_runner.go:130] >       },
	I0318 13:18:22.859607 1141442 command_runner.go:130] >       "username": "",
	I0318 13:18:22.859616 1141442 command_runner.go:130] >       "spec": null,
	I0318 13:18:22.859623 1141442 command_runner.go:130] >       "pinned": true
	I0318 13:18:22.859630 1141442 command_runner.go:130] >     }
	I0318 13:18:22.859634 1141442 command_runner.go:130] >   ]
	I0318 13:18:22.859637 1141442 command_runner.go:130] > }
	I0318 13:18:22.859778 1141442 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 13:18:22.859791 1141442 cache_images.go:84] Images are preloaded, skipping loading
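	The listing above is the image inventory CRI-O reports before the preload check concludes that every required image is already present. As an illustration only (not minikube's actual code), a minimal Go sketch of the same check could shell out to `crictl images -o json` — whose output shape matches the JSON above — and compare the repo tags against the expected set for Kubernetes v1.28.4; the helper names and the expected-tag list below are assumptions taken from this log, not from the minikube source.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// imageList mirrors the fields of interest in `crictl images -o json`.
	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		// Ask the container runtime for its current image inventory.
		out, err := exec.Command("sudo", "crictl", "images", "-o", "json").Output()
		if err != nil {
			panic(err)
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		// Expected tags are taken from the listing in this log (Kubernetes v1.28.4).
		expected := []string{
			"registry.k8s.io/kube-apiserver:v1.28.4",
			"registry.k8s.io/kube-controller-manager:v1.28.4",
			"registry.k8s.io/kube-scheduler:v1.28.4",
			"registry.k8s.io/kube-proxy:v1.28.4",
			"registry.k8s.io/etcd:3.5.9-0",
			"registry.k8s.io/coredns/coredns:v1.10.1",
			"registry.k8s.io/pause:3.9",
			"gcr.io/k8s-minikube/storage-provisioner:v5",
		}
		for _, tag := range expected {
			if !have[tag] {
				fmt.Println("missing:", tag)
			}
		}
	}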
	I0318 13:18:22.859798 1141442 kubeadm.go:928] updating node { 192.168.39.156 8443 v1.28.4 crio true true} ...
	I0318 13:18:22.859950 1141442 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-229365 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.156
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-229365 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
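	The kubelet unit fragment above is generated from the node parameters logged by kubeadm.go:928 (node IP 192.168.39.156, cluster multinode-229365, Kubernetes v1.28.4). As a readability aid only — kubeletExecStart below is a hypothetical helper, not minikube's generator — the ExecStart line can be reassembled from those values like this:

	package main

	import (
		"fmt"
		"strings"
	)

	// kubeletExecStart rebuilds the ExecStart line from the systemd unit above
	// using the Kubernetes version, node name, and node IP taken from the log.
	func kubeletExecStart(version, nodeName, nodeIP string) string {
		bin := fmt.Sprintf("/var/lib/minikube/binaries/%s/kubelet", version)
		flags := []string{
			"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf",
			"--config=/var/lib/kubelet/config.yaml",
			"--hostname-override=" + nodeName,
			"--kubeconfig=/etc/kubernetes/kubelet.conf",
			"--node-ip=" + nodeIP,
		}
		return bin + " " + strings.Join(flags, " ")
	}

	func main() {
		fmt.Println(kubeletExecStart("v1.28.4", "multinode-229365", "192.168.39.156"))
	}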
	I0318 13:18:22.860048 1141442 ssh_runner.go:195] Run: crio config
	I0318 13:18:22.903754 1141442 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0318 13:18:22.903779 1141442 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0318 13:18:22.903786 1141442 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0318 13:18:22.903790 1141442 command_runner.go:130] > #
	I0318 13:18:22.903811 1141442 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0318 13:18:22.903821 1141442 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0318 13:18:22.903838 1141442 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0318 13:18:22.903856 1141442 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0318 13:18:22.903862 1141442 command_runner.go:130] > # reload'.
	I0318 13:18:22.903891 1141442 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0318 13:18:22.903904 1141442 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0318 13:18:22.903916 1141442 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0318 13:18:22.903928 1141442 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0318 13:18:22.903936 1141442 command_runner.go:130] > [crio]
	I0318 13:18:22.903945 1141442 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0318 13:18:22.903955 1141442 command_runner.go:130] > # containers images, in this directory.
	I0318 13:18:22.903964 1141442 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0318 13:18:22.903983 1141442 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0318 13:18:22.903996 1141442 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0318 13:18:22.904009 1141442 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores its images in this directory, separately from Root.
	I0318 13:18:22.904017 1141442 command_runner.go:130] > # imagestore = ""
	I0318 13:18:22.904027 1141442 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0318 13:18:22.904035 1141442 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0318 13:18:22.904042 1141442 command_runner.go:130] > storage_driver = "overlay"
	I0318 13:18:22.904050 1141442 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0318 13:18:22.904059 1141442 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0318 13:18:22.904067 1141442 command_runner.go:130] > storage_option = [
	I0318 13:18:22.904074 1141442 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0318 13:18:22.904082 1141442 command_runner.go:130] > ]
	I0318 13:18:22.904092 1141442 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0318 13:18:22.904111 1141442 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0318 13:18:22.904122 1141442 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0318 13:18:22.904130 1141442 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0318 13:18:22.904140 1141442 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0318 13:18:22.904148 1141442 command_runner.go:130] > # always happen on a node reboot
	I0318 13:18:22.904159 1141442 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0318 13:18:22.904176 1141442 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0318 13:18:22.904188 1141442 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0318 13:18:22.904195 1141442 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0318 13:18:22.904206 1141442 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0318 13:18:22.904219 1141442 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0318 13:18:22.904234 1141442 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0318 13:18:22.904241 1141442 command_runner.go:130] > # internal_wipe = true
	I0318 13:18:22.904257 1141442 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0318 13:18:22.904269 1141442 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0318 13:18:22.904285 1141442 command_runner.go:130] > # internal_repair = false
	I0318 13:18:22.904297 1141442 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0318 13:18:22.904311 1141442 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0318 13:18:22.904336 1141442 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0318 13:18:22.904349 1141442 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0318 13:18:22.904362 1141442 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0318 13:18:22.904369 1141442 command_runner.go:130] > [crio.api]
	I0318 13:18:22.904378 1141442 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0318 13:18:22.904388 1141442 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0318 13:18:22.904399 1141442 command_runner.go:130] > # IP address on which the stream server will listen.
	I0318 13:18:22.904410 1141442 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0318 13:18:22.904421 1141442 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0318 13:18:22.904431 1141442 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0318 13:18:22.904438 1141442 command_runner.go:130] > # stream_port = "0"
	I0318 13:18:22.904450 1141442 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0318 13:18:22.904457 1141442 command_runner.go:130] > # stream_enable_tls = false
	I0318 13:18:22.904467 1141442 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0318 13:18:22.904476 1141442 command_runner.go:130] > # stream_idle_timeout = ""
	I0318 13:18:22.904486 1141442 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0318 13:18:22.904498 1141442 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0318 13:18:22.904507 1141442 command_runner.go:130] > # minutes.
	I0318 13:18:22.904517 1141442 command_runner.go:130] > # stream_tls_cert = ""
	I0318 13:18:22.904532 1141442 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0318 13:18:22.904544 1141442 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0318 13:18:22.904554 1141442 command_runner.go:130] > # stream_tls_key = ""
	I0318 13:18:22.904562 1141442 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0318 13:18:22.904571 1141442 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0318 13:18:22.904588 1141442 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0318 13:18:22.904594 1141442 command_runner.go:130] > # stream_tls_ca = ""
	I0318 13:18:22.904601 1141442 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0318 13:18:22.904608 1141442 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0318 13:18:22.904615 1141442 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0318 13:18:22.904622 1141442 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0318 13:18:22.904628 1141442 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0318 13:18:22.904638 1141442 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0318 13:18:22.904647 1141442 command_runner.go:130] > [crio.runtime]
	I0318 13:18:22.904662 1141442 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0318 13:18:22.904675 1141442 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0318 13:18:22.904684 1141442 command_runner.go:130] > # "nofile=1024:2048"
	I0318 13:18:22.904693 1141442 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0318 13:18:22.904704 1141442 command_runner.go:130] > # default_ulimits = [
	I0318 13:18:22.904709 1141442 command_runner.go:130] > # ]
	I0318 13:18:22.904722 1141442 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0318 13:18:22.904729 1141442 command_runner.go:130] > # no_pivot = false
	I0318 13:18:22.904739 1141442 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0318 13:18:22.904750 1141442 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0318 13:18:22.904758 1141442 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0318 13:18:22.904763 1141442 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0318 13:18:22.904770 1141442 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0318 13:18:22.904776 1141442 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0318 13:18:22.904783 1141442 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0318 13:18:22.904787 1141442 command_runner.go:130] > # Cgroup setting for conmon
	I0318 13:18:22.904796 1141442 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0318 13:18:22.904800 1141442 command_runner.go:130] > conmon_cgroup = "pod"
	I0318 13:18:22.904807 1141442 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0318 13:18:22.904812 1141442 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0318 13:18:22.904821 1141442 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0318 13:18:22.904826 1141442 command_runner.go:130] > conmon_env = [
	I0318 13:18:22.904838 1141442 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0318 13:18:22.904851 1141442 command_runner.go:130] > ]
	I0318 13:18:22.904864 1141442 command_runner.go:130] > # Additional environment variables to set for all the
	I0318 13:18:22.904876 1141442 command_runner.go:130] > # containers. These are overridden if set in the
	I0318 13:18:22.904888 1141442 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0318 13:18:22.904897 1141442 command_runner.go:130] > # default_env = [
	I0318 13:18:22.904902 1141442 command_runner.go:130] > # ]
	I0318 13:18:22.904914 1141442 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0318 13:18:22.904927 1141442 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0318 13:18:22.904937 1141442 command_runner.go:130] > # selinux = false
	I0318 13:18:22.904947 1141442 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0318 13:18:22.905694 1141442 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0318 13:18:22.905721 1141442 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0318 13:18:22.905729 1141442 command_runner.go:130] > # seccomp_profile = ""
	I0318 13:18:22.905739 1141442 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0318 13:18:22.905756 1141442 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0318 13:18:22.905767 1141442 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0318 13:18:22.905774 1141442 command_runner.go:130] > # which might increase security.
	I0318 13:18:22.905787 1141442 command_runner.go:130] > # This option is currently deprecated,
	I0318 13:18:22.905809 1141442 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0318 13:18:22.905819 1141442 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0318 13:18:22.905835 1141442 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0318 13:18:22.905845 1141442 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0318 13:18:22.905862 1141442 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0318 13:18:22.905878 1141442 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0318 13:18:22.905887 1141442 command_runner.go:130] > # This option supports live configuration reload.
	I0318 13:18:22.905894 1141442 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0318 13:18:22.905909 1141442 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0318 13:18:22.905916 1141442 command_runner.go:130] > # the cgroup blockio controller.
	I0318 13:18:22.905922 1141442 command_runner.go:130] > # blockio_config_file = ""
	I0318 13:18:22.905932 1141442 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0318 13:18:22.905944 1141442 command_runner.go:130] > # blockio parameters.
	I0318 13:18:22.905950 1141442 command_runner.go:130] > # blockio_reload = false
	I0318 13:18:22.905961 1141442 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0318 13:18:22.905967 1141442 command_runner.go:130] > # irqbalance daemon.
	I0318 13:18:22.905982 1141442 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0318 13:18:22.905991 1141442 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0318 13:18:22.906001 1141442 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0318 13:18:22.906018 1141442 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0318 13:18:22.906031 1141442 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0318 13:18:22.906047 1141442 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0318 13:18:22.906054 1141442 command_runner.go:130] > # This option supports live configuration reload.
	I0318 13:18:22.906060 1141442 command_runner.go:130] > # rdt_config_file = ""
	I0318 13:18:22.906068 1141442 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0318 13:18:22.906080 1141442 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0318 13:18:22.906116 1141442 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0318 13:18:22.906122 1141442 command_runner.go:130] > # separate_pull_cgroup = ""
	I0318 13:18:22.906138 1141442 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0318 13:18:22.906150 1141442 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0318 13:18:22.906156 1141442 command_runner.go:130] > # will be added.
	I0318 13:18:22.906162 1141442 command_runner.go:130] > # default_capabilities = [
	I0318 13:18:22.906168 1141442 command_runner.go:130] > # 	"CHOWN",
	I0318 13:18:22.906178 1141442 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0318 13:18:22.906184 1141442 command_runner.go:130] > # 	"FSETID",
	I0318 13:18:22.906189 1141442 command_runner.go:130] > # 	"FOWNER",
	I0318 13:18:22.906200 1141442 command_runner.go:130] > # 	"SETGID",
	I0318 13:18:22.906206 1141442 command_runner.go:130] > # 	"SETUID",
	I0318 13:18:22.906211 1141442 command_runner.go:130] > # 	"SETPCAP",
	I0318 13:18:22.906223 1141442 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0318 13:18:22.906229 1141442 command_runner.go:130] > # 	"KILL",
	I0318 13:18:22.906233 1141442 command_runner.go:130] > # ]
	I0318 13:18:22.906248 1141442 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0318 13:18:22.906262 1141442 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0318 13:18:22.906277 1141442 command_runner.go:130] > # add_inheritable_capabilities = false
	I0318 13:18:22.906286 1141442 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0318 13:18:22.906299 1141442 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0318 13:18:22.906305 1141442 command_runner.go:130] > # default_sysctls = [
	I0318 13:18:22.906310 1141442 command_runner.go:130] > # ]
	I0318 13:18:22.906316 1141442 command_runner.go:130] > # List of devices on the host that a
	I0318 13:18:22.906330 1141442 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0318 13:18:22.906335 1141442 command_runner.go:130] > # allowed_devices = [
	I0318 13:18:22.906341 1141442 command_runner.go:130] > # 	"/dev/fuse",
	I0318 13:18:22.906346 1141442 command_runner.go:130] > # ]
	I0318 13:18:22.906354 1141442 command_runner.go:130] > # List of additional devices, specified as
	I0318 13:18:22.906369 1141442 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0318 13:18:22.906377 1141442 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0318 13:18:22.906385 1141442 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0318 13:18:22.906396 1141442 command_runner.go:130] > # additional_devices = [
	I0318 13:18:22.906401 1141442 command_runner.go:130] > # ]
	I0318 13:18:22.906409 1141442 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0318 13:18:22.906414 1141442 command_runner.go:130] > # cdi_spec_dirs = [
	I0318 13:18:22.906430 1141442 command_runner.go:130] > # 	"/etc/cdi",
	I0318 13:18:22.906439 1141442 command_runner.go:130] > # 	"/var/run/cdi",
	I0318 13:18:22.906445 1141442 command_runner.go:130] > # ]
	I0318 13:18:22.906458 1141442 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0318 13:18:22.906468 1141442 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0318 13:18:22.906519 1141442 command_runner.go:130] > # Defaults to false.
	I0318 13:18:22.906549 1141442 command_runner.go:130] > # device_ownership_from_security_context = false
	I0318 13:18:22.906564 1141442 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0318 13:18:22.906582 1141442 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0318 13:18:22.906588 1141442 command_runner.go:130] > # hooks_dir = [
	I0318 13:18:22.906597 1141442 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0318 13:18:22.906601 1141442 command_runner.go:130] > # ]
	I0318 13:18:22.906615 1141442 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0318 13:18:22.906623 1141442 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0318 13:18:22.906631 1141442 command_runner.go:130] > # its default mounts from the following two files:
	I0318 13:18:22.906635 1141442 command_runner.go:130] > #
	I0318 13:18:22.906649 1141442 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0318 13:18:22.906658 1141442 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0318 13:18:22.906666 1141442 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0318 13:18:22.906684 1141442 command_runner.go:130] > #
	I0318 13:18:22.906694 1141442 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0318 13:18:22.906706 1141442 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0318 13:18:22.906725 1141442 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0318 13:18:22.906737 1141442 command_runner.go:130] > #      only add mounts it finds in this file.
	I0318 13:18:22.906744 1141442 command_runner.go:130] > #
	I0318 13:18:22.906756 1141442 command_runner.go:130] > # default_mounts_file = ""
	I0318 13:18:22.906772 1141442 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0318 13:18:22.906791 1141442 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0318 13:18:22.906799 1141442 command_runner.go:130] > pids_limit = 1024
	I0318 13:18:22.906815 1141442 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0318 13:18:22.906828 1141442 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0318 13:18:22.906837 1141442 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0318 13:18:22.906862 1141442 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0318 13:18:22.906872 1141442 command_runner.go:130] > # log_size_max = -1
	I0318 13:18:22.906883 1141442 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0318 13:18:22.906897 1141442 command_runner.go:130] > # log_to_journald = false
	I0318 13:18:22.906907 1141442 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0318 13:18:22.906917 1141442 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0318 13:18:22.906930 1141442 command_runner.go:130] > # Path to directory for container attach sockets.
	I0318 13:18:22.906944 1141442 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0318 13:18:22.906956 1141442 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0318 13:18:22.906974 1141442 command_runner.go:130] > # bind_mount_prefix = ""
	I0318 13:18:22.906991 1141442 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0318 13:18:22.907000 1141442 command_runner.go:130] > # read_only = false
	I0318 13:18:22.907010 1141442 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0318 13:18:22.907024 1141442 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0318 13:18:22.907034 1141442 command_runner.go:130] > # live configuration reload.
	I0318 13:18:22.907040 1141442 command_runner.go:130] > # log_level = "info"
	I0318 13:18:22.907052 1141442 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0318 13:18:22.907068 1141442 command_runner.go:130] > # This option supports live configuration reload.
	I0318 13:18:22.907077 1141442 command_runner.go:130] > # log_filter = ""
	I0318 13:18:22.907086 1141442 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0318 13:18:22.907103 1141442 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0318 13:18:22.907112 1141442 command_runner.go:130] > # separated by comma.
	I0318 13:18:22.907124 1141442 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0318 13:18:22.907133 1141442 command_runner.go:130] > # uid_mappings = ""
	I0318 13:18:22.907147 1141442 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0318 13:18:22.907157 1141442 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0318 13:18:22.907163 1141442 command_runner.go:130] > # separated by comma.
	I0318 13:18:22.907183 1141442 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0318 13:18:22.907192 1141442 command_runner.go:130] > # gid_mappings = ""
	I0318 13:18:22.907205 1141442 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0318 13:18:22.907223 1141442 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0318 13:18:22.907299 1141442 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0318 13:18:22.907641 1141442 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0318 13:18:22.907653 1141442 command_runner.go:130] > # minimum_mappable_uid = -1
	I0318 13:18:22.907663 1141442 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0318 13:18:22.907673 1141442 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0318 13:18:22.907688 1141442 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0318 13:18:22.907706 1141442 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0318 13:18:22.907713 1141442 command_runner.go:130] > # minimum_mappable_gid = -1
	I0318 13:18:22.907724 1141442 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0318 13:18:22.907738 1141442 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0318 13:18:22.907748 1141442 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0318 13:18:22.907757 1141442 command_runner.go:130] > # ctr_stop_timeout = 30
	I0318 13:18:22.907773 1141442 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0318 13:18:22.907786 1141442 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0318 13:18:22.907797 1141442 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0318 13:18:22.907809 1141442 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0318 13:18:22.907817 1141442 command_runner.go:130] > drop_infra_ctr = false
	I0318 13:18:22.907828 1141442 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0318 13:18:22.907841 1141442 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0318 13:18:22.907858 1141442 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0318 13:18:22.907870 1141442 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0318 13:18:22.907885 1141442 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0318 13:18:22.907899 1141442 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0318 13:18:22.907913 1141442 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0318 13:18:22.907926 1141442 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0318 13:18:22.907936 1141442 command_runner.go:130] > # shared_cpuset = ""
	I0318 13:18:22.907950 1141442 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0318 13:18:22.907961 1141442 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0318 13:18:22.907972 1141442 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0318 13:18:22.907988 1141442 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0318 13:18:22.907997 1141442 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0318 13:18:22.908010 1141442 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0318 13:18:22.908024 1141442 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0318 13:18:22.908036 1141442 command_runner.go:130] > # enable_criu_support = false
	I0318 13:18:22.908048 1141442 command_runner.go:130] > # Enable/disable the generation of the container,
	I0318 13:18:22.908062 1141442 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0318 13:18:22.908074 1141442 command_runner.go:130] > # enable_pod_events = false
	I0318 13:18:22.908084 1141442 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0318 13:18:22.908099 1141442 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0318 13:18:22.908106 1141442 command_runner.go:130] > # default_runtime = "runc"
	I0318 13:18:22.908111 1141442 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0318 13:18:22.908121 1141442 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0318 13:18:22.908131 1141442 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0318 13:18:22.908138 1141442 command_runner.go:130] > # creation as a file is not desired either.
	I0318 13:18:22.908147 1141442 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0318 13:18:22.908158 1141442 command_runner.go:130] > # the hostname is being managed dynamically.
	I0318 13:18:22.908173 1141442 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0318 13:18:22.908182 1141442 command_runner.go:130] > # ]
	I0318 13:18:22.908192 1141442 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0318 13:18:22.908206 1141442 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0318 13:18:22.908217 1141442 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0318 13:18:22.908223 1141442 command_runner.go:130] > # Each entry in the table should follow the format:
	I0318 13:18:22.908230 1141442 command_runner.go:130] > #
	I0318 13:18:22.908237 1141442 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0318 13:18:22.908253 1141442 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0318 13:18:22.908267 1141442 command_runner.go:130] > # runtime_type = "oci"
	I0318 13:18:22.908359 1141442 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0318 13:18:22.908377 1141442 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0318 13:18:22.908393 1141442 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0318 13:18:22.908410 1141442 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0318 13:18:22.908425 1141442 command_runner.go:130] > # monitor_env = []
	I0318 13:18:22.908439 1141442 command_runner.go:130] > # privileged_without_host_devices = false
	I0318 13:18:22.908456 1141442 command_runner.go:130] > # allowed_annotations = []
	I0318 13:18:22.908473 1141442 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0318 13:18:22.908487 1141442 command_runner.go:130] > # Where:
	I0318 13:18:22.908501 1141442 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0318 13:18:22.908523 1141442 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0318 13:18:22.908539 1141442 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0318 13:18:22.908552 1141442 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0318 13:18:22.908561 1141442 command_runner.go:130] > #   in $PATH.
	I0318 13:18:22.908572 1141442 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0318 13:18:22.908581 1141442 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0318 13:18:22.908589 1141442 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0318 13:18:22.908598 1141442 command_runner.go:130] > #   state.
	I0318 13:18:22.908609 1141442 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0318 13:18:22.908622 1141442 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0318 13:18:22.908636 1141442 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0318 13:18:22.908647 1141442 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0318 13:18:22.908660 1141442 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0318 13:18:22.908671 1141442 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0318 13:18:22.908680 1141442 command_runner.go:130] > #   The currently recognized values are:
	I0318 13:18:22.908693 1141442 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0318 13:18:22.908708 1141442 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0318 13:18:22.908722 1141442 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0318 13:18:22.908738 1141442 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0318 13:18:22.908754 1141442 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0318 13:18:22.908768 1141442 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0318 13:18:22.908778 1141442 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0318 13:18:22.908791 1141442 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0318 13:18:22.908805 1141442 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0318 13:18:22.908819 1141442 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0318 13:18:22.908830 1141442 command_runner.go:130] > #   deprecated option "conmon".
	I0318 13:18:22.908844 1141442 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0318 13:18:22.908856 1141442 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0318 13:18:22.908868 1141442 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0318 13:18:22.908876 1141442 command_runner.go:130] > #   should be moved to the container's cgroup
	I0318 13:18:22.908889 1141442 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0318 13:18:22.908901 1141442 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0318 13:18:22.908916 1141442 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0318 13:18:22.908928 1141442 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0318 13:18:22.908937 1141442 command_runner.go:130] > #
	I0318 13:18:22.908949 1141442 command_runner.go:130] > # Using the seccomp notifier feature:
	I0318 13:18:22.908956 1141442 command_runner.go:130] > #
	I0318 13:18:22.908967 1141442 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0318 13:18:22.908977 1141442 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0318 13:18:22.908985 1141442 command_runner.go:130] > #
	I0318 13:18:22.908999 1141442 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0318 13:18:22.909012 1141442 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0318 13:18:22.909021 1141442 command_runner.go:130] > #
	I0318 13:18:22.909034 1141442 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0318 13:18:22.909043 1141442 command_runner.go:130] > # feature.
	I0318 13:18:22.909051 1141442 command_runner.go:130] > #
	I0318 13:18:22.909062 1141442 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0318 13:18:22.909071 1141442 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0318 13:18:22.909084 1141442 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0318 13:18:22.909097 1141442 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0318 13:18:22.909111 1141442 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0318 13:18:22.909119 1141442 command_runner.go:130] > #
	I0318 13:18:22.909130 1141442 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0318 13:18:22.909143 1141442 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0318 13:18:22.909155 1141442 command_runner.go:130] > #
	I0318 13:18:22.909165 1141442 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0318 13:18:22.909176 1141442 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0318 13:18:22.909185 1141442 command_runner.go:130] > #
	I0318 13:18:22.909198 1141442 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0318 13:18:22.909211 1141442 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0318 13:18:22.909220 1141442 command_runner.go:130] > # limitation.
	I0318 13:18:22.909230 1141442 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0318 13:18:22.909240 1141442 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0318 13:18:22.909248 1141442 command_runner.go:130] > runtime_type = "oci"
	I0318 13:18:22.909255 1141442 command_runner.go:130] > runtime_root = "/run/runc"
	I0318 13:18:22.909262 1141442 command_runner.go:130] > runtime_config_path = ""
	I0318 13:18:22.909274 1141442 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0318 13:18:22.909284 1141442 command_runner.go:130] > monitor_cgroup = "pod"
	I0318 13:18:22.909294 1141442 command_runner.go:130] > monitor_exec_cgroup = ""
	I0318 13:18:22.909303 1141442 command_runner.go:130] > monitor_env = [
	I0318 13:18:22.909315 1141442 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0318 13:18:22.909323 1141442 command_runner.go:130] > ]
	I0318 13:18:22.909334 1141442 command_runner.go:130] > privileged_without_host_devices = false
	I0318 13:18:22.909343 1141442 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0318 13:18:22.909357 1141442 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0318 13:18:22.909370 1141442 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0318 13:18:22.909386 1141442 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0318 13:18:22.909401 1141442 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0318 13:18:22.909413 1141442 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0318 13:18:22.909431 1141442 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0318 13:18:22.909441 1141442 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0318 13:18:22.909453 1141442 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0318 13:18:22.909468 1141442 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0318 13:18:22.909485 1141442 command_runner.go:130] > # Example:
	I0318 13:18:22.909496 1141442 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0318 13:18:22.909507 1141442 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0318 13:18:22.909518 1141442 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0318 13:18:22.909529 1141442 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0318 13:18:22.909536 1141442 command_runner.go:130] > # cpuset = 0
	I0318 13:18:22.909541 1141442 command_runner.go:130] > # cpushares = "0-1"
	I0318 13:18:22.909545 1141442 command_runner.go:130] > # Where:
	I0318 13:18:22.909552 1141442 command_runner.go:130] > # The workload name is workload-type.
	I0318 13:18:22.909567 1141442 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0318 13:18:22.909577 1141442 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0318 13:18:22.909586 1141442 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0318 13:18:22.909599 1141442 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0318 13:18:22.909607 1141442 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0318 13:18:22.909615 1141442 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0318 13:18:22.909621 1141442 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0318 13:18:22.909625 1141442 command_runner.go:130] > # Default value is set to true
	I0318 13:18:22.909630 1141442 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0318 13:18:22.909642 1141442 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0318 13:18:22.909650 1141442 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0318 13:18:22.909658 1141442 command_runner.go:130] > # Default value is set to 'false'
	I0318 13:18:22.909664 1141442 command_runner.go:130] > # disable_hostport_mapping = false
	I0318 13:18:22.909674 1141442 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0318 13:18:22.909678 1141442 command_runner.go:130] > #
	I0318 13:18:22.909687 1141442 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0318 13:18:22.909697 1141442 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0318 13:18:22.909707 1141442 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0318 13:18:22.909715 1141442 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0318 13:18:22.909727 1141442 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0318 13:18:22.909736 1141442 command_runner.go:130] > [crio.image]
	I0318 13:18:22.909746 1141442 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0318 13:18:22.909756 1141442 command_runner.go:130] > # default_transport = "docker://"
	I0318 13:18:22.909769 1141442 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0318 13:18:22.909782 1141442 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0318 13:18:22.909791 1141442 command_runner.go:130] > # global_auth_file = ""
	I0318 13:18:22.909802 1141442 command_runner.go:130] > # The image used to instantiate infra containers.
	I0318 13:18:22.909816 1141442 command_runner.go:130] > # This option supports live configuration reload.
	I0318 13:18:22.909827 1141442 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0318 13:18:22.909841 1141442 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0318 13:18:22.909854 1141442 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0318 13:18:22.909865 1141442 command_runner.go:130] > # This option supports live configuration reload.
	I0318 13:18:22.909875 1141442 command_runner.go:130] > # pause_image_auth_file = ""
	I0318 13:18:22.909887 1141442 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0318 13:18:22.909899 1141442 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0318 13:18:22.909910 1141442 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0318 13:18:22.909925 1141442 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0318 13:18:22.909937 1141442 command_runner.go:130] > # pause_command = "/pause"
	I0318 13:18:22.909947 1141442 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0318 13:18:22.909960 1141442 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0318 13:18:22.909972 1141442 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0318 13:18:22.909984 1141442 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0318 13:18:22.909995 1141442 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0318 13:18:22.910005 1141442 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0318 13:18:22.910012 1141442 command_runner.go:130] > # pinned_images = [
	I0318 13:18:22.910021 1141442 command_runner.go:130] > # ]
	I0318 13:18:22.910035 1141442 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0318 13:18:22.910048 1141442 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0318 13:18:22.910061 1141442 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0318 13:18:22.910074 1141442 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0318 13:18:22.910084 1141442 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0318 13:18:22.910094 1141442 command_runner.go:130] > # signature_policy = ""
	I0318 13:18:22.910103 1141442 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0318 13:18:22.910115 1141442 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0318 13:18:22.910129 1141442 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0318 13:18:22.910142 1141442 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0318 13:18:22.910154 1141442 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0318 13:18:22.910164 1141442 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0318 13:18:22.910177 1141442 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0318 13:18:22.910188 1141442 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0318 13:18:22.910195 1141442 command_runner.go:130] > # changing them here.
	I0318 13:18:22.910201 1141442 command_runner.go:130] > # insecure_registries = [
	I0318 13:18:22.910209 1141442 command_runner.go:130] > # ]
	I0318 13:18:22.910232 1141442 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0318 13:18:22.910244 1141442 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0318 13:18:22.910253 1141442 command_runner.go:130] > # image_volumes = "mkdir"
	I0318 13:18:22.910264 1141442 command_runner.go:130] > # Temporary directory to use for storing big files
	I0318 13:18:22.910275 1141442 command_runner.go:130] > # big_files_temporary_dir = ""
	I0318 13:18:22.910287 1141442 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0318 13:18:22.910293 1141442 command_runner.go:130] > # CNI plugins.
	I0318 13:18:22.910298 1141442 command_runner.go:130] > [crio.network]
	I0318 13:18:22.910312 1141442 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0318 13:18:22.910324 1141442 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0318 13:18:22.910338 1141442 command_runner.go:130] > # cni_default_network = ""
	I0318 13:18:22.910355 1141442 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0318 13:18:22.910366 1141442 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0318 13:18:22.910378 1141442 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0318 13:18:22.910387 1141442 command_runner.go:130] > # plugin_dirs = [
	I0318 13:18:22.910394 1141442 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0318 13:18:22.910397 1141442 command_runner.go:130] > # ]
	I0318 13:18:22.910406 1141442 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0318 13:18:22.910415 1141442 command_runner.go:130] > [crio.metrics]
	I0318 13:18:22.910424 1141442 command_runner.go:130] > # Globally enable or disable metrics support.
	I0318 13:18:22.910434 1141442 command_runner.go:130] > enable_metrics = true
	I0318 13:18:22.910445 1141442 command_runner.go:130] > # Specify enabled metrics collectors.
	I0318 13:18:22.910456 1141442 command_runner.go:130] > # Per default all metrics are enabled.
	I0318 13:18:22.910469 1141442 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0318 13:18:22.910481 1141442 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0318 13:18:22.910508 1141442 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0318 13:18:22.910530 1141442 command_runner.go:130] > # metrics_collectors = [
	I0318 13:18:22.910543 1141442 command_runner.go:130] > # 	"operations",
	I0318 13:18:22.910554 1141442 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0318 13:18:22.910564 1141442 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0318 13:18:22.910574 1141442 command_runner.go:130] > # 	"operations_errors",
	I0318 13:18:22.910584 1141442 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0318 13:18:22.910592 1141442 command_runner.go:130] > # 	"image_pulls_by_name",
	I0318 13:18:22.910601 1141442 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0318 13:18:22.910612 1141442 command_runner.go:130] > # 	"image_pulls_failures",
	I0318 13:18:22.910622 1141442 command_runner.go:130] > # 	"image_pulls_successes",
	I0318 13:18:22.910638 1141442 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0318 13:18:22.910648 1141442 command_runner.go:130] > # 	"image_layer_reuse",
	I0318 13:18:22.910662 1141442 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0318 13:18:22.910670 1141442 command_runner.go:130] > # 	"containers_oom_total",
	I0318 13:18:22.910677 1141442 command_runner.go:130] > # 	"containers_oom",
	I0318 13:18:22.910681 1141442 command_runner.go:130] > # 	"processes_defunct",
	I0318 13:18:22.910690 1141442 command_runner.go:130] > # 	"operations_total",
	I0318 13:18:22.910700 1141442 command_runner.go:130] > # 	"operations_latency_seconds",
	I0318 13:18:22.910711 1141442 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0318 13:18:22.910722 1141442 command_runner.go:130] > # 	"operations_errors_total",
	I0318 13:18:22.910732 1141442 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0318 13:18:22.910742 1141442 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0318 13:18:22.910752 1141442 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0318 13:18:22.910761 1141442 command_runner.go:130] > # 	"image_pulls_success_total",
	I0318 13:18:22.910769 1141442 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0318 13:18:22.910773 1141442 command_runner.go:130] > # 	"containers_oom_count_total",
	I0318 13:18:22.910783 1141442 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0318 13:18:22.910795 1141442 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0318 13:18:22.910800 1141442 command_runner.go:130] > # ]
	I0318 13:18:22.910813 1141442 command_runner.go:130] > # The port on which the metrics server will listen.
	I0318 13:18:22.910826 1141442 command_runner.go:130] > # metrics_port = 9090
	I0318 13:18:22.910837 1141442 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0318 13:18:22.910847 1141442 command_runner.go:130] > # metrics_socket = ""
	I0318 13:18:22.910857 1141442 command_runner.go:130] > # The certificate for the secure metrics server.
	I0318 13:18:22.910867 1141442 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0318 13:18:22.910879 1141442 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0318 13:18:22.910890 1141442 command_runner.go:130] > # certificate on any modification event.
	I0318 13:18:22.910900 1141442 command_runner.go:130] > # metrics_cert = ""
	I0318 13:18:22.910911 1141442 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0318 13:18:22.910921 1141442 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0318 13:18:22.910931 1141442 command_runner.go:130] > # metrics_key = ""
	I0318 13:18:22.910943 1141442 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0318 13:18:22.910952 1141442 command_runner.go:130] > [crio.tracing]
	I0318 13:18:22.910960 1141442 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0318 13:18:22.910966 1141442 command_runner.go:130] > # enable_tracing = false
	I0318 13:18:22.910978 1141442 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0318 13:18:22.910995 1141442 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0318 13:18:22.911009 1141442 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0318 13:18:22.911019 1141442 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0318 13:18:22.911030 1141442 command_runner.go:130] > # CRI-O NRI configuration.
	I0318 13:18:22.911037 1141442 command_runner.go:130] > [crio.nri]
	I0318 13:18:22.911042 1141442 command_runner.go:130] > # Globally enable or disable NRI.
	I0318 13:18:22.911050 1141442 command_runner.go:130] > # enable_nri = false
	I0318 13:18:22.911060 1141442 command_runner.go:130] > # NRI socket to listen on.
	I0318 13:18:22.911071 1141442 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0318 13:18:22.911078 1141442 command_runner.go:130] > # NRI plugin directory to use.
	I0318 13:18:22.911089 1141442 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0318 13:18:22.911100 1141442 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0318 13:18:22.911110 1141442 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0318 13:18:22.911122 1141442 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0318 13:18:22.911132 1141442 command_runner.go:130] > # nri_disable_connections = false
	I0318 13:18:22.911140 1141442 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0318 13:18:22.911149 1141442 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0318 13:18:22.911160 1141442 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0318 13:18:22.911171 1141442 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0318 13:18:22.911181 1141442 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0318 13:18:22.911191 1141442 command_runner.go:130] > [crio.stats]
	I0318 13:18:22.911203 1141442 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0318 13:18:22.911214 1141442 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0318 13:18:22.911224 1141442 command_runner.go:130] > # stats_collection_period = 0
	I0318 13:18:22.911266 1141442 command_runner.go:130] ! time="2024-03-18 13:18:22.872028976Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0318 13:18:22.911292 1141442 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0318 13:18:22.911429 1141442 cni.go:84] Creating CNI manager for ""
	I0318 13:18:22.911440 1141442 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0318 13:18:22.911449 1141442 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 13:18:22.911477 1141442 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.156 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-229365 NodeName:multinode-229365 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.156"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.156 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 13:18:22.911638 1141442 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.156
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-229365"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.156
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.156"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
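	In the generated config above, podSubnet "10.244.0.0/16" together with the controller-manager argument allocate-node-cidrs: "true" means each node is handed its own slice of the pod CIDR. A rough illustration of that carving follows (not minikube code; it assumes the controller manager's default /24 per-node mask):
	// percidr.go - rough illustration of splitting a cluster pod CIDR such as
	// 10.244.0.0/16 into per-node /24 ranges, as allocate-node-cidrs does.
	package main
	
	import (
		"fmt"
		"net"
	)
	
	func main() {
		_, cluster, err := net.ParseCIDR("10.244.0.0/16")
		if err != nil {
			panic(err)
		}
		base := cluster.IP.To4()
		// Hand out one /24 per node: 10.244.0.0/24, 10.244.1.0/24, 10.244.2.0/24, ...
		for node := 0; node < 3; node++ {
			subnet := net.IPNet{
				IP:   net.IPv4(base[0], base[1], byte(node), 0),
				Mask: net.CIDRMask(24, 32),
			}
			fmt.Printf("node %d -> %s\n", node, subnet.String())
		}
	}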
	
	I0318 13:18:22.911710 1141442 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 13:18:22.923606 1141442 command_runner.go:130] > kubeadm
	I0318 13:18:22.923620 1141442 command_runner.go:130] > kubectl
	I0318 13:18:22.923624 1141442 command_runner.go:130] > kubelet
	I0318 13:18:22.924089 1141442 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 13:18:22.924142 1141442 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 13:18:22.934261 1141442 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0318 13:18:22.952947 1141442 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 13:18:22.975239 1141442 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0318 13:18:22.996284 1141442 ssh_runner.go:195] Run: grep 192.168.39.156	control-plane.minikube.internal$ /etc/hosts
	I0318 13:18:23.000627 1141442 command_runner.go:130] > 192.168.39.156	control-plane.minikube.internal
	I0318 13:18:23.000854 1141442 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:18:23.147702 1141442 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:18:23.164928 1141442 certs.go:68] Setting up /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/multinode-229365 for IP: 192.168.39.156
	I0318 13:18:23.164952 1141442 certs.go:194] generating shared ca certs ...
	I0318 13:18:23.164968 1141442 certs.go:226] acquiring lock for ca certs: {Name:mka24dbce78b8549c3bcfe7842863cce2dcda207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:18:23.165162 1141442 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key
	I0318 13:18:23.165216 1141442 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key
	I0318 13:18:23.165229 1141442 certs.go:256] generating profile certs ...
	I0318 13:18:23.165332 1141442 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/multinode-229365/client.key
	I0318 13:18:23.165432 1141442 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/multinode-229365/apiserver.key.963b5288
	I0318 13:18:23.165486 1141442 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/multinode-229365/proxy-client.key
	I0318 13:18:23.165502 1141442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0318 13:18:23.165519 1141442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0318 13:18:23.165538 1141442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0318 13:18:23.165555 1141442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0318 13:18:23.165573 1141442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/multinode-229365/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0318 13:18:23.165589 1141442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/multinode-229365/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0318 13:18:23.165608 1141442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/multinode-229365/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0318 13:18:23.165627 1141442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/multinode-229365/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0318 13:18:23.165707 1141442 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem (1338 bytes)
	W0318 13:18:23.165749 1141442 certs.go:480] ignoring /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136_empty.pem, impossibly tiny 0 bytes
	I0318 13:18:23.165763 1141442 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 13:18:23.165794 1141442 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem (1078 bytes)
	I0318 13:18:23.165826 1141442 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem (1123 bytes)
	I0318 13:18:23.165867 1141442 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem (1679 bytes)
	I0318 13:18:23.165917 1141442 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:18:23.165957 1141442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> /usr/share/ca-certificates/11141362.pem
	I0318 13:18:23.165977 1141442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:18:23.166004 1141442 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem -> /usr/share/ca-certificates/1114136.pem
	I0318 13:18:23.166669 1141442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 13:18:23.192381 1141442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 13:18:23.220040 1141442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 13:18:23.246356 1141442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 13:18:23.273294 1141442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/multinode-229365/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0318 13:18:23.299722 1141442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/multinode-229365/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 13:18:23.336303 1141442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/multinode-229365/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 13:18:23.363652 1141442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/multinode-229365/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 13:18:23.390593 1141442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /usr/share/ca-certificates/11141362.pem (1708 bytes)
	I0318 13:18:23.418102 1141442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 13:18:23.445242 1141442 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem --> /usr/share/ca-certificates/1114136.pem (1338 bytes)
	I0318 13:18:23.472730 1141442 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 13:18:23.491380 1141442 ssh_runner.go:195] Run: openssl version
	I0318 13:18:23.497983 1141442 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0318 13:18:23.498074 1141442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11141362.pem && ln -fs /usr/share/ca-certificates/11141362.pem /etc/ssl/certs/11141362.pem"
	I0318 13:18:23.509687 1141442 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11141362.pem
	I0318 13:18:23.514864 1141442 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 18 12:26 /usr/share/ca-certificates/11141362.pem
	I0318 13:18:23.514896 1141442 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:26 /usr/share/ca-certificates/11141362.pem
	I0318 13:18:23.514938 1141442 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11141362.pem
	I0318 13:18:23.521217 1141442 command_runner.go:130] > 3ec20f2e
	I0318 13:18:23.521282 1141442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11141362.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 13:18:23.531306 1141442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 13:18:23.543008 1141442 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:18:23.548029 1141442 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 18 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:18:23.548119 1141442 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:18:23.548164 1141442 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:18:23.554112 1141442 command_runner.go:130] > b5213941
	I0318 13:18:23.554283 1141442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 13:18:23.565529 1141442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1114136.pem && ln -fs /usr/share/ca-certificates/1114136.pem /etc/ssl/certs/1114136.pem"
	I0318 13:18:23.577276 1141442 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1114136.pem
	I0318 13:18:23.582139 1141442 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 18 12:26 /usr/share/ca-certificates/1114136.pem
	I0318 13:18:23.582320 1141442 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:26 /usr/share/ca-certificates/1114136.pem
	I0318 13:18:23.582370 1141442 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1114136.pem
	I0318 13:18:23.588600 1141442 command_runner.go:130] > 51391683
	I0318 13:18:23.588659 1141442 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1114136.pem /etc/ssl/certs/51391683.0"
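	The symlinks created above (3ec20f2e.0, b5213941.0, 51391683.0) are the OpenSSL subject-hash names under which tools that scan /etc/ssl/certs locate a CA bundle. A Go client does not need the hash links; it can load the same PEM file straight into a certificate pool. A minimal sketch, illustrative only: the bundle path is taken from the log above, while the target URL and /version endpoint are assumptions.
	// capool.go - sketch of consuming one of the CA bundles installed above.
	package main
	
	import (
		"crypto/tls"
		"crypto/x509"
		"fmt"
		"net/http"
		"os"
	)
	
	func main() {
		pem, err := os.ReadFile("/usr/share/ca-certificates/minikubeCA.pem")
		if err != nil {
			panic(err)
		}
		pool := x509.NewCertPool()
		if !pool.AppendCertsFromPEM(pem) {
			panic("no certificates found in PEM bundle")
		}
		client := &http.Client{
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{RootCAs: pool}, // trust only the loaded CA
			},
		}
		// Hypothetical request against the control-plane endpoint from the config above.
		resp, err := client.Get("https://control-plane.minikube.internal:8443/version")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		fmt.Println("API server responded:", resp.Status)
	}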
	I0318 13:18:23.598604 1141442 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:18:23.603685 1141442 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:18:23.603707 1141442 command_runner.go:130] >   Size: 1164      	Blocks: 8          IO Block: 4096   regular file
	I0318 13:18:23.603713 1141442 command_runner.go:130] > Device: 253,1	Inode: 8385597     Links: 1
	I0318 13:18:23.603720 1141442 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0318 13:18:23.603731 1141442 command_runner.go:130] > Access: 2024-03-18 13:11:52.858908190 +0000
	I0318 13:18:23.603743 1141442 command_runner.go:130] > Modify: 2024-03-18 13:11:52.858908190 +0000
	I0318 13:18:23.603752 1141442 command_runner.go:130] > Change: 2024-03-18 13:11:52.858908190 +0000
	I0318 13:18:23.603763 1141442 command_runner.go:130] >  Birth: 2024-03-18 13:11:52.858908190 +0000
	I0318 13:18:23.603808 1141442 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 13:18:23.610063 1141442 command_runner.go:130] > Certificate will not expire
	I0318 13:18:23.610110 1141442 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 13:18:23.616035 1141442 command_runner.go:130] > Certificate will not expire
	I0318 13:18:23.616223 1141442 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 13:18:23.622125 1141442 command_runner.go:130] > Certificate will not expire
	I0318 13:18:23.622275 1141442 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 13:18:23.628581 1141442 command_runner.go:130] > Certificate will not expire
	I0318 13:18:23.628782 1141442 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 13:18:23.634703 1141442 command_runner.go:130] > Certificate will not expire
	I0318 13:18:23.635047 1141442 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 13:18:23.640862 1141442 command_runner.go:130] > Certificate will not expire
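	The "openssl x509 -noout -checkend 86400" runs above ask whether each certificate is still valid 86400 seconds (24 hours) from now; when it is, OpenSSL prints "Certificate will not expire" and exits 0. A minimal Go sketch of the same check (illustrative only, not minikube code; the file path is taken from the log above):
	// checkend.go - equivalent of "openssl x509 -checkend 86400" using crypto/x509.
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	func main() {
		raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			panic("not a PEM certificate")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Valid at least until now + 24h?
		if cert.NotAfter.After(time.Now().Add(86400 * time.Second)) {
			fmt.Println("Certificate will not expire")
		} else {
			fmt.Println("Certificate will expire within 24h")
		}
	}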
	I0318 13:18:23.641101 1141442 kubeadm.go:391] StartCluster: {Name:multinode-229365 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.
4 ClusterName:multinode-229365 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.156 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.29 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.34 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fa
lse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:18:23.641260 1141442 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 13:18:23.641310 1141442 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:18:23.680206 1141442 command_runner.go:130] > b7d6113ae413de49195180cfe7d52eca2dc3e2cfe0e7c67300040abb9a92471a
	I0318 13:18:23.680248 1141442 command_runner.go:130] > 983530ca0390fdf5130f74c9b63f92d0fa66ed5b2294b626a39470023f12d2ab
	I0318 13:18:23.680257 1141442 command_runner.go:130] > 1ed0bf4243c867d1a2add9c16e6e71ce1ae77906b2c6321cbe11b1c33a9d196b
	I0318 13:18:23.680267 1141442 command_runner.go:130] > e592bd1c5e3a49aa85b970f22ac18459d8194ce47bc678edd833610ba2a2c25e
	I0318 13:18:23.680274 1141442 command_runner.go:130] > be9f7f6d2ab2c134a06b968b0872b8cb611247f92c6cb9290cc429fd8df53875
	I0318 13:18:23.680280 1141442 command_runner.go:130] > 2f3ec2a2b2ec33e092d68609e9f20300ee6f43760a0bc28aa75e7918936f8536
	I0318 13:18:23.680284 1141442 command_runner.go:130] > 19006263151a38c6b497fb97e6b2cb21eefd9842ec9e068c25547ce8d19daf26
	I0318 13:18:23.680291 1141442 command_runner.go:130] > 07436321da95fc350373a1cacd27efc68f68a043a287e0def7655c4e07ace1f1
	I0318 13:18:23.680319 1141442 cri.go:89] found id: "b7d6113ae413de49195180cfe7d52eca2dc3e2cfe0e7c67300040abb9a92471a"
	I0318 13:18:23.680341 1141442 cri.go:89] found id: "983530ca0390fdf5130f74c9b63f92d0fa66ed5b2294b626a39470023f12d2ab"
	I0318 13:18:23.680347 1141442 cri.go:89] found id: "1ed0bf4243c867d1a2add9c16e6e71ce1ae77906b2c6321cbe11b1c33a9d196b"
	I0318 13:18:23.680353 1141442 cri.go:89] found id: "e592bd1c5e3a49aa85b970f22ac18459d8194ce47bc678edd833610ba2a2c25e"
	I0318 13:18:23.680361 1141442 cri.go:89] found id: "be9f7f6d2ab2c134a06b968b0872b8cb611247f92c6cb9290cc429fd8df53875"
	I0318 13:18:23.680365 1141442 cri.go:89] found id: "2f3ec2a2b2ec33e092d68609e9f20300ee6f43760a0bc28aa75e7918936f8536"
	I0318 13:18:23.680368 1141442 cri.go:89] found id: "19006263151a38c6b497fb97e6b2cb21eefd9842ec9e068c25547ce8d19daf26"
	I0318 13:18:23.680371 1141442 cri.go:89] found id: "07436321da95fc350373a1cacd27efc68f68a043a287e0def7655c4e07ace1f1"
	I0318 13:18:23.680373 1141442 cri.go:89] found id: ""
	I0318 13:18:23.680416 1141442 ssh_runner.go:195] Run: sudo runc list -f json
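	The "found id:" entries above come from the crictl invocation at the start of this block: "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system" prints one container ID per line, and minikube splits that output into the IDs it then inspects. A rough, self-contained sketch of that parsing (illustrative only, not the actual cri.go code):
	// crilist.go - list kube-system container IDs the way the log above shows,
	// then split the quiet output into one ID per line.
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			panic(err)
		}
		for _, id := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if id != "" {
				fmt.Println("found id:", id)
			}
		}
	}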
	
	
	==> CRI-O <==
	Mar 18 13:22:14 multinode-229365 crio[2872]: time="2024-03-18 13:22:14.741199048Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3c42992b-fe38-4159-b294-4e88970a699f name=/runtime.v1.RuntimeService/Version
	Mar 18 13:22:14 multinode-229365 crio[2872]: time="2024-03-18 13:22:14.742407986Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3d7eafb8-a3c1-4692-bf32-58df077fd699 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:22:14 multinode-229365 crio[2872]: time="2024-03-18 13:22:14.742892203Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710768134742857728,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3d7eafb8-a3c1-4692-bf32-58df077fd699 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:22:14 multinode-229365 crio[2872]: time="2024-03-18 13:22:14.743488652Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=92df7c0b-853d-4205-971e-15608e97b41d name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:22:14 multinode-229365 crio[2872]: time="2024-03-18 13:22:14.743541943Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=92df7c0b-853d-4205-971e-15608e97b41d name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:22:14 multinode-229365 crio[2872]: time="2024-03-18 13:22:14.747190752Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16c2068522aadd44d069b85237c1ecc8b4aa99c5f143694257bb68071e7967b9,PodSandboxId:52bcdb9819669cf60e69440a744d314a8bc62735238358f2d62b59c9020a78d3,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710767943364002170,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-cc5z6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62b1b7d-04c0-47fb-9ec9-6d6e34d11c4d,},Annotations:map[string]string{io.kubernetes.container.hash: d237e23b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b79bb5b5fff7b50b0330cb014a1df22e84215a782ada7bc3fa6966a0c064f000,PodSandboxId:aa84ed1648c5e9b1bb11ded414f0f616785af4a62fa8b96211146138b7f6f385,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710767909804414389,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xcffd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a92bfa0e-6f47-44a9-a32c-9628f567e5bc,},Annotations:map[string]string{io.kubernetes.container.hash: ae615961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7f87f713de60a9fb094193e213dd7121bd45eacef190e08fa49305b85efca22,PodSandboxId:3a6e62bea40f4ea24d7b82d069673eadf043c3f3b591d38308f2e459f1dbf1f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710767909726012946,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c6dnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c6e16db-16e4-468f-919c-df4c54cf0e94,},Annotations:map[string]string{io.kubernetes.container.hash: 18cee1b5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae20eb2a607f0ee926bf3b398451e16c7fa85914d2376affeb61811e29d664e9,PodSandboxId:a78d906e90628ab24920dce8b71512eaa1d236a238f3524aa3485e08904fa4bb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710767909601462018,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9702ef6-2066-470d-a8c9-d0857dc8b63a,},A
nnotations:map[string]string{io.kubernetes.container.hash: 598930d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:151dba6a079a7fdc667d950d8f470b7279ae09ee891e6b0aa1c49a5bfd50ad51,PodSandboxId:fefe9ea6b3d56f26b9ae0847168a70c2da186316d7f72cf42c0161ba4a77be38,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710767909552436280,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vdnsn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e762a2b-2d25-4f3e-8860-192c60a97ad8,},Annotations:map[string]string{io.k
ubernetes.container.hash: 338e6394,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84cd5e0c1d2fd1fecc58b212fc8ad4cfbcc77bc38679000d8a6752bc84f8db10,PodSandboxId:dbe69e40c879cd2c1e4d24f5cd826a5c498e0c68a30660925eb4e8eef2374cbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710767905920326537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47b9b389eeab8ea23a39be0a8c622392,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dceda195e3c4ef42a31a2a4418cb8dd7c5f9c0198cac9073992229ffde86404,PodSandboxId:66a24839549dc3421257e5b9f62ae1bc589abffe5a395f824ef9eeee21331c32,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710767905717294793,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 131ed49275b5405a33eedc6996906d41,},Annotations:map[string]string{io.ku
bernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:909de68b9ffeb63cdc62969865653b738c51939e0f04763d3dc44ecac47ca541,PodSandboxId:860e20b56d1446198f88c8ea833b4f21133e32a9b08861754033cb7e0ede6ba2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710767905744930362,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 326c3bfa26902a35a907a995f7624593,},Annotations:map[string]string{io.kubernetes.container.hash: 36c66f2d,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dc67fccd3fe04505303c6c10812d6cf0de31a255788d0152635133cf2e7b60d,PodSandboxId:1610ef2a2e41b3b61bb2b6ce7dd86964c4ce2ff3bc438c4c0a9ae15bab1c833c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710767905703402118,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e2564c3f5ce1cdf5c73a3d12c95511,},Annotations:map[string]string{io.kubernetes.container.hash: 9a114cb2,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:422b25901a04956d84fb11b8f915766e21ccd6584b13e2c464d9de34b34be634,PodSandboxId:bcf600d198b03627f34dabd31df428dedc966e5cb3e8e22976ed87a67eabcc46,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710767590513172477,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-cc5z6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62b1b7d-04c0-47fb-9ec9-6d6e34d11c4d,},Annotations:map[string]string{io.kubernetes.container.hash: d237e23b,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7d6113ae413de49195180cfe7d52eca2dc3e2cfe0e7c67300040abb9a92471a,PodSandboxId:54beb227a8ffc4885427135a60029659de226e2514fb90466196e2c6e1a6c85e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710767543166488184,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c6dnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c6e16db-16e4-468f-919c-df4c54cf0e94,},Annotations:map[string]string{io.kubernetes.container.hash: 18cee1b5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:983530ca0390fdf5130f74c9b63f92d0fa66ed5b2294b626a39470023f12d2ab,PodSandboxId:2b91079d0b985aa4dbab7e328b0acbe4b8f84743d72e541c17662e32579bba63,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710767541637434678,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: e9702ef6-2066-470d-a8c9-d0857dc8b63a,},Annotations:map[string]string{io.kubernetes.container.hash: 598930d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ed0bf4243c867d1a2add9c16e6e71ce1ae77906b2c6321cbe11b1c33a9d196b,PodSandboxId:f26b8c9a42276850fa34573edf63a613e7105a4b040a6bb10bf8e442a9f0069b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710767539787396490,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xcffd,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: a92bfa0e-6f47-44a9-a32c-9628f567e5bc,},Annotations:map[string]string{io.kubernetes.container.hash: ae615961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e592bd1c5e3a49aa85b970f22ac18459d8194ce47bc678edd833610ba2a2c25e,PodSandboxId:f1816dbdb10636f1bbbb75614ab65cc8ae0624719f14209f8009e2a06bf49d15,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710767535975994605,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vdnsn,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 6e762a2b-2d25-4f3e-8860-192c60a97ad8,},Annotations:map[string]string{io.kubernetes.container.hash: 338e6394,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be9f7f6d2ab2c134a06b968b0872b8cb611247f92c6cb9290cc429fd8df53875,PodSandboxId:74a85ae0bbe5a56166484ef57a28b7e92efbb5a04af720641f6533db7329743d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710767516231094554,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-229365,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 131ed49275b5405a33eedc6996906d41,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f3ec2a2b2ec33e092d68609e9f20300ee6f43760a0bc28aa75e7918936f8536,PodSandboxId:22b007523ac2586f8b4f20da04f957131182efb0693bdb7ce17fd4e112b6c960,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710767516219506119,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 326c3bfa26902a35
a907a995f7624593,},Annotations:map[string]string{io.kubernetes.container.hash: 36c66f2d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19006263151a38c6b497fb97e6b2cb21eefd9842ec9e068c25547ce8d19daf26,PodSandboxId:c1cafd55af3f245ad89e571170c0779b6e72a1d96be5720a6240d2dd3f1924c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710767516192368029,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47b9b389eeab8ea23a39be0a8c62239
2,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07436321da95fc350373a1cacd27efc68f68a043a287e0def7655c4e07ace1f1,PodSandboxId:c9f7b9a8977124742aaee2192217bb7b805f41fb9e9c363984b88bb2926c4c07,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710767516081144223,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e2564c3f5ce1cdf5c73a3d12c95511,},Annotations
:map[string]string{io.kubernetes.container.hash: 9a114cb2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=92df7c0b-853d-4205-971e-15608e97b41d name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:22:14 multinode-229365 crio[2872]: time="2024-03-18 13:22:14.804670873Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4d1a4c09-6144-45c9-996f-0b212e35a1e3 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:22:14 multinode-229365 crio[2872]: time="2024-03-18 13:22:14.804745542Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4d1a4c09-6144-45c9-996f-0b212e35a1e3 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:22:14 multinode-229365 crio[2872]: time="2024-03-18 13:22:14.806620845Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=a7fd0f92-5fea-4dc3-b815-aba62c238adf name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 18 13:22:14 multinode-229365 crio[2872]: time="2024-03-18 13:22:14.806724834Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=87e7d881-40a3-4fd4-b698-619cc290be50 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:22:14 multinode-229365 crio[2872]: time="2024-03-18 13:22:14.807685634Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710768134807659236,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=87e7d881-40a3-4fd4-b698-619cc290be50 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:22:14 multinode-229365 crio[2872]: time="2024-03-18 13:22:14.807913783Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:52bcdb9819669cf60e69440a744d314a8bc62735238358f2d62b59c9020a78d3,Metadata:&PodSandboxMetadata{Name:busybox-5b5d89c9d6-cc5z6,Uid:e62b1b7d-04c0-47fb-9ec9-6d6e34d11c4d,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710767943176081525,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-5b5d89c9d6-cc5z6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62b1b7d-04c0-47fb-9ec9-6d6e34d11c4d,pod-template-hash: 5b5d89c9d6,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-18T13:18:28.990269996Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3a6e62bea40f4ea24d7b82d069673eadf043c3f3b591d38308f2e459f1dbf1f2,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-c6dnv,Uid:3c6e16db-16e4-468f-919c-df4c54cf0e94,Namespace:kube-system,Attempt:
1,},State:SANDBOX_READY,CreatedAt:1710767909408874419,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-c6dnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c6e16db-16e4-468f-919c-df4c54cf0e94,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-18T13:18:28.990271097Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:aa84ed1648c5e9b1bb11ded414f0f616785af4a62fa8b96211146138b7f6f385,Metadata:&PodSandboxMetadata{Name:kindnet-xcffd,Uid:a92bfa0e-6f47-44a9-a32c-9628f567e5bc,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710767909351309683,Labels:map[string]string{app: kindnet,controller-revision-hash: bb65b84c4,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-xcffd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a92bfa0e-6f47-44a9-a32c-9628f567e5bc,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:m
ap[string]string{kubernetes.io/config.seen: 2024-03-18T13:18:28.990266017Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fefe9ea6b3d56f26b9ae0847168a70c2da186316d7f72cf42c0161ba4a77be38,Metadata:&PodSandboxMetadata{Name:kube-proxy-vdnsn,Uid:6e762a2b-2d25-4f3e-8860-192c60a97ad8,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710767909336741577,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-vdnsn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e762a2b-2d25-4f3e-8860-192c60a97ad8,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-18T13:18:28.990268583Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a78d906e90628ab24920dce8b71512eaa1d236a238f3524aa3485e08904fa4bb,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:e9702ef6-2066-470d-a8c9-d0857dc8b63a,Namespace:kube-system,Attempt:1,},Sta
te:SANDBOX_READY,CreatedAt:1710767909330495640,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9702ef6-2066-470d-a8c9-d0857dc8b63a,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/t
mp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-03-18T13:18:28.990261869Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1610ef2a2e41b3b61bb2b6ce7dd86964c4ce2ff3bc438c4c0a9ae15bab1c833c,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-229365,Uid:66e2564c3f5ce1cdf5c73a3d12c95511,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710767905488906846,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e2564c3f5ce1cdf5c73a3d12c95511,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.156:8443,kubernetes.io/config.hash: 66e2564c3f5ce1cdf5c73a3d12c95511,kubernetes.io/config.seen: 2024-03-18T13:18:24.990250615Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:dbe69e40c879cd2c1e4d24f5cd826a5c
498e0c68a30660925eb4e8eef2374cbb,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-229365,Uid:47b9b389eeab8ea23a39be0a8c622392,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710767905488214844,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47b9b389eeab8ea23a39be0a8c622392,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 47b9b389eeab8ea23a39be0a8c622392,kubernetes.io/config.seen: 2024-03-18T13:18:24.990259387Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:860e20b56d1446198f88c8ea833b4f21133e32a9b08861754033cb7e0ede6ba2,Metadata:&PodSandboxMetadata{Name:etcd-multinode-229365,Uid:326c3bfa26902a35a907a995f7624593,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710767905476884457,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kuberne
tes.pod.name: etcd-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 326c3bfa26902a35a907a995f7624593,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.156:2379,kubernetes.io/config.hash: 326c3bfa26902a35a907a995f7624593,kubernetes.io/config.seen: 2024-03-18T13:18:24.990246510Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:66a24839549dc3421257e5b9f62ae1bc589abffe5a395f824ef9eeee21331c32,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-229365,Uid:131ed49275b5405a33eedc6996906d41,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710767905470367182,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 131ed49275b5405a33eedc6996906d41,tier: control-plane,},Annotations:map[string]string{kuber
netes.io/config.hash: 131ed49275b5405a33eedc6996906d41,kubernetes.io/config.seen: 2024-03-18T13:18:24.990251768Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bcf600d198b03627f34dabd31df428dedc966e5cb3e8e22976ed87a67eabcc46,Metadata:&PodSandboxMetadata{Name:busybox-5b5d89c9d6-cc5z6,Uid:e62b1b7d-04c0-47fb-9ec9-6d6e34d11c4d,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1710767587824957478,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-5b5d89c9d6-cc5z6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62b1b7d-04c0-47fb-9ec9-6d6e34d11c4d,pod-template-hash: 5b5d89c9d6,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-18T13:13:07.516732003Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:54beb227a8ffc4885427135a60029659de226e2514fb90466196e2c6e1a6c85e,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-c6dnv,Uid:3c6e16db-16e4-468f-919c-df4c54cf0e94,Namespace:kube-system,A
ttempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1710767543021053610,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-c6dnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c6e16db-16e4-468f-919c-df4c54cf0e94,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-18T13:12:21.209250737Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2b91079d0b985aa4dbab7e328b0acbe4b8f84743d72e541c17662e32579bba63,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:e9702ef6-2066-470d-a8c9-d0857dc8b63a,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1710767541527160740,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9702ef6-2066-470d-a8c9-d0857dc8b63a,},Annotations
:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-03-18T13:12:21.215334077Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f26b8c9a42276850fa34573edf63a613e7105a4b040a6bb10bf8e442a9f0069b,Metadata:&PodSandboxMetadata{Name:kindnet-xcffd,Uid:a92bfa0e-6f47-44a9-a32c-9628f567e5bc,Namespace:kub
e-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1710767535635007370,Labels:map[string]string{app: kindnet,controller-revision-hash: bb65b84c4,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-xcffd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a92bfa0e-6f47-44a9-a32c-9628f567e5bc,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-18T13:12:15.323256249Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f1816dbdb10636f1bbbb75614ab65cc8ae0624719f14209f8009e2a06bf49d15,Metadata:&PodSandboxMetadata{Name:kube-proxy-vdnsn,Uid:6e762a2b-2d25-4f3e-8860-192c60a97ad8,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1710767535609682793,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-vdnsn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e762a2b-2d25-4f3e-8860-192c60a97ad8,k8s-app:
kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-18T13:12:15.297240106Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:74a85ae0bbe5a56166484ef57a28b7e92efbb5a04af720641f6533db7329743d,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-229365,Uid:131ed49275b5405a33eedc6996906d41,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1710767515951663738,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 131ed49275b5405a33eedc6996906d41,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 131ed49275b5405a33eedc6996906d41,kubernetes.io/config.seen: 2024-03-18T13:11:55.465447461Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:22b007523ac2586f8b4f20da04f957131182efb0693bdb7ce17fd4e112b6c960,Met
adata:&PodSandboxMetadata{Name:etcd-multinode-229365,Uid:326c3bfa26902a35a907a995f7624593,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1710767515936666355,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 326c3bfa26902a35a907a995f7624593,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.156:2379,kubernetes.io/config.hash: 326c3bfa26902a35a907a995f7624593,kubernetes.io/config.seen: 2024-03-18T13:11:55.465449405Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c1cafd55af3f245ad89e571170c0779b6e72a1d96be5720a6240d2dd3f1924c5,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-229365,Uid:47b9b389eeab8ea23a39be0a8c622392,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1710767515933938945,Labels:map[string]string{component: kube-scheduler,io.kub
ernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47b9b389eeab8ea23a39be0a8c622392,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 47b9b389eeab8ea23a39be0a8c622392,kubernetes.io/config.seen: 2024-03-18T13:11:55.465448508Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c9f7b9a8977124742aaee2192217bb7b805f41fb9e9c363984b88bb2926c4c07,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-229365,Uid:66e2564c3f5ce1cdf5c73a3d12c95511,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1710767515931773001,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e2564c3f5ce1cdf5c73a3d12c95511,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endp
oint: 192.168.39.156:8443,kubernetes.io/config.hash: 66e2564c3f5ce1cdf5c73a3d12c95511,kubernetes.io/config.seen: 2024-03-18T13:11:55.465443103Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=a7fd0f92-5fea-4dc3-b815-aba62c238adf name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 18 13:22:14 multinode-229365 crio[2872]: time="2024-03-18 13:22:14.809084862Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=14e0db8d-acf7-4c5a-8f35-a939642e34ac name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:22:14 multinode-229365 crio[2872]: time="2024-03-18 13:22:14.809137209Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=14e0db8d-acf7-4c5a-8f35-a939642e34ac name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:22:14 multinode-229365 crio[2872]: time="2024-03-18 13:22:14.809506152Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16c2068522aadd44d069b85237c1ecc8b4aa99c5f143694257bb68071e7967b9,PodSandboxId:52bcdb9819669cf60e69440a744d314a8bc62735238358f2d62b59c9020a78d3,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710767943364002170,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-cc5z6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62b1b7d-04c0-47fb-9ec9-6d6e34d11c4d,},Annotations:map[string]string{io.kubernetes.container.hash: d237e23b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b79bb5b5fff7b50b0330cb014a1df22e84215a782ada7bc3fa6966a0c064f000,PodSandboxId:aa84ed1648c5e9b1bb11ded414f0f616785af4a62fa8b96211146138b7f6f385,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710767909804414389,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xcffd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a92bfa0e-6f47-44a9-a32c-9628f567e5bc,},Annotations:map[string]string{io.kubernetes.container.hash: ae615961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7f87f713de60a9fb094193e213dd7121bd45eacef190e08fa49305b85efca22,PodSandboxId:3a6e62bea40f4ea24d7b82d069673eadf043c3f3b591d38308f2e459f1dbf1f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710767909726012946,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c6dnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c6e16db-16e4-468f-919c-df4c54cf0e94,},Annotations:map[string]string{io.kubernetes.container.hash: 18cee1b5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae20eb2a607f0ee926bf3b398451e16c7fa85914d2376affeb61811e29d664e9,PodSandboxId:a78d906e90628ab24920dce8b71512eaa1d236a238f3524aa3485e08904fa4bb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710767909601462018,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9702ef6-2066-470d-a8c9-d0857dc8b63a,},A
nnotations:map[string]string{io.kubernetes.container.hash: 598930d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:151dba6a079a7fdc667d950d8f470b7279ae09ee891e6b0aa1c49a5bfd50ad51,PodSandboxId:fefe9ea6b3d56f26b9ae0847168a70c2da186316d7f72cf42c0161ba4a77be38,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710767909552436280,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vdnsn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e762a2b-2d25-4f3e-8860-192c60a97ad8,},Annotations:map[string]string{io.k
ubernetes.container.hash: 338e6394,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84cd5e0c1d2fd1fecc58b212fc8ad4cfbcc77bc38679000d8a6752bc84f8db10,PodSandboxId:dbe69e40c879cd2c1e4d24f5cd826a5c498e0c68a30660925eb4e8eef2374cbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710767905920326537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47b9b389eeab8ea23a39be0a8c622392,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dceda195e3c4ef42a31a2a4418cb8dd7c5f9c0198cac9073992229ffde86404,PodSandboxId:66a24839549dc3421257e5b9f62ae1bc589abffe5a395f824ef9eeee21331c32,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710767905717294793,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 131ed49275b5405a33eedc6996906d41,},Annotations:map[string]string{io.ku
bernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:909de68b9ffeb63cdc62969865653b738c51939e0f04763d3dc44ecac47ca541,PodSandboxId:860e20b56d1446198f88c8ea833b4f21133e32a9b08861754033cb7e0ede6ba2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710767905744930362,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 326c3bfa26902a35a907a995f7624593,},Annotations:map[string]string{io.kubernetes.container.hash: 36c66f2d,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dc67fccd3fe04505303c6c10812d6cf0de31a255788d0152635133cf2e7b60d,PodSandboxId:1610ef2a2e41b3b61bb2b6ce7dd86964c4ce2ff3bc438c4c0a9ae15bab1c833c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710767905703402118,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e2564c3f5ce1cdf5c73a3d12c95511,},Annotations:map[string]string{io.kubernetes.container.hash: 9a114cb2,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:422b25901a04956d84fb11b8f915766e21ccd6584b13e2c464d9de34b34be634,PodSandboxId:bcf600d198b03627f34dabd31df428dedc966e5cb3e8e22976ed87a67eabcc46,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710767590513172477,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-cc5z6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62b1b7d-04c0-47fb-9ec9-6d6e34d11c4d,},Annotations:map[string]string{io.kubernetes.container.hash: d237e23b,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7d6113ae413de49195180cfe7d52eca2dc3e2cfe0e7c67300040abb9a92471a,PodSandboxId:54beb227a8ffc4885427135a60029659de226e2514fb90466196e2c6e1a6c85e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710767543166488184,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c6dnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c6e16db-16e4-468f-919c-df4c54cf0e94,},Annotations:map[string]string{io.kubernetes.container.hash: 18cee1b5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:983530ca0390fdf5130f74c9b63f92d0fa66ed5b2294b626a39470023f12d2ab,PodSandboxId:2b91079d0b985aa4dbab7e328b0acbe4b8f84743d72e541c17662e32579bba63,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710767541637434678,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: e9702ef6-2066-470d-a8c9-d0857dc8b63a,},Annotations:map[string]string{io.kubernetes.container.hash: 598930d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ed0bf4243c867d1a2add9c16e6e71ce1ae77906b2c6321cbe11b1c33a9d196b,PodSandboxId:f26b8c9a42276850fa34573edf63a613e7105a4b040a6bb10bf8e442a9f0069b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710767539787396490,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xcffd,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: a92bfa0e-6f47-44a9-a32c-9628f567e5bc,},Annotations:map[string]string{io.kubernetes.container.hash: ae615961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e592bd1c5e3a49aa85b970f22ac18459d8194ce47bc678edd833610ba2a2c25e,PodSandboxId:f1816dbdb10636f1bbbb75614ab65cc8ae0624719f14209f8009e2a06bf49d15,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710767535975994605,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vdnsn,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 6e762a2b-2d25-4f3e-8860-192c60a97ad8,},Annotations:map[string]string{io.kubernetes.container.hash: 338e6394,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be9f7f6d2ab2c134a06b968b0872b8cb611247f92c6cb9290cc429fd8df53875,PodSandboxId:74a85ae0bbe5a56166484ef57a28b7e92efbb5a04af720641f6533db7329743d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710767516231094554,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-229365,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 131ed49275b5405a33eedc6996906d41,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f3ec2a2b2ec33e092d68609e9f20300ee6f43760a0bc28aa75e7918936f8536,PodSandboxId:22b007523ac2586f8b4f20da04f957131182efb0693bdb7ce17fd4e112b6c960,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710767516219506119,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 326c3bfa26902a35
a907a995f7624593,},Annotations:map[string]string{io.kubernetes.container.hash: 36c66f2d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19006263151a38c6b497fb97e6b2cb21eefd9842ec9e068c25547ce8d19daf26,PodSandboxId:c1cafd55af3f245ad89e571170c0779b6e72a1d96be5720a6240d2dd3f1924c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710767516192368029,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47b9b389eeab8ea23a39be0a8c62239
2,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07436321da95fc350373a1cacd27efc68f68a043a287e0def7655c4e07ace1f1,PodSandboxId:c9f7b9a8977124742aaee2192217bb7b805f41fb9e9c363984b88bb2926c4c07,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710767516081144223,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e2564c3f5ce1cdf5c73a3d12c95511,},Annotations
:map[string]string{io.kubernetes.container.hash: 9a114cb2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=14e0db8d-acf7-4c5a-8f35-a939642e34ac name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:22:14 multinode-229365 crio[2872]: time="2024-03-18 13:22:14.809956180Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5f4f76ab-df0f-4861-bf3e-71c9d717e221 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:22:14 multinode-229365 crio[2872]: time="2024-03-18 13:22:14.810000236Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5f4f76ab-df0f-4861-bf3e-71c9d717e221 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:22:14 multinode-229365 crio[2872]: time="2024-03-18 13:22:14.810311915Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16c2068522aadd44d069b85237c1ecc8b4aa99c5f143694257bb68071e7967b9,PodSandboxId:52bcdb9819669cf60e69440a744d314a8bc62735238358f2d62b59c9020a78d3,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710767943364002170,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-cc5z6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62b1b7d-04c0-47fb-9ec9-6d6e34d11c4d,},Annotations:map[string]string{io.kubernetes.container.hash: d237e23b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b79bb5b5fff7b50b0330cb014a1df22e84215a782ada7bc3fa6966a0c064f000,PodSandboxId:aa84ed1648c5e9b1bb11ded414f0f616785af4a62fa8b96211146138b7f6f385,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710767909804414389,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xcffd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a92bfa0e-6f47-44a9-a32c-9628f567e5bc,},Annotations:map[string]string{io.kubernetes.container.hash: ae615961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7f87f713de60a9fb094193e213dd7121bd45eacef190e08fa49305b85efca22,PodSandboxId:3a6e62bea40f4ea24d7b82d069673eadf043c3f3b591d38308f2e459f1dbf1f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710767909726012946,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c6dnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c6e16db-16e4-468f-919c-df4c54cf0e94,},Annotations:map[string]string{io.kubernetes.container.hash: 18cee1b5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae20eb2a607f0ee926bf3b398451e16c7fa85914d2376affeb61811e29d664e9,PodSandboxId:a78d906e90628ab24920dce8b71512eaa1d236a238f3524aa3485e08904fa4bb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710767909601462018,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9702ef6-2066-470d-a8c9-d0857dc8b63a,},A
nnotations:map[string]string{io.kubernetes.container.hash: 598930d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:151dba6a079a7fdc667d950d8f470b7279ae09ee891e6b0aa1c49a5bfd50ad51,PodSandboxId:fefe9ea6b3d56f26b9ae0847168a70c2da186316d7f72cf42c0161ba4a77be38,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710767909552436280,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vdnsn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e762a2b-2d25-4f3e-8860-192c60a97ad8,},Annotations:map[string]string{io.k
ubernetes.container.hash: 338e6394,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84cd5e0c1d2fd1fecc58b212fc8ad4cfbcc77bc38679000d8a6752bc84f8db10,PodSandboxId:dbe69e40c879cd2c1e4d24f5cd826a5c498e0c68a30660925eb4e8eef2374cbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710767905920326537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47b9b389eeab8ea23a39be0a8c622392,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dceda195e3c4ef42a31a2a4418cb8dd7c5f9c0198cac9073992229ffde86404,PodSandboxId:66a24839549dc3421257e5b9f62ae1bc589abffe5a395f824ef9eeee21331c32,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710767905717294793,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 131ed49275b5405a33eedc6996906d41,},Annotations:map[string]string{io.ku
bernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:909de68b9ffeb63cdc62969865653b738c51939e0f04763d3dc44ecac47ca541,PodSandboxId:860e20b56d1446198f88c8ea833b4f21133e32a9b08861754033cb7e0ede6ba2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710767905744930362,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 326c3bfa26902a35a907a995f7624593,},Annotations:map[string]string{io.kubernetes.container.hash: 36c66f2d,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dc67fccd3fe04505303c6c10812d6cf0de31a255788d0152635133cf2e7b60d,PodSandboxId:1610ef2a2e41b3b61bb2b6ce7dd86964c4ce2ff3bc438c4c0a9ae15bab1c833c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710767905703402118,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e2564c3f5ce1cdf5c73a3d12c95511,},Annotations:map[string]string{io.kubernetes.container.hash: 9a114cb2,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:422b25901a04956d84fb11b8f915766e21ccd6584b13e2c464d9de34b34be634,PodSandboxId:bcf600d198b03627f34dabd31df428dedc966e5cb3e8e22976ed87a67eabcc46,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710767590513172477,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-cc5z6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62b1b7d-04c0-47fb-9ec9-6d6e34d11c4d,},Annotations:map[string]string{io.kubernetes.container.hash: d237e23b,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7d6113ae413de49195180cfe7d52eca2dc3e2cfe0e7c67300040abb9a92471a,PodSandboxId:54beb227a8ffc4885427135a60029659de226e2514fb90466196e2c6e1a6c85e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710767543166488184,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c6dnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c6e16db-16e4-468f-919c-df4c54cf0e94,},Annotations:map[string]string{io.kubernetes.container.hash: 18cee1b5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:983530ca0390fdf5130f74c9b63f92d0fa66ed5b2294b626a39470023f12d2ab,PodSandboxId:2b91079d0b985aa4dbab7e328b0acbe4b8f84743d72e541c17662e32579bba63,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710767541637434678,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: e9702ef6-2066-470d-a8c9-d0857dc8b63a,},Annotations:map[string]string{io.kubernetes.container.hash: 598930d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ed0bf4243c867d1a2add9c16e6e71ce1ae77906b2c6321cbe11b1c33a9d196b,PodSandboxId:f26b8c9a42276850fa34573edf63a613e7105a4b040a6bb10bf8e442a9f0069b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710767539787396490,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xcffd,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: a92bfa0e-6f47-44a9-a32c-9628f567e5bc,},Annotations:map[string]string{io.kubernetes.container.hash: ae615961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e592bd1c5e3a49aa85b970f22ac18459d8194ce47bc678edd833610ba2a2c25e,PodSandboxId:f1816dbdb10636f1bbbb75614ab65cc8ae0624719f14209f8009e2a06bf49d15,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710767535975994605,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vdnsn,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 6e762a2b-2d25-4f3e-8860-192c60a97ad8,},Annotations:map[string]string{io.kubernetes.container.hash: 338e6394,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be9f7f6d2ab2c134a06b968b0872b8cb611247f92c6cb9290cc429fd8df53875,PodSandboxId:74a85ae0bbe5a56166484ef57a28b7e92efbb5a04af720641f6533db7329743d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710767516231094554,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-229365,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 131ed49275b5405a33eedc6996906d41,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f3ec2a2b2ec33e092d68609e9f20300ee6f43760a0bc28aa75e7918936f8536,PodSandboxId:22b007523ac2586f8b4f20da04f957131182efb0693bdb7ce17fd4e112b6c960,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710767516219506119,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 326c3bfa26902a35
a907a995f7624593,},Annotations:map[string]string{io.kubernetes.container.hash: 36c66f2d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19006263151a38c6b497fb97e6b2cb21eefd9842ec9e068c25547ce8d19daf26,PodSandboxId:c1cafd55af3f245ad89e571170c0779b6e72a1d96be5720a6240d2dd3f1924c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710767516192368029,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47b9b389eeab8ea23a39be0a8c62239
2,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07436321da95fc350373a1cacd27efc68f68a043a287e0def7655c4e07ace1f1,PodSandboxId:c9f7b9a8977124742aaee2192217bb7b805f41fb9e9c363984b88bb2926c4c07,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710767516081144223,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e2564c3f5ce1cdf5c73a3d12c95511,},Annotations
:map[string]string{io.kubernetes.container.hash: 9a114cb2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5f4f76ab-df0f-4861-bf3e-71c9d717e221 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:22:14 multinode-229365 crio[2872]: time="2024-03-18 13:22:14.870355079Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b4c20dc3-7cad-433a-ab1b-38d33275961c name=/runtime.v1.RuntimeService/Version
	Mar 18 13:22:14 multinode-229365 crio[2872]: time="2024-03-18 13:22:14.870463799Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b4c20dc3-7cad-433a-ab1b-38d33275961c name=/runtime.v1.RuntimeService/Version
	Mar 18 13:22:14 multinode-229365 crio[2872]: time="2024-03-18 13:22:14.871885247Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7f9eb3c0-be29-4825-9d69-d1cdcd961cc7 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:22:14 multinode-229365 crio[2872]: time="2024-03-18 13:22:14.872727998Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710768134872705448,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7f9eb3c0-be29-4825-9d69-d1cdcd961cc7 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:22:14 multinode-229365 crio[2872]: time="2024-03-18 13:22:14.873521440Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=af0d44d0-aee5-48bb-b326-0cd670d16802 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:22:14 multinode-229365 crio[2872]: time="2024-03-18 13:22:14.873652084Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=af0d44d0-aee5-48bb-b326-0cd670d16802 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:22:14 multinode-229365 crio[2872]: time="2024-03-18 13:22:14.874098163Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:16c2068522aadd44d069b85237c1ecc8b4aa99c5f143694257bb68071e7967b9,PodSandboxId:52bcdb9819669cf60e69440a744d314a8bc62735238358f2d62b59c9020a78d3,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710767943364002170,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-cc5z6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62b1b7d-04c0-47fb-9ec9-6d6e34d11c4d,},Annotations:map[string]string{io.kubernetes.container.hash: d237e23b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b79bb5b5fff7b50b0330cb014a1df22e84215a782ada7bc3fa6966a0c064f000,PodSandboxId:aa84ed1648c5e9b1bb11ded414f0f616785af4a62fa8b96211146138b7f6f385,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710767909804414389,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xcffd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a92bfa0e-6f47-44a9-a32c-9628f567e5bc,},Annotations:map[string]string{io.kubernetes.container.hash: ae615961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a7f87f713de60a9fb094193e213dd7121bd45eacef190e08fa49305b85efca22,PodSandboxId:3a6e62bea40f4ea24d7b82d069673eadf043c3f3b591d38308f2e459f1dbf1f2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710767909726012946,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c6dnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c6e16db-16e4-468f-919c-df4c54cf0e94,},Annotations:map[string]string{io.kubernetes.container.hash: 18cee1b5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae20eb2a607f0ee926bf3b398451e16c7fa85914d2376affeb61811e29d664e9,PodSandboxId:a78d906e90628ab24920dce8b71512eaa1d236a238f3524aa3485e08904fa4bb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710767909601462018,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9702ef6-2066-470d-a8c9-d0857dc8b63a,},A
nnotations:map[string]string{io.kubernetes.container.hash: 598930d6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:151dba6a079a7fdc667d950d8f470b7279ae09ee891e6b0aa1c49a5bfd50ad51,PodSandboxId:fefe9ea6b3d56f26b9ae0847168a70c2da186316d7f72cf42c0161ba4a77be38,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710767909552436280,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vdnsn,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e762a2b-2d25-4f3e-8860-192c60a97ad8,},Annotations:map[string]string{io.k
ubernetes.container.hash: 338e6394,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84cd5e0c1d2fd1fecc58b212fc8ad4cfbcc77bc38679000d8a6752bc84f8db10,PodSandboxId:dbe69e40c879cd2c1e4d24f5cd826a5c498e0c68a30660925eb4e8eef2374cbb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710767905920326537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47b9b389eeab8ea23a39be0a8c622392,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dceda195e3c4ef42a31a2a4418cb8dd7c5f9c0198cac9073992229ffde86404,PodSandboxId:66a24839549dc3421257e5b9f62ae1bc589abffe5a395f824ef9eeee21331c32,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710767905717294793,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 131ed49275b5405a33eedc6996906d41,},Annotations:map[string]string{io.ku
bernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:909de68b9ffeb63cdc62969865653b738c51939e0f04763d3dc44ecac47ca541,PodSandboxId:860e20b56d1446198f88c8ea833b4f21133e32a9b08861754033cb7e0ede6ba2,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710767905744930362,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 326c3bfa26902a35a907a995f7624593,},Annotations:map[string]string{io.kubernetes.container.hash: 36c66f2d,io.kuberne
tes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dc67fccd3fe04505303c6c10812d6cf0de31a255788d0152635133cf2e7b60d,PodSandboxId:1610ef2a2e41b3b61bb2b6ce7dd86964c4ce2ff3bc438c4c0a9ae15bab1c833c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710767905703402118,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e2564c3f5ce1cdf5c73a3d12c95511,},Annotations:map[string]string{io.kubernetes.container.hash: 9a114cb2,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:422b25901a04956d84fb11b8f915766e21ccd6584b13e2c464d9de34b34be634,PodSandboxId:bcf600d198b03627f34dabd31df428dedc966e5cb3e8e22976ed87a67eabcc46,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710767590513172477,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-cc5z6,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e62b1b7d-04c0-47fb-9ec9-6d6e34d11c4d,},Annotations:map[string]string{io.kubernetes.container.hash: d237e23b,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7d6113ae413de49195180cfe7d52eca2dc3e2cfe0e7c67300040abb9a92471a,PodSandboxId:54beb227a8ffc4885427135a60029659de226e2514fb90466196e2c6e1a6c85e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710767543166488184,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-c6dnv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3c6e16db-16e4-468f-919c-df4c54cf0e94,},Annotations:map[string]string{io.kubernetes.container.hash: 18cee1b5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:983530ca0390fdf5130f74c9b63f92d0fa66ed5b2294b626a39470023f12d2ab,PodSandboxId:2b91079d0b985aa4dbab7e328b0acbe4b8f84743d72e541c17662e32579bba63,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710767541637434678,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: e9702ef6-2066-470d-a8c9-d0857dc8b63a,},Annotations:map[string]string{io.kubernetes.container.hash: 598930d6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ed0bf4243c867d1a2add9c16e6e71ce1ae77906b2c6321cbe11b1c33a9d196b,PodSandboxId:f26b8c9a42276850fa34573edf63a613e7105a4b040a6bb10bf8e442a9f0069b,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710767539787396490,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-xcffd,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: a92bfa0e-6f47-44a9-a32c-9628f567e5bc,},Annotations:map[string]string{io.kubernetes.container.hash: ae615961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e592bd1c5e3a49aa85b970f22ac18459d8194ce47bc678edd833610ba2a2c25e,PodSandboxId:f1816dbdb10636f1bbbb75614ab65cc8ae0624719f14209f8009e2a06bf49d15,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710767535975994605,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-vdnsn,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 6e762a2b-2d25-4f3e-8860-192c60a97ad8,},Annotations:map[string]string{io.kubernetes.container.hash: 338e6394,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:be9f7f6d2ab2c134a06b968b0872b8cb611247f92c6cb9290cc429fd8df53875,PodSandboxId:74a85ae0bbe5a56166484ef57a28b7e92efbb5a04af720641f6533db7329743d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710767516231094554,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-229365,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 131ed49275b5405a33eedc6996906d41,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f3ec2a2b2ec33e092d68609e9f20300ee6f43760a0bc28aa75e7918936f8536,PodSandboxId:22b007523ac2586f8b4f20da04f957131182efb0693bdb7ce17fd4e112b6c960,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710767516219506119,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 326c3bfa26902a35
a907a995f7624593,},Annotations:map[string]string{io.kubernetes.container.hash: 36c66f2d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19006263151a38c6b497fb97e6b2cb21eefd9842ec9e068c25547ce8d19daf26,PodSandboxId:c1cafd55af3f245ad89e571170c0779b6e72a1d96be5720a6240d2dd3f1924c5,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710767516192368029,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47b9b389eeab8ea23a39be0a8c62239
2,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07436321da95fc350373a1cacd27efc68f68a043a287e0def7655c4e07ace1f1,PodSandboxId:c9f7b9a8977124742aaee2192217bb7b805f41fb9e9c363984b88bb2926c4c07,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710767516081144223,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-229365,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 66e2564c3f5ce1cdf5c73a3d12c95511,},Annotations
:map[string]string{io.kubernetes.container.hash: 9a114cb2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=af0d44d0-aee5-48bb-b326-0cd670d16802 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	16c2068522aad       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   52bcdb9819669       busybox-5b5d89c9d6-cc5z6
	b79bb5b5fff7b       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      3 minutes ago       Running             kindnet-cni               1                   aa84ed1648c5e       kindnet-xcffd
	a7f87f713de60       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      3 minutes ago       Running             coredns                   1                   3a6e62bea40f4       coredns-5dd5756b68-c6dnv
	ae20eb2a607f0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       1                   a78d906e90628       storage-provisioner
	151dba6a079a7       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      3 minutes ago       Running             kube-proxy                1                   fefe9ea6b3d56       kube-proxy-vdnsn
	84cd5e0c1d2fd       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      3 minutes ago       Running             kube-scheduler            1                   dbe69e40c879c       kube-scheduler-multinode-229365
	909de68b9ffeb       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      3 minutes ago       Running             etcd                      1                   860e20b56d144       etcd-multinode-229365
	2dceda195e3c4       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      3 minutes ago       Running             kube-controller-manager   1                   66a24839549dc       kube-controller-manager-multinode-229365
	8dc67fccd3fe0       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      3 minutes ago       Running             kube-apiserver            1                   1610ef2a2e41b       kube-apiserver-multinode-229365
	422b25901a049       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   9 minutes ago       Exited              busybox                   0                   bcf600d198b03       busybox-5b5d89c9d6-cc5z6
	b7d6113ae413d       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      9 minutes ago       Exited              coredns                   0                   54beb227a8ffc       coredns-5dd5756b68-c6dnv
	983530ca0390f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       0                   2b91079d0b985       storage-provisioner
	1ed0bf4243c86       docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988    9 minutes ago       Exited              kindnet-cni               0                   f26b8c9a42276       kindnet-xcffd
	e592bd1c5e3a4       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      9 minutes ago       Exited              kube-proxy                0                   f1816dbdb1063       kube-proxy-vdnsn
	be9f7f6d2ab2c       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      10 minutes ago      Exited              kube-controller-manager   0                   74a85ae0bbe5a       kube-controller-manager-multinode-229365
	2f3ec2a2b2ec3       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      10 minutes ago      Exited              etcd                      0                   22b007523ac25       etcd-multinode-229365
	19006263151a3       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      10 minutes ago      Exited              kube-scheduler            0                   c1cafd55af3f2       kube-scheduler-multinode-229365
	07436321da95f       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      10 minutes ago      Exited              kube-apiserver            0                   c9f7b9a897712       kube-apiserver-multinode-229365
	
	
	==> coredns [a7f87f713de60a9fb094193e213dd7121bd45eacef190e08fa49305b85efca22] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:39283 - 1170 "HINFO IN 8233788883777066174.2102812111472599324. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015882894s
	
	
	==> coredns [b7d6113ae413de49195180cfe7d52eca2dc3e2cfe0e7c67300040abb9a92471a] <==
	[INFO] 10.244.1.2:47054 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001851632s
	[INFO] 10.244.1.2:43447 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000151811s
	[INFO] 10.244.1.2:48611 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000105634s
	[INFO] 10.244.1.2:53643 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001681384s
	[INFO] 10.244.1.2:58877 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000140424s
	[INFO] 10.244.1.2:46078 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000123455s
	[INFO] 10.244.1.2:35238 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00013228s
	[INFO] 10.244.0.3:35340 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102391s
	[INFO] 10.244.0.3:46548 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00010052s
	[INFO] 10.244.0.3:41780 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000072991s
	[INFO] 10.244.0.3:35378 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000270011s
	[INFO] 10.244.1.2:52811 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000175213s
	[INFO] 10.244.1.2:50087 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00015151s
	[INFO] 10.244.1.2:48793 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000169883s
	[INFO] 10.244.1.2:45004 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000074448s
	[INFO] 10.244.0.3:51339 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114263s
	[INFO] 10.244.0.3:37530 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000129803s
	[INFO] 10.244.0.3:49083 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000133048s
	[INFO] 10.244.0.3:36434 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000093106s
	[INFO] 10.244.1.2:42947 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000117709s
	[INFO] 10.244.1.2:52006 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000097564s
	[INFO] 10.244.1.2:49270 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000094592s
	[INFO] 10.244.1.2:44667 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000247057s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-229365
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-229365
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	                    minikube.k8s.io/name=multinode-229365
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T13_12_03_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 13:11:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-229365
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 13:22:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 13:18:28 +0000   Mon, 18 Mar 2024 13:11:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 13:18:28 +0000   Mon, 18 Mar 2024 13:11:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 13:18:28 +0000   Mon, 18 Mar 2024 13:11:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 13:18:28 +0000   Mon, 18 Mar 2024 13:12:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.156
	  Hostname:    multinode-229365
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 14c11cba73134b11abfeb410fbee10f1
	  System UUID:                14c11cba-7313-4b11-abfe-b410fbee10f1
	  Boot ID:                    6c85392c-0c28-4837-8562-81688e187c36
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-cc5z6                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	  kube-system                 coredns-5dd5756b68-c6dnv                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                 etcd-multinode-229365                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-xcffd                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-apiserver-multinode-229365             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-229365    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-vdnsn                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-multinode-229365             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m58s                  kube-proxy       
	  Normal  Starting                 3m45s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node multinode-229365 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node multinode-229365 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node multinode-229365 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                    node-controller  Node multinode-229365 event: Registered Node multinode-229365 in Controller
	  Normal  NodeReady                9m54s                  kubelet          Node multinode-229365 status is now: NodeReady
	  Normal  Starting                 3m51s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m50s (x8 over 3m50s)  kubelet          Node multinode-229365 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m50s (x8 over 3m50s)  kubelet          Node multinode-229365 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m50s (x7 over 3m50s)  kubelet          Node multinode-229365 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m50s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m34s                  node-controller  Node multinode-229365 event: Registered Node multinode-229365 in Controller
	
	
	Name:               multinode-229365-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-229365-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	                    minikube.k8s.io/name=multinode-229365
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_18T13_19_09_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 13:19:08 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-229365-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 13:19:49 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 18 Mar 2024 13:19:39 +0000   Mon, 18 Mar 2024 13:20:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 18 Mar 2024 13:19:39 +0000   Mon, 18 Mar 2024 13:20:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 18 Mar 2024 13:19:39 +0000   Mon, 18 Mar 2024 13:20:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 18 Mar 2024 13:19:39 +0000   Mon, 18 Mar 2024 13:20:31 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.29
	  Hostname:    multinode-229365-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 cebf78e8a3c8406ea493310af8f889fb
	  System UUID:                cebf78e8-a3c8-406e-a493-310af8f889fb
	  Boot ID:                    e8100713-4525-4820-8b50-52b8c858acd6
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-q6bt8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m12s
	  kube-system                 kindnet-jmf7p               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m19s
	  kube-system                 kube-proxy-ll5m7            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m15s                  kube-proxy       
	  Normal  Starting                 3m3s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m19s (x5 over 9m21s)  kubelet          Node multinode-229365-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m19s (x5 over 9m21s)  kubelet          Node multinode-229365-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m19s (x5 over 9m21s)  kubelet          Node multinode-229365-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                9m11s                  kubelet          Node multinode-229365-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m7s (x5 over 3m8s)    kubelet          Node multinode-229365-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m7s (x5 over 3m8s)    kubelet          Node multinode-229365-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m7s (x5 over 3m8s)    kubelet          Node multinode-229365-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                2m59s                  kubelet          Node multinode-229365-m02 status is now: NodeReady
	  Normal  NodeNotReady             104s                   node-controller  Node multinode-229365-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.065342] systemd-fstab-generator[614]: Ignoring "noauto" option for root device
	[  +0.206634] systemd-fstab-generator[628]: Ignoring "noauto" option for root device
	[  +0.143052] systemd-fstab-generator[640]: Ignoring "noauto" option for root device
	[  +0.264670] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +5.316178] systemd-fstab-generator[764]: Ignoring "noauto" option for root device
	[  +0.061646] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.062123] systemd-fstab-generator[939]: Ignoring "noauto" option for root device
	[  +1.103138] kauditd_printk_skb: 57 callbacks suppressed
	[Mar18 13:12] systemd-fstab-generator[1270]: Ignoring "noauto" option for root device
	[  +0.086304] kauditd_printk_skb: 30 callbacks suppressed
	[ +12.756200] systemd-fstab-generator[1459]: Ignoring "noauto" option for root device
	[  +0.127187] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.404043] kauditd_printk_skb: 56 callbacks suppressed
	[Mar18 13:13] kauditd_printk_skb: 18 callbacks suppressed
	[Mar18 13:18] systemd-fstab-generator[2794]: Ignoring "noauto" option for root device
	[  +0.174834] systemd-fstab-generator[2806]: Ignoring "noauto" option for root device
	[  +0.180142] systemd-fstab-generator[2820]: Ignoring "noauto" option for root device
	[  +0.142220] systemd-fstab-generator[2832]: Ignoring "noauto" option for root device
	[  +0.265692] systemd-fstab-generator[2856]: Ignoring "noauto" option for root device
	[  +0.781929] systemd-fstab-generator[2957]: Ignoring "noauto" option for root device
	[  +1.712960] systemd-fstab-generator[3079]: Ignoring "noauto" option for root device
	[  +4.646688] kauditd_printk_skb: 184 callbacks suppressed
	[ +12.039916] kauditd_printk_skb: 32 callbacks suppressed
	[  +0.755945] systemd-fstab-generator[3892]: Ignoring "noauto" option for root device
	[Mar18 13:19] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [2f3ec2a2b2ec33e092d68609e9f20300ee6f43760a0bc28aa75e7918936f8536] <==
	{"level":"warn","ts":"2024-03-18T13:13:45.999888Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T13:13:45.673255Z","time spent":"326.499636ms","remote":"127.0.0.1:55712","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1318,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/certificatesigningrequests/csr-p4tb6\" mod_revision:569 > success:<request_put:<key:\"/registry/certificatesigningrequests/csr-p4tb6\" value_size:1264 >> failure:<request_range:<key:\"/registry/certificatesigningrequests/csr-p4tb6\" > >"}
	{"level":"info","ts":"2024-03-18T13:13:46.000129Z","caller":"traceutil/trace.go:171","msg":"trace[1483217109] linearizableReadLoop","detail":"{readStateIndex:603; appliedIndex:602; }","duration":"325.991506ms","start":"2024-03-18T13:13:45.674118Z","end":"2024-03-18T13:13:46.00011Z","steps":["trace[1483217109] 'read index received'  (duration: 116.88235ms)","trace[1483217109] 'applied index is now lower than readState.Index'  (duration: 209.108013ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-18T13:13:46.000194Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"326.086657ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/certificatesigningrequests/csr-p4tb6\" ","response":"range_response_count:1 size:1333"}
	{"level":"info","ts":"2024-03-18T13:13:46.000212Z","caller":"traceutil/trace.go:171","msg":"trace[2145937851] range","detail":"{range_begin:/registry/certificatesigningrequests/csr-p4tb6; range_end:; response_count:1; response_revision:570; }","duration":"326.120015ms","start":"2024-03-18T13:13:45.674084Z","end":"2024-03-18T13:13:46.000204Z","steps":["trace[2145937851] 'agreement among raft nodes before linearized reading'  (duration: 326.069376ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T13:13:46.000234Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T13:13:45.674069Z","time spent":"326.159518ms","remote":"127.0.0.1:55712","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":1356,"request content":"key:\"/registry/certificatesigningrequests/csr-p4tb6\" "}
	{"level":"warn","ts":"2024-03-18T13:13:46.365631Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"209.375904ms","expected-duration":"100ms","prefix":"","request":"header:<ID:646985977988628741 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/certificatesigningrequests/csr-p4tb6\" mod_revision:570 > success:<request_put:<key:\"/registry/certificatesigningrequests/csr-p4tb6\" value_size:2297 >> failure:<request_range:<key:\"/registry/certificatesigningrequests/csr-p4tb6\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-18T13:13:46.365751Z","caller":"traceutil/trace.go:171","msg":"trace[1007431203] linearizableReadLoop","detail":"{readStateIndex:605; appliedIndex:604; }","duration":"232.004809ms","start":"2024-03-18T13:13:46.133736Z","end":"2024-03-18T13:13:46.365741Z","steps":["trace[1007431203] 'read index received'  (duration: 22.314723ms)","trace[1007431203] 'applied index is now lower than readState.Index'  (duration: 209.688969ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-18T13:13:46.365908Z","caller":"traceutil/trace.go:171","msg":"trace[1024785790] transaction","detail":"{read_only:false; response_revision:571; number_of_response:1; }","duration":"356.238544ms","start":"2024-03-18T13:13:46.009661Z","end":"2024-03-18T13:13:46.365899Z","steps":["trace[1024785790] 'process raft request'  (duration: 146.546257ms)","trace[1024785790] 'compare'  (duration: 209.042824ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-18T13:13:46.365981Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T13:13:46.009637Z","time spent":"356.310619ms","remote":"127.0.0.1:55712","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2351,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/certificatesigningrequests/csr-p4tb6\" mod_revision:570 > success:<request_put:<key:\"/registry/certificatesigningrequests/csr-p4tb6\" value_size:2297 >> failure:<request_range:<key:\"/registry/certificatesigningrequests/csr-p4tb6\" > >"}
	{"level":"warn","ts":"2024-03-18T13:13:46.366021Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"231.141342ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-18T13:13:46.366094Z","caller":"traceutil/trace.go:171","msg":"trace[1209698970] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:571; }","duration":"231.219268ms","start":"2024-03-18T13:13:46.134866Z","end":"2024-03-18T13:13:46.366085Z","steps":["trace[1209698970] 'agreement among raft nodes before linearized reading'  (duration: 231.114262ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T13:13:46.36623Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"232.555484ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices/\" range_end:\"/registry/apiregistration.k8s.io/apiservices0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-03-18T13:13:46.366278Z","caller":"traceutil/trace.go:171","msg":"trace[550660470] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/; range_end:/registry/apiregistration.k8s.io/apiservices0; response_count:0; response_revision:571; }","duration":"232.606295ms","start":"2024-03-18T13:13:46.133665Z","end":"2024-03-18T13:13:46.366272Z","steps":["trace[550660470] 'agreement among raft nodes before linearized reading'  (duration: 232.5385ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T13:14:02.626858Z","caller":"traceutil/trace.go:171","msg":"trace[1187718306] transaction","detail":"{read_only:false; response_revision:630; number_of_response:1; }","duration":"119.572765ms","start":"2024-03-18T13:14:02.507209Z","end":"2024-03-18T13:14:02.626782Z","steps":["trace[1187718306] 'process raft request'  (duration: 119.410239ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T13:16:49.969402Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-03-18T13:16:49.969618Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"multinode-229365","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.156:2380"],"advertise-client-urls":["https://192.168.39.156:2379"]}
	{"level":"warn","ts":"2024-03-18T13:16:49.969782Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-18T13:16:49.969972Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	WARNING: 2024/03/18 13:16:49 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-03-18T13:16:50.050911Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.156:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-18T13:16:50.050976Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.156:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-18T13:16:50.051038Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"45ea9d8f303c08fa","current-leader-member-id":"45ea9d8f303c08fa"}
	{"level":"info","ts":"2024-03-18T13:16:50.053637Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.156:2380"}
	{"level":"info","ts":"2024-03-18T13:16:50.053848Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.156:2380"}
	{"level":"info","ts":"2024-03-18T13:16:50.053915Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"multinode-229365","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.156:2380"],"advertise-client-urls":["https://192.168.39.156:2379"]}
	
	
	==> etcd [909de68b9ffeb63cdc62969865653b738c51939e0f04763d3dc44ecac47ca541] <==
	{"level":"info","ts":"2024-03-18T13:18:26.129621Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-18T13:18:26.129771Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-03-18T13:18:26.148426Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-18T13:18:26.157199Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-18T13:18:26.157222Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-18T13:18:26.157511Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.156:2380"}
	{"level":"info","ts":"2024-03-18T13:18:26.157554Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.156:2380"}
	{"level":"info","ts":"2024-03-18T13:18:26.162191Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"45ea9d8f303c08fa switched to configuration voters=(5038012371482446074)"}
	{"level":"info","ts":"2024-03-18T13:18:26.167296Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d1f5bcbb1e4f2572","local-member-id":"45ea9d8f303c08fa","added-peer-id":"45ea9d8f303c08fa","added-peer-peer-urls":["https://192.168.39.156:2380"]}
	{"level":"info","ts":"2024-03-18T13:18:26.16747Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d1f5bcbb1e4f2572","local-member-id":"45ea9d8f303c08fa","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T13:18:26.167534Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T13:18:27.182968Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"45ea9d8f303c08fa is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-18T13:18:27.183048Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"45ea9d8f303c08fa became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-18T13:18:27.183085Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"45ea9d8f303c08fa received MsgPreVoteResp from 45ea9d8f303c08fa at term 2"}
	{"level":"info","ts":"2024-03-18T13:18:27.183098Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"45ea9d8f303c08fa became candidate at term 3"}
	{"level":"info","ts":"2024-03-18T13:18:27.183103Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"45ea9d8f303c08fa received MsgVoteResp from 45ea9d8f303c08fa at term 3"}
	{"level":"info","ts":"2024-03-18T13:18:27.183113Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"45ea9d8f303c08fa became leader at term 3"}
	{"level":"info","ts":"2024-03-18T13:18:27.18319Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 45ea9d8f303c08fa elected leader 45ea9d8f303c08fa at term 3"}
	{"level":"info","ts":"2024-03-18T13:18:27.185877Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"45ea9d8f303c08fa","local-member-attributes":"{Name:multinode-229365 ClientURLs:[https://192.168.39.156:2379]}","request-path":"/0/members/45ea9d8f303c08fa/attributes","cluster-id":"d1f5bcbb1e4f2572","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-18T13:18:27.186044Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T13:18:27.186204Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T13:18:27.187473Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-18T13:18:27.187656Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-18T13:18:27.187698Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-18T13:18:27.220665Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.156:2379"}
	
	
	==> kernel <==
	 13:22:15 up 10 min,  0 users,  load average: 0.10, 0.28, 0.18
	Linux multinode-229365 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [1ed0bf4243c867d1a2add9c16e6e71ce1ae77906b2c6321cbe11b1c33a9d196b] <==
	I0318 13:16:00.892681       1 main.go:250] Node multinode-229365-m03 has CIDR [10.244.3.0/24] 
	I0318 13:16:10.901593       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0318 13:16:10.901883       1 main.go:227] handling current node
	I0318 13:16:10.901956       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0318 13:16:10.901992       1 main.go:250] Node multinode-229365-m02 has CIDR [10.244.1.0/24] 
	I0318 13:16:10.902136       1 main.go:223] Handling node with IPs: map[192.168.39.34:{}]
	I0318 13:16:10.902168       1 main.go:250] Node multinode-229365-m03 has CIDR [10.244.3.0/24] 
	I0318 13:16:20.915610       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0318 13:16:20.915660       1 main.go:227] handling current node
	I0318 13:16:20.915670       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0318 13:16:20.915676       1 main.go:250] Node multinode-229365-m02 has CIDR [10.244.1.0/24] 
	I0318 13:16:20.915870       1 main.go:223] Handling node with IPs: map[192.168.39.34:{}]
	I0318 13:16:20.915878       1 main.go:250] Node multinode-229365-m03 has CIDR [10.244.3.0/24] 
	I0318 13:16:30.923875       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0318 13:16:30.923927       1 main.go:227] handling current node
	I0318 13:16:30.923937       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0318 13:16:30.923943       1 main.go:250] Node multinode-229365-m02 has CIDR [10.244.1.0/24] 
	I0318 13:16:30.924134       1 main.go:223] Handling node with IPs: map[192.168.39.34:{}]
	I0318 13:16:30.924168       1 main.go:250] Node multinode-229365-m03 has CIDR [10.244.3.0/24] 
	I0318 13:16:40.941447       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0318 13:16:40.941507       1 main.go:227] handling current node
	I0318 13:16:40.941522       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0318 13:16:40.941529       1 main.go:250] Node multinode-229365-m02 has CIDR [10.244.1.0/24] 
	I0318 13:16:40.941672       1 main.go:223] Handling node with IPs: map[192.168.39.34:{}]
	I0318 13:16:40.941703       1 main.go:250] Node multinode-229365-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [b79bb5b5fff7b50b0330cb014a1df22e84215a782ada7bc3fa6966a0c064f000] <==
	I0318 13:21:10.742348       1 main.go:250] Node multinode-229365-m02 has CIDR [10.244.1.0/24] 
	I0318 13:21:20.749077       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0318 13:21:20.749130       1 main.go:227] handling current node
	I0318 13:21:20.749140       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0318 13:21:20.749147       1 main.go:250] Node multinode-229365-m02 has CIDR [10.244.1.0/24] 
	I0318 13:21:30.762703       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0318 13:21:30.762770       1 main.go:227] handling current node
	I0318 13:21:30.762846       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0318 13:21:30.762854       1 main.go:250] Node multinode-229365-m02 has CIDR [10.244.1.0/24] 
	I0318 13:21:40.775027       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0318 13:21:40.775091       1 main.go:227] handling current node
	I0318 13:21:40.775101       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0318 13:21:40.775107       1 main.go:250] Node multinode-229365-m02 has CIDR [10.244.1.0/24] 
	I0318 13:21:50.788338       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0318 13:21:50.788384       1 main.go:227] handling current node
	I0318 13:21:50.788395       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0318 13:21:50.788401       1 main.go:250] Node multinode-229365-m02 has CIDR [10.244.1.0/24] 
	I0318 13:22:00.802538       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0318 13:22:00.802611       1 main.go:227] handling current node
	I0318 13:22:00.802622       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0318 13:22:00.802628       1 main.go:250] Node multinode-229365-m02 has CIDR [10.244.1.0/24] 
	I0318 13:22:10.808865       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0318 13:22:10.808921       1 main.go:227] handling current node
	I0318 13:22:10.808955       1 main.go:223] Handling node with IPs: map[192.168.39.29:{}]
	I0318 13:22:10.808962       1 main.go:250] Node multinode-229365-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [07436321da95fc350373a1cacd27efc68f68a043a287e0def7655c4e07ace1f1] <==
	I0318 13:12:00.647409       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0318 13:12:00.687928       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0318 13:12:00.755528       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0318 13:12:00.763250       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.156]
	I0318 13:12:00.764271       1 controller.go:624] quota admission added evaluator for: endpoints
	I0318 13:12:00.768734       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0318 13:12:01.165684       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0318 13:12:02.372547       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0318 13:12:02.389966       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0318 13:12:02.409374       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0318 13:12:14.697602       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0318 13:12:15.162368       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0318 13:16:49.959846       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	E0318 13:16:49.990066       1 watcher.go:249] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0318 13:16:49.990169       1 watcher.go:249] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	E0318 13:16:49.990236       1 watcher.go:249] watch chan error: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	W0318 13:16:49.997033       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 13:16:49.997551       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 13:16:49.998012       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 13:16:50.007442       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 13:16:50.007898       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 13:16:50.008042       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 13:16:50.008746       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 13:16:50.009434       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0318 13:16:50.009538       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [8dc67fccd3fe04505303c6c10812d6cf0de31a255788d0152635133cf2e7b60d] <==
	I0318 13:18:28.577692       1 shared_informer.go:311] Waiting for caches to sync for cluster_authentication_trust_controller
	I0318 13:18:28.662454       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 13:18:28.662564       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 13:18:28.698380       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0318 13:18:28.753735       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0318 13:18:28.753904       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0318 13:18:28.757195       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0318 13:18:28.757517       1 aggregator.go:166] initial CRD sync complete...
	I0318 13:18:28.757558       1 autoregister_controller.go:141] Starting autoregister controller
	I0318 13:18:28.757565       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0318 13:18:28.757570       1 cache.go:39] Caches are synced for autoregister controller
	E0318 13:18:28.762093       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0318 13:18:28.765429       1 shared_informer.go:318] Caches are synced for configmaps
	I0318 13:18:28.766757       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0318 13:18:28.766845       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0318 13:18:28.777632       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0318 13:18:28.777783       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0318 13:18:29.582705       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0318 13:18:31.429282       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0318 13:18:31.557111       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0318 13:18:31.566323       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0318 13:18:31.642313       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0318 13:18:31.654675       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0318 13:18:41.407519       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0318 13:18:41.411989       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [2dceda195e3c4ef42a31a2a4418cb8dd7c5f9c0198cac9073992229ffde86404] <==
	I0318 13:19:09.512318       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="45.377µs"
	I0318 13:19:11.904865       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="78.001µs"
	I0318 13:19:16.641095       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-229365-m02"
	I0318 13:19:16.658704       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="38.499µs"
	I0318 13:19:16.674567       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="51.643µs"
	I0318 13:19:19.626379       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="6.368823ms"
	I0318 13:19:19.627020       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="49.574µs"
	I0318 13:19:21.365652       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-q6bt8" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-q6bt8"
	I0318 13:19:37.234157       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-229365-m02"
	I0318 13:19:39.751533       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-229365-m03\" does not exist"
	I0318 13:19:39.753993       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-229365-m02"
	I0318 13:19:39.768683       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-229365-m03" podCIDRs=["10.244.2.0/24"]
	I0318 13:19:47.428294       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-229365-m02"
	I0318 13:19:53.315411       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-229365-m02"
	I0318 13:19:56.386351       1 event.go:307] "Event occurred" object="multinode-229365-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-229365-m03 event: Removing Node multinode-229365-m03 from Controller"
	I0318 13:20:31.405557       1 event.go:307] "Event occurred" object="multinode-229365-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-229365-m02 status is now: NodeNotReady"
	I0318 13:20:31.421272       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-q6bt8" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:20:31.435481       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="13.890557ms"
	I0318 13:20:31.437156       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="52.41µs"
	I0318 13:20:31.442573       1 event.go:307] "Event occurred" object="kube-system/kindnet-jmf7p" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:20:31.457191       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-ll5m7" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:20:41.313308       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kindnet-w5prk"
	I0318 13:20:41.341707       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kindnet-w5prk"
	I0318 13:20:41.343659       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kube-proxy-kcrqn"
	I0318 13:20:41.373357       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-proxy-kcrqn"
	
	
	==> kube-controller-manager [be9f7f6d2ab2c134a06b968b0872b8cb611247f92c6cb9290cc429fd8df53875] <==
	I0318 13:13:11.705540       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="6.22142ms"
	I0318 13:13:11.707234       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="265.163µs"
	I0318 13:13:47.203542       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-229365-m02"
	I0318 13:13:47.204730       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-229365-m03\" does not exist"
	I0318 13:13:47.233563       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-w5prk"
	I0318 13:13:47.237262       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-kcrqn"
	I0318 13:13:47.240984       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-229365-m03" podCIDRs=["10.244.2.0/24"]
	I0318 13:13:49.815456       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-229365-m03"
	I0318 13:13:49.815709       1 event.go:307] "Event occurred" object="multinode-229365-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-229365-m03 event: Registered Node multinode-229365-m03 in Controller"
	I0318 13:13:57.742907       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-229365-m02"
	I0318 13:14:30.096076       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-229365-m02"
	I0318 13:14:32.885602       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-229365-m03\" does not exist"
	I0318 13:14:32.886947       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-229365-m02"
	I0318 13:14:32.907741       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-229365-m03" podCIDRs=["10.244.3.0/24"]
	I0318 13:14:43.322499       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-229365-m02"
	I0318 13:15:24.875440       1 event.go:307] "Event occurred" object="multinode-229365-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-229365-m03 status is now: NodeNotReady"
	I0318 13:15:24.876502       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-229365-m02"
	I0318 13:15:24.892477       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-kcrqn" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:15:24.909055       1 event.go:307] "Event occurred" object="kube-system/kindnet-w5prk" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:15:29.921328       1 event.go:307] "Event occurred" object="multinode-229365-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-229365-m02 status is now: NodeNotReady"
	I0318 13:15:29.939086       1 event.go:307] "Event occurred" object="kube-system/kindnet-jmf7p" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:15:29.963725       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-ll5m7" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:15:29.976875       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-pjdnm" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0318 13:15:29.983573       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="7.126664ms"
	I0318 13:15:29.984170       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="137.633µs"
	
	
	==> kube-proxy [151dba6a079a7fdc667d950d8f470b7279ae09ee891e6b0aa1c49a5bfd50ad51] <==
	I0318 13:18:29.898944       1 server_others.go:69] "Using iptables proxy"
	I0318 13:18:29.940379       1 node.go:141] Successfully retrieved node IP: 192.168.39.156
	I0318 13:18:30.022457       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 13:18:30.022599       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 13:18:30.026852       1 server_others.go:152] "Using iptables Proxier"
	I0318 13:18:30.027144       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 13:18:30.027555       1 server.go:846] "Version info" version="v1.28.4"
	I0318 13:18:30.027984       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:18:30.029583       1 config.go:188] "Starting service config controller"
	I0318 13:18:30.029878       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 13:18:30.030409       1 config.go:97] "Starting endpoint slice config controller"
	I0318 13:18:30.030522       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 13:18:30.032499       1 config.go:315] "Starting node config controller"
	I0318 13:18:30.033909       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 13:18:30.131017       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 13:18:30.131095       1 shared_informer.go:318] Caches are synced for service config
	I0318 13:18:30.135307       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [e592bd1c5e3a49aa85b970f22ac18459d8194ce47bc678edd833610ba2a2c25e] <==
	I0318 13:12:16.505084       1 server_others.go:69] "Using iptables proxy"
	I0318 13:12:16.531106       1 node.go:141] Successfully retrieved node IP: 192.168.39.156
	I0318 13:12:16.585366       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 13:12:16.585437       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 13:12:16.588343       1 server_others.go:152] "Using iptables Proxier"
	I0318 13:12:16.589205       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 13:12:16.589478       1 server.go:846] "Version info" version="v1.28.4"
	I0318 13:12:16.589515       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:12:16.591399       1 config.go:188] "Starting service config controller"
	I0318 13:12:16.592167       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 13:12:16.592264       1 config.go:315] "Starting node config controller"
	I0318 13:12:16.592302       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 13:12:16.593136       1 config.go:97] "Starting endpoint slice config controller"
	I0318 13:12:16.597060       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 13:12:16.693392       1 shared_informer.go:318] Caches are synced for node config
	I0318 13:12:16.693414       1 shared_informer.go:318] Caches are synced for service config
	I0318 13:12:16.697716       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [19006263151a38c6b497fb97e6b2cb21eefd9842ec9e068c25547ce8d19daf26] <==
	E0318 13:11:59.189920       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0318 13:11:59.187211       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0318 13:11:59.190220       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0318 13:12:00.035127       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0318 13:12:00.035328       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0318 13:12:00.043734       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0318 13:12:00.043902       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0318 13:12:00.270346       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0318 13:12:00.270724       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0318 13:12:00.296754       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0318 13:12:00.296906       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0318 13:12:00.325967       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0318 13:12:00.325992       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0318 13:12:00.339157       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0318 13:12:00.339346       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0318 13:12:00.351777       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0318 13:12:00.352312       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0318 13:12:00.352189       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0318 13:12:00.352545       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0318 13:12:00.365021       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0318 13:12:00.365080       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0318 13:12:02.178668       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 13:16:49.977285       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0318 13:16:49.983085       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0318 13:16:49.983513       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [84cd5e0c1d2fd1fecc58b212fc8ad4cfbcc77bc38679000d8a6752bc84f8db10] <==
	I0318 13:18:26.889385       1 serving.go:348] Generated self-signed cert in-memory
	W0318 13:18:28.654433       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0318 13:18:28.654519       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0318 13:18:28.654549       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0318 13:18:28.654573       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0318 13:18:28.705286       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0318 13:18:28.705333       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:18:28.707005       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0318 13:18:28.707146       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 13:18:28.707526       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0318 13:18:28.707629       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 13:18:28.807901       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 18 13:20:25 multinode-229365 kubelet[3086]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 13:20:25 multinode-229365 kubelet[3086]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 13:20:25 multinode-229365 kubelet[3086]: E0318 13:20:25.108622    3086 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod47b9b389eeab8ea23a39be0a8c622392/crio-c1cafd55af3f245ad89e571170c0779b6e72a1d96be5720a6240d2dd3f1924c5: Error finding container c1cafd55af3f245ad89e571170c0779b6e72a1d96be5720a6240d2dd3f1924c5: Status 404 returned error can't find the container with id c1cafd55af3f245ad89e571170c0779b6e72a1d96be5720a6240d2dd3f1924c5
	Mar 18 13:20:25 multinode-229365 kubelet[3086]: E0318 13:20:25.109160    3086 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pode9702ef6-2066-470d-a8c9-d0857dc8b63a/crio-2b91079d0b985aa4dbab7e328b0acbe4b8f84743d72e541c17662e32579bba63: Error finding container 2b91079d0b985aa4dbab7e328b0acbe4b8f84743d72e541c17662e32579bba63: Status 404 returned error can't find the container with id 2b91079d0b985aa4dbab7e328b0acbe4b8f84743d72e541c17662e32579bba63
	Mar 18 13:20:25 multinode-229365 kubelet[3086]: E0318 13:20:25.109550    3086 manager.go:1106] Failed to create existing container: /kubepods/poda92bfa0e-6f47-44a9-a32c-9628f567e5bc/crio-f26b8c9a42276850fa34573edf63a613e7105a4b040a6bb10bf8e442a9f0069b: Error finding container f26b8c9a42276850fa34573edf63a613e7105a4b040a6bb10bf8e442a9f0069b: Status 404 returned error can't find the container with id f26b8c9a42276850fa34573edf63a613e7105a4b040a6bb10bf8e442a9f0069b
	Mar 18 13:20:25 multinode-229365 kubelet[3086]: E0318 13:20:25.110021    3086 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod3c6e16db-16e4-468f-919c-df4c54cf0e94/crio-54beb227a8ffc4885427135a60029659de226e2514fb90466196e2c6e1a6c85e: Error finding container 54beb227a8ffc4885427135a60029659de226e2514fb90466196e2c6e1a6c85e: Status 404 returned error can't find the container with id 54beb227a8ffc4885427135a60029659de226e2514fb90466196e2c6e1a6c85e
	Mar 18 13:20:25 multinode-229365 kubelet[3086]: E0318 13:20:25.110347    3086 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pode62b1b7d-04c0-47fb-9ec9-6d6e34d11c4d/crio-bcf600d198b03627f34dabd31df428dedc966e5cb3e8e22976ed87a67eabcc46: Error finding container bcf600d198b03627f34dabd31df428dedc966e5cb3e8e22976ed87a67eabcc46: Status 404 returned error can't find the container with id bcf600d198b03627f34dabd31df428dedc966e5cb3e8e22976ed87a67eabcc46
	Mar 18 13:20:25 multinode-229365 kubelet[3086]: E0318 13:20:25.110664    3086 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pod6e762a2b-2d25-4f3e-8860-192c60a97ad8/crio-f1816dbdb10636f1bbbb75614ab65cc8ae0624719f14209f8009e2a06bf49d15: Error finding container f1816dbdb10636f1bbbb75614ab65cc8ae0624719f14209f8009e2a06bf49d15: Status 404 returned error can't find the container with id f1816dbdb10636f1bbbb75614ab65cc8ae0624719f14209f8009e2a06bf49d15
	Mar 18 13:20:25 multinode-229365 kubelet[3086]: E0318 13:20:25.111097    3086 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod326c3bfa26902a35a907a995f7624593/crio-22b007523ac2586f8b4f20da04f957131182efb0693bdb7ce17fd4e112b6c960: Error finding container 22b007523ac2586f8b4f20da04f957131182efb0693bdb7ce17fd4e112b6c960: Status 404 returned error can't find the container with id 22b007523ac2586f8b4f20da04f957131182efb0693bdb7ce17fd4e112b6c960
	Mar 18 13:20:25 multinode-229365 kubelet[3086]: E0318 13:20:25.111464    3086 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod66e2564c3f5ce1cdf5c73a3d12c95511/crio-c9f7b9a8977124742aaee2192217bb7b805f41fb9e9c363984b88bb2926c4c07: Error finding container c9f7b9a8977124742aaee2192217bb7b805f41fb9e9c363984b88bb2926c4c07: Status 404 returned error can't find the container with id c9f7b9a8977124742aaee2192217bb7b805f41fb9e9c363984b88bb2926c4c07
	Mar 18 13:20:25 multinode-229365 kubelet[3086]: E0318 13:20:25.111738    3086 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod131ed49275b5405a33eedc6996906d41/crio-74a85ae0bbe5a56166484ef57a28b7e92efbb5a04af720641f6533db7329743d: Error finding container 74a85ae0bbe5a56166484ef57a28b7e92efbb5a04af720641f6533db7329743d: Status 404 returned error can't find the container with id 74a85ae0bbe5a56166484ef57a28b7e92efbb5a04af720641f6533db7329743d
	Mar 18 13:21:25 multinode-229365 kubelet[3086]: E0318 13:21:25.108672    3086 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pode9702ef6-2066-470d-a8c9-d0857dc8b63a/crio-2b91079d0b985aa4dbab7e328b0acbe4b8f84743d72e541c17662e32579bba63: Error finding container 2b91079d0b985aa4dbab7e328b0acbe4b8f84743d72e541c17662e32579bba63: Status 404 returned error can't find the container with id 2b91079d0b985aa4dbab7e328b0acbe4b8f84743d72e541c17662e32579bba63
	Mar 18 13:21:25 multinode-229365 kubelet[3086]: E0318 13:21:25.109586    3086 manager.go:1106] Failed to create existing container: /kubepods/poda92bfa0e-6f47-44a9-a32c-9628f567e5bc/crio-f26b8c9a42276850fa34573edf63a613e7105a4b040a6bb10bf8e442a9f0069b: Error finding container f26b8c9a42276850fa34573edf63a613e7105a4b040a6bb10bf8e442a9f0069b: Status 404 returned error can't find the container with id f26b8c9a42276850fa34573edf63a613e7105a4b040a6bb10bf8e442a9f0069b
	Mar 18 13:21:25 multinode-229365 kubelet[3086]: E0318 13:21:25.109915    3086 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pod6e762a2b-2d25-4f3e-8860-192c60a97ad8/crio-f1816dbdb10636f1bbbb75614ab65cc8ae0624719f14209f8009e2a06bf49d15: Error finding container f1816dbdb10636f1bbbb75614ab65cc8ae0624719f14209f8009e2a06bf49d15: Status 404 returned error can't find the container with id f1816dbdb10636f1bbbb75614ab65cc8ae0624719f14209f8009e2a06bf49d15
	Mar 18 13:21:25 multinode-229365 kubelet[3086]: E0318 13:21:25.110311    3086 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod326c3bfa26902a35a907a995f7624593/crio-22b007523ac2586f8b4f20da04f957131182efb0693bdb7ce17fd4e112b6c960: Error finding container 22b007523ac2586f8b4f20da04f957131182efb0693bdb7ce17fd4e112b6c960: Status 404 returned error can't find the container with id 22b007523ac2586f8b4f20da04f957131182efb0693bdb7ce17fd4e112b6c960
	Mar 18 13:21:25 multinode-229365 kubelet[3086]: E0318 13:21:25.110716    3086 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod47b9b389eeab8ea23a39be0a8c622392/crio-c1cafd55af3f245ad89e571170c0779b6e72a1d96be5720a6240d2dd3f1924c5: Error finding container c1cafd55af3f245ad89e571170c0779b6e72a1d96be5720a6240d2dd3f1924c5: Status 404 returned error can't find the container with id c1cafd55af3f245ad89e571170c0779b6e72a1d96be5720a6240d2dd3f1924c5
	Mar 18 13:21:25 multinode-229365 kubelet[3086]: E0318 13:21:25.111092    3086 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod66e2564c3f5ce1cdf5c73a3d12c95511/crio-c9f7b9a8977124742aaee2192217bb7b805f41fb9e9c363984b88bb2926c4c07: Error finding container c9f7b9a8977124742aaee2192217bb7b805f41fb9e9c363984b88bb2926c4c07: Status 404 returned error can't find the container with id c9f7b9a8977124742aaee2192217bb7b805f41fb9e9c363984b88bb2926c4c07
	Mar 18 13:21:25 multinode-229365 kubelet[3086]: E0318 13:21:25.111414    3086 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod3c6e16db-16e4-468f-919c-df4c54cf0e94/crio-54beb227a8ffc4885427135a60029659de226e2514fb90466196e2c6e1a6c85e: Error finding container 54beb227a8ffc4885427135a60029659de226e2514fb90466196e2c6e1a6c85e: Status 404 returned error can't find the container with id 54beb227a8ffc4885427135a60029659de226e2514fb90466196e2c6e1a6c85e
	Mar 18 13:21:25 multinode-229365 kubelet[3086]: E0318 13:21:25.111734    3086 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod131ed49275b5405a33eedc6996906d41/crio-74a85ae0bbe5a56166484ef57a28b7e92efbb5a04af720641f6533db7329743d: Error finding container 74a85ae0bbe5a56166484ef57a28b7e92efbb5a04af720641f6533db7329743d: Status 404 returned error can't find the container with id 74a85ae0bbe5a56166484ef57a28b7e92efbb5a04af720641f6533db7329743d
	Mar 18 13:21:25 multinode-229365 kubelet[3086]: E0318 13:21:25.112152    3086 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pode62b1b7d-04c0-47fb-9ec9-6d6e34d11c4d/crio-bcf600d198b03627f34dabd31df428dedc966e5cb3e8e22976ed87a67eabcc46: Error finding container bcf600d198b03627f34dabd31df428dedc966e5cb3e8e22976ed87a67eabcc46: Status 404 returned error can't find the container with id bcf600d198b03627f34dabd31df428dedc966e5cb3e8e22976ed87a67eabcc46
	Mar 18 13:21:25 multinode-229365 kubelet[3086]: E0318 13:21:25.113554    3086 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 13:21:25 multinode-229365 kubelet[3086]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 13:21:25 multinode-229365 kubelet[3086]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 13:21:25 multinode-229365 kubelet[3086]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 13:21:25 multinode-229365 kubelet[3086]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	E0318 13:22:14.385889 1143387 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18429-1106816/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-229365 -n multinode-229365
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-229365 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.74s)

x
+
TestPreload (252.74s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-251198 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0318 13:26:24.905535 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/functional-377562/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-251198 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m49.548820084s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-251198 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-251198 image pull gcr.io/k8s-minikube/busybox: (2.367115858s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-251198
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-251198: (8.309897732s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-251198 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0318 13:29:30.297198 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-251198 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m9.514365979s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-251198 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

-- /stdout --
panic.go:626: *** TestPreload FAILED at 2024-03-18 13:30:09.876574907 +0000 UTC m=+4487.263489173
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-251198 -n test-preload-251198
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-251198 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-251198 logs -n 25: (1.149296341s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-229365 ssh -n                                                                 | multinode-229365     | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | multinode-229365-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-229365 ssh -n multinode-229365 sudo cat                                       | multinode-229365     | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | /home/docker/cp-test_multinode-229365-m03_multinode-229365.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-229365 cp multinode-229365-m03:/home/docker/cp-test.txt                       | multinode-229365     | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | multinode-229365-m02:/home/docker/cp-test_multinode-229365-m03_multinode-229365-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-229365 ssh -n                                                                 | multinode-229365     | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | multinode-229365-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-229365 ssh -n multinode-229365-m02 sudo cat                                   | multinode-229365     | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | /home/docker/cp-test_multinode-229365-m03_multinode-229365-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-229365 node stop m03                                                          | multinode-229365     | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	| node    | multinode-229365 node start                                                             | multinode-229365     | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC | 18 Mar 24 13:14 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-229365                                                                | multinode-229365     | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC |                     |
	| stop    | -p multinode-229365                                                                     | multinode-229365     | jenkins | v1.32.0 | 18 Mar 24 13:14 UTC |                     |
	| start   | -p multinode-229365                                                                     | multinode-229365     | jenkins | v1.32.0 | 18 Mar 24 13:16 UTC | 18 Mar 24 13:19 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-229365                                                                | multinode-229365     | jenkins | v1.32.0 | 18 Mar 24 13:19 UTC |                     |
	| node    | multinode-229365 node delete                                                            | multinode-229365     | jenkins | v1.32.0 | 18 Mar 24 13:19 UTC | 18 Mar 24 13:19 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-229365 stop                                                                   | multinode-229365     | jenkins | v1.32.0 | 18 Mar 24 13:19 UTC |                     |
	| start   | -p multinode-229365                                                                     | multinode-229365     | jenkins | v1.32.0 | 18 Mar 24 13:22 UTC | 18 Mar 24 13:25 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-229365                                                                | multinode-229365     | jenkins | v1.32.0 | 18 Mar 24 13:25 UTC |                     |
	| start   | -p multinode-229365-m02                                                                 | multinode-229365-m02 | jenkins | v1.32.0 | 18 Mar 24 13:25 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-229365-m03                                                                 | multinode-229365-m03 | jenkins | v1.32.0 | 18 Mar 24 13:25 UTC | 18 Mar 24 13:25 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-229365                                                                 | multinode-229365     | jenkins | v1.32.0 | 18 Mar 24 13:25 UTC |                     |
	| delete  | -p multinode-229365-m03                                                                 | multinode-229365-m03 | jenkins | v1.32.0 | 18 Mar 24 13:25 UTC | 18 Mar 24 13:25 UTC |
	| delete  | -p multinode-229365                                                                     | multinode-229365     | jenkins | v1.32.0 | 18 Mar 24 13:25 UTC | 18 Mar 24 13:25 UTC |
	| start   | -p test-preload-251198                                                                  | test-preload-251198  | jenkins | v1.32.0 | 18 Mar 24 13:25 UTC | 18 Mar 24 13:28 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-251198 image pull                                                          | test-preload-251198  | jenkins | v1.32.0 | 18 Mar 24 13:28 UTC | 18 Mar 24 13:28 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-251198                                                                  | test-preload-251198  | jenkins | v1.32.0 | 18 Mar 24 13:28 UTC | 18 Mar 24 13:29 UTC |
	| start   | -p test-preload-251198                                                                  | test-preload-251198  | jenkins | v1.32.0 | 18 Mar 24 13:29 UTC | 18 Mar 24 13:30 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-251198 image list                                                          | test-preload-251198  | jenkins | v1.32.0 | 18 Mar 24 13:30 UTC | 18 Mar 24 13:30 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 13:29:00
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
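
Each entry below follows the glog-style layout named above; decoding the first entry, with every field taken from the log itself:

    # I0318 13:29:00.182614 1145570 out.go:291] Setting OutFile to fd 1 ...
    #  I                -> severity: I=Info (W/E/F would be Warning/Error/Fatal)
    #  0318             -> mmdd, i.e. March 18
    #  13:29:00.182614  -> hh:mm:ss.uuuuuu wall-clock time
    #  1145570          -> threadid (here the minikube process id for this run)
    #  out.go:291       -> source file and line that emitted the message
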
	I0318 13:29:00.182614 1145570 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:29:00.183136 1145570 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:29:00.183160 1145570 out.go:304] Setting ErrFile to fd 2...
	I0318 13:29:00.183167 1145570 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:29:00.183642 1145570 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
	I0318 13:29:00.184695 1145570 out.go:298] Setting JSON to false
	I0318 13:29:00.185693 1145570 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":18687,"bootTime":1710749853,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 13:29:00.185758 1145570 start.go:139] virtualization: kvm guest
	I0318 13:29:00.187803 1145570 out.go:177] * [test-preload-251198] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 13:29:00.189677 1145570 notify.go:220] Checking for updates...
	I0318 13:29:00.189702 1145570 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 13:29:00.191275 1145570 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:29:00.192583 1145570 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 13:29:00.193954 1145570 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18429-1106816/.minikube
	I0318 13:29:00.195300 1145570 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 13:29:00.196650 1145570 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:29:00.198281 1145570 config.go:182] Loaded profile config "test-preload-251198": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0318 13:29:00.198682 1145570 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:29:00.198738 1145570 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:29:00.213697 1145570 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46381
	I0318 13:29:00.214169 1145570 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:29:00.214749 1145570 main.go:141] libmachine: Using API Version  1
	I0318 13:29:00.214771 1145570 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:29:00.215088 1145570 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:29:00.215271 1145570 main.go:141] libmachine: (test-preload-251198) Calling .DriverName
	I0318 13:29:00.217285 1145570 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0318 13:29:00.218609 1145570 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:29:00.218925 1145570 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:29:00.218974 1145570 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:29:00.233819 1145570 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34873
	I0318 13:29:00.234273 1145570 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:29:00.234820 1145570 main.go:141] libmachine: Using API Version  1
	I0318 13:29:00.234840 1145570 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:29:00.235180 1145570 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:29:00.235413 1145570 main.go:141] libmachine: (test-preload-251198) Calling .DriverName
	I0318 13:29:00.270219 1145570 out.go:177] * Using the kvm2 driver based on existing profile
	I0318 13:29:00.271718 1145570 start.go:297] selected driver: kvm2
	I0318 13:29:00.271728 1145570 start.go:901] validating driver "kvm2" against &{Name:test-preload-251198 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-251198 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.133 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:29:00.271854 1145570 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:29:00.272577 1145570 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:29:00.272660 1145570 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18429-1106816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 13:29:00.287848 1145570 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 13:29:00.288161 1145570 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:29:00.288232 1145570 cni.go:84] Creating CNI manager for ""
	I0318 13:29:00.288244 1145570 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:29:00.288294 1145570 start.go:340] cluster config:
	{Name:test-preload-251198 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-251198 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.133 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:29:00.288432 1145570 iso.go:125] acquiring lock: {Name:mke5f9989ad60de6f54f25c411af7da9f3932a4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:29:00.290356 1145570 out.go:177] * Starting "test-preload-251198" primary control-plane node in "test-preload-251198" cluster
	I0318 13:29:00.291546 1145570 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0318 13:29:00.394999 1145570 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0318 13:29:00.395030 1145570 cache.go:56] Caching tarball of preloaded images
	I0318 13:29:00.395219 1145570 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0318 13:29:00.397336 1145570 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0318 13:29:00.398759 1145570 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0318 13:29:00.510807 1145570 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0318 13:29:12.624070 1145570 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0318 13:29:12.624190 1145570 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0318 13:29:13.483438 1145570 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
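
The preload step above appends an md5 checksum to the download URL and verifies the saved tarball against it before trusting the cache. A rough manual equivalent of that check, assuming curl and md5sum are available and using an arbitrary local filename:

    # fetch the same preload tarball that minikube cached above
    URL='https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4'
    curl -fL -o preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 "$URL"
    # compare against the md5 from the ?checksum=md5:... query string in the log
    echo 'b2ee0ab83ed99f9e7ff71cb0cf27e8f9  preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4' | md5sum -c -
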
	I0318 13:29:13.483620 1145570 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/test-preload-251198/config.json ...
	I0318 13:29:13.483897 1145570 start.go:360] acquireMachinesLock for test-preload-251198: {Name:mk0b1a2e71faf079d0c16c4e1393bdff17be3dfd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:29:13.483987 1145570 start.go:364] duration metric: took 62.409µs to acquireMachinesLock for "test-preload-251198"
	I0318 13:29:13.484009 1145570 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:29:13.484022 1145570 fix.go:54] fixHost starting: 
	I0318 13:29:13.484391 1145570 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:29:13.484438 1145570 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:29:13.499284 1145570 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36093
	I0318 13:29:13.499824 1145570 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:29:13.500301 1145570 main.go:141] libmachine: Using API Version  1
	I0318 13:29:13.500346 1145570 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:29:13.500713 1145570 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:29:13.500913 1145570 main.go:141] libmachine: (test-preload-251198) Calling .DriverName
	I0318 13:29:13.501065 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetState
	I0318 13:29:13.502878 1145570 fix.go:112] recreateIfNeeded on test-preload-251198: state=Stopped err=<nil>
	I0318 13:29:13.502911 1145570 main.go:141] libmachine: (test-preload-251198) Calling .DriverName
	W0318 13:29:13.503090 1145570 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:29:13.506025 1145570 out.go:177] * Restarting existing kvm2 VM for "test-preload-251198" ...
	I0318 13:29:13.507563 1145570 main.go:141] libmachine: (test-preload-251198) Calling .Start
	I0318 13:29:13.507738 1145570 main.go:141] libmachine: (test-preload-251198) Ensuring networks are active...
	I0318 13:29:13.508453 1145570 main.go:141] libmachine: (test-preload-251198) Ensuring network default is active
	I0318 13:29:13.508834 1145570 main.go:141] libmachine: (test-preload-251198) Ensuring network mk-test-preload-251198 is active
	I0318 13:29:13.509271 1145570 main.go:141] libmachine: (test-preload-251198) Getting domain xml...
	I0318 13:29:13.509996 1145570 main.go:141] libmachine: (test-preload-251198) Creating domain...
	I0318 13:29:14.683848 1145570 main.go:141] libmachine: (test-preload-251198) Waiting to get IP...
	I0318 13:29:14.684749 1145570 main.go:141] libmachine: (test-preload-251198) DBG | domain test-preload-251198 has defined MAC address 52:54:00:9e:74:1f in network mk-test-preload-251198
	I0318 13:29:14.685161 1145570 main.go:141] libmachine: (test-preload-251198) DBG | unable to find current IP address of domain test-preload-251198 in network mk-test-preload-251198
	I0318 13:29:14.685246 1145570 main.go:141] libmachine: (test-preload-251198) DBG | I0318 13:29:14.685140 1145638 retry.go:31] will retry after 266.063794ms: waiting for machine to come up
	I0318 13:29:14.952559 1145570 main.go:141] libmachine: (test-preload-251198) DBG | domain test-preload-251198 has defined MAC address 52:54:00:9e:74:1f in network mk-test-preload-251198
	I0318 13:29:14.953015 1145570 main.go:141] libmachine: (test-preload-251198) DBG | unable to find current IP address of domain test-preload-251198 in network mk-test-preload-251198
	I0318 13:29:14.953061 1145570 main.go:141] libmachine: (test-preload-251198) DBG | I0318 13:29:14.952983 1145638 retry.go:31] will retry after 249.889922ms: waiting for machine to come up
	I0318 13:29:15.204517 1145570 main.go:141] libmachine: (test-preload-251198) DBG | domain test-preload-251198 has defined MAC address 52:54:00:9e:74:1f in network mk-test-preload-251198
	I0318 13:29:15.204910 1145570 main.go:141] libmachine: (test-preload-251198) DBG | unable to find current IP address of domain test-preload-251198 in network mk-test-preload-251198
	I0318 13:29:15.204961 1145570 main.go:141] libmachine: (test-preload-251198) DBG | I0318 13:29:15.204851 1145638 retry.go:31] will retry after 442.291975ms: waiting for machine to come up
	I0318 13:29:15.648447 1145570 main.go:141] libmachine: (test-preload-251198) DBG | domain test-preload-251198 has defined MAC address 52:54:00:9e:74:1f in network mk-test-preload-251198
	I0318 13:29:15.648789 1145570 main.go:141] libmachine: (test-preload-251198) DBG | unable to find current IP address of domain test-preload-251198 in network mk-test-preload-251198
	I0318 13:29:15.648815 1145570 main.go:141] libmachine: (test-preload-251198) DBG | I0318 13:29:15.648746 1145638 retry.go:31] will retry after 566.036169ms: waiting for machine to come up
	I0318 13:29:16.216436 1145570 main.go:141] libmachine: (test-preload-251198) DBG | domain test-preload-251198 has defined MAC address 52:54:00:9e:74:1f in network mk-test-preload-251198
	I0318 13:29:16.216913 1145570 main.go:141] libmachine: (test-preload-251198) DBG | unable to find current IP address of domain test-preload-251198 in network mk-test-preload-251198
	I0318 13:29:16.216938 1145570 main.go:141] libmachine: (test-preload-251198) DBG | I0318 13:29:16.216873 1145638 retry.go:31] will retry after 469.463483ms: waiting for machine to come up
	I0318 13:29:16.687463 1145570 main.go:141] libmachine: (test-preload-251198) DBG | domain test-preload-251198 has defined MAC address 52:54:00:9e:74:1f in network mk-test-preload-251198
	I0318 13:29:16.687816 1145570 main.go:141] libmachine: (test-preload-251198) DBG | unable to find current IP address of domain test-preload-251198 in network mk-test-preload-251198
	I0318 13:29:16.687848 1145570 main.go:141] libmachine: (test-preload-251198) DBG | I0318 13:29:16.687760 1145638 retry.go:31] will retry after 668.422374ms: waiting for machine to come up
	I0318 13:29:17.357342 1145570 main.go:141] libmachine: (test-preload-251198) DBG | domain test-preload-251198 has defined MAC address 52:54:00:9e:74:1f in network mk-test-preload-251198
	I0318 13:29:17.357909 1145570 main.go:141] libmachine: (test-preload-251198) DBG | unable to find current IP address of domain test-preload-251198 in network mk-test-preload-251198
	I0318 13:29:17.357941 1145570 main.go:141] libmachine: (test-preload-251198) DBG | I0318 13:29:17.357846 1145638 retry.go:31] will retry after 916.389468ms: waiting for machine to come up
	I0318 13:29:18.275933 1145570 main.go:141] libmachine: (test-preload-251198) DBG | domain test-preload-251198 has defined MAC address 52:54:00:9e:74:1f in network mk-test-preload-251198
	I0318 13:29:18.276368 1145570 main.go:141] libmachine: (test-preload-251198) DBG | unable to find current IP address of domain test-preload-251198 in network mk-test-preload-251198
	I0318 13:29:18.276399 1145570 main.go:141] libmachine: (test-preload-251198) DBG | I0318 13:29:18.276315 1145638 retry.go:31] will retry after 1.244828058s: waiting for machine to come up
	I0318 13:29:19.523478 1145570 main.go:141] libmachine: (test-preload-251198) DBG | domain test-preload-251198 has defined MAC address 52:54:00:9e:74:1f in network mk-test-preload-251198
	I0318 13:29:19.523879 1145570 main.go:141] libmachine: (test-preload-251198) DBG | unable to find current IP address of domain test-preload-251198 in network mk-test-preload-251198
	I0318 13:29:19.523911 1145570 main.go:141] libmachine: (test-preload-251198) DBG | I0318 13:29:19.523822 1145638 retry.go:31] will retry after 1.274778741s: waiting for machine to come up
	I0318 13:29:20.800210 1145570 main.go:141] libmachine: (test-preload-251198) DBG | domain test-preload-251198 has defined MAC address 52:54:00:9e:74:1f in network mk-test-preload-251198
	I0318 13:29:20.800658 1145570 main.go:141] libmachine: (test-preload-251198) DBG | unable to find current IP address of domain test-preload-251198 in network mk-test-preload-251198
	I0318 13:29:20.800689 1145570 main.go:141] libmachine: (test-preload-251198) DBG | I0318 13:29:20.800622 1145638 retry.go:31] will retry after 2.167069304s: waiting for machine to come up
	I0318 13:29:22.971147 1145570 main.go:141] libmachine: (test-preload-251198) DBG | domain test-preload-251198 has defined MAC address 52:54:00:9e:74:1f in network mk-test-preload-251198
	I0318 13:29:22.971616 1145570 main.go:141] libmachine: (test-preload-251198) DBG | unable to find current IP address of domain test-preload-251198 in network mk-test-preload-251198
	I0318 13:29:22.971650 1145570 main.go:141] libmachine: (test-preload-251198) DBG | I0318 13:29:22.971551 1145638 retry.go:31] will retry after 2.756607699s: waiting for machine to come up
	I0318 13:29:25.729959 1145570 main.go:141] libmachine: (test-preload-251198) DBG | domain test-preload-251198 has defined MAC address 52:54:00:9e:74:1f in network mk-test-preload-251198
	I0318 13:29:25.730430 1145570 main.go:141] libmachine: (test-preload-251198) DBG | unable to find current IP address of domain test-preload-251198 in network mk-test-preload-251198
	I0318 13:29:25.730459 1145570 main.go:141] libmachine: (test-preload-251198) DBG | I0318 13:29:25.730379 1145638 retry.go:31] will retry after 3.187677274s: waiting for machine to come up
	I0318 13:29:28.920247 1145570 main.go:141] libmachine: (test-preload-251198) DBG | domain test-preload-251198 has defined MAC address 52:54:00:9e:74:1f in network mk-test-preload-251198
	I0318 13:29:28.920602 1145570 main.go:141] libmachine: (test-preload-251198) DBG | unable to find current IP address of domain test-preload-251198 in network mk-test-preload-251198
	I0318 13:29:28.920632 1145570 main.go:141] libmachine: (test-preload-251198) DBG | I0318 13:29:28.920551 1145638 retry.go:31] will retry after 3.682501218s: waiting for machine to come up
	I0318 13:29:32.607321 1145570 main.go:141] libmachine: (test-preload-251198) DBG | domain test-preload-251198 has defined MAC address 52:54:00:9e:74:1f in network mk-test-preload-251198
	I0318 13:29:32.607748 1145570 main.go:141] libmachine: (test-preload-251198) DBG | domain test-preload-251198 has current primary IP address 192.168.39.133 and MAC address 52:54:00:9e:74:1f in network mk-test-preload-251198
	I0318 13:29:32.607763 1145570 main.go:141] libmachine: (test-preload-251198) Found IP for machine: 192.168.39.133
	I0318 13:29:32.607772 1145570 main.go:141] libmachine: (test-preload-251198) Reserving static IP address...
	I0318 13:29:32.608237 1145570 main.go:141] libmachine: (test-preload-251198) DBG | found host DHCP lease matching {name: "test-preload-251198", mac: "52:54:00:9e:74:1f", ip: "192.168.39.133"} in network mk-test-preload-251198: {Iface:virbr1 ExpiryTime:2024-03-18 14:26:15 +0000 UTC Type:0 Mac:52:54:00:9e:74:1f Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:test-preload-251198 Clientid:01:52:54:00:9e:74:1f}
	I0318 13:29:32.608277 1145570 main.go:141] libmachine: (test-preload-251198) Reserved static IP address: 192.168.39.133
	I0318 13:29:32.608296 1145570 main.go:141] libmachine: (test-preload-251198) DBG | skip adding static IP to network mk-test-preload-251198 - found existing host DHCP lease matching {name: "test-preload-251198", mac: "52:54:00:9e:74:1f", ip: "192.168.39.133"}
	I0318 13:29:32.608313 1145570 main.go:141] libmachine: (test-preload-251198) DBG | Getting to WaitForSSH function...
	I0318 13:29:32.608345 1145570 main.go:141] libmachine: (test-preload-251198) Waiting for SSH to be available...
	I0318 13:29:32.610349 1145570 main.go:141] libmachine: (test-preload-251198) DBG | domain test-preload-251198 has defined MAC address 52:54:00:9e:74:1f in network mk-test-preload-251198
	I0318 13:29:32.610638 1145570 main.go:141] libmachine: (test-preload-251198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:74:1f", ip: ""} in network mk-test-preload-251198: {Iface:virbr1 ExpiryTime:2024-03-18 14:26:15 +0000 UTC Type:0 Mac:52:54:00:9e:74:1f Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:test-preload-251198 Clientid:01:52:54:00:9e:74:1f}
	I0318 13:29:32.610670 1145570 main.go:141] libmachine: (test-preload-251198) DBG | domain test-preload-251198 has defined IP address 192.168.39.133 and MAC address 52:54:00:9e:74:1f in network mk-test-preload-251198
	I0318 13:29:32.610781 1145570 main.go:141] libmachine: (test-preload-251198) DBG | Using SSH client type: external
	I0318 13:29:32.610849 1145570 main.go:141] libmachine: (test-preload-251198) DBG | Using SSH private key: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/test-preload-251198/id_rsa (-rw-------)
	I0318 13:29:32.610892 1145570 main.go:141] libmachine: (test-preload-251198) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.133 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/test-preload-251198/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 13:29:32.610916 1145570 main.go:141] libmachine: (test-preload-251198) DBG | About to run SSH command:
	I0318 13:29:32.610931 1145570 main.go:141] libmachine: (test-preload-251198) DBG | exit 0
	I0318 13:29:32.732561 1145570 main.go:141] libmachine: (test-preload-251198) DBG | SSH cmd err, output: <nil>: 
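
The WaitForSSH probe above shells out to the system ssh with the options logged at 13:29:32.610892; a trimmed-down manual replay of that probe (useful when debugging a VM that never becomes reachable) would be:

    # same key, user and address as the probe above; host-key checks relaxed the same way
    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes \
        -i /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/test-preload-251198/id_rsa \
        docker@192.168.39.133 'exit 0' && echo reachable
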
	I0318 13:29:32.732966 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetConfigRaw
	I0318 13:29:32.733616 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetIP
	I0318 13:29:32.736174 1145570 main.go:141] libmachine: (test-preload-251198) DBG | domain test-preload-251198 has defined MAC address 52:54:00:9e:74:1f in network mk-test-preload-251198
	I0318 13:29:32.736530 1145570 main.go:141] libmachine: (test-preload-251198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:74:1f", ip: ""} in network mk-test-preload-251198: {Iface:virbr1 ExpiryTime:2024-03-18 14:26:15 +0000 UTC Type:0 Mac:52:54:00:9e:74:1f Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:test-preload-251198 Clientid:01:52:54:00:9e:74:1f}
	I0318 13:29:32.736562 1145570 main.go:141] libmachine: (test-preload-251198) DBG | domain test-preload-251198 has defined IP address 192.168.39.133 and MAC address 52:54:00:9e:74:1f in network mk-test-preload-251198
	I0318 13:29:32.736854 1145570 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/test-preload-251198/config.json ...
	I0318 13:29:32.737042 1145570 machine.go:94] provisionDockerMachine start ...
	I0318 13:29:32.737063 1145570 main.go:141] libmachine: (test-preload-251198) Calling .DriverName
	I0318 13:29:32.737277 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetSSHHostname
	I0318 13:29:32.739627 1145570 main.go:141] libmachine: (test-preload-251198) DBG | domain test-preload-251198 has defined MAC address 52:54:00:9e:74:1f in network mk-test-preload-251198
	I0318 13:29:32.740153 1145570 main.go:141] libmachine: (test-preload-251198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:74:1f", ip: ""} in network mk-test-preload-251198: {Iface:virbr1 ExpiryTime:2024-03-18 14:26:15 +0000 UTC Type:0 Mac:52:54:00:9e:74:1f Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:test-preload-251198 Clientid:01:52:54:00:9e:74:1f}
	I0318 13:29:32.740186 1145570 main.go:141] libmachine: (test-preload-251198) DBG | domain test-preload-251198 has defined IP address 192.168.39.133 and MAC address 52:54:00:9e:74:1f in network mk-test-preload-251198
	I0318 13:29:32.740314 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetSSHPort
	I0318 13:29:32.740517 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetSSHKeyPath
	I0318 13:29:32.740690 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetSSHKeyPath
	I0318 13:29:32.740845 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetSSHUsername
	I0318 13:29:32.741032 1145570 main.go:141] libmachine: Using SSH client type: native
	I0318 13:29:32.741217 1145570 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I0318 13:29:32.741228 1145570 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 13:29:32.845258 1145570 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 13:29:32.845294 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetMachineName
	I0318 13:29:32.845584 1145570 buildroot.go:166] provisioning hostname "test-preload-251198"
	I0318 13:29:32.845610 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetMachineName
	I0318 13:29:32.845786 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetSSHHostname
	I0318 13:29:32.848400 1145570 main.go:141] libmachine: (test-preload-251198) DBG | domain test-preload-251198 has defined MAC address 52:54:00:9e:74:1f in network mk-test-preload-251198
	I0318 13:29:32.848757 1145570 main.go:141] libmachine: (test-preload-251198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:74:1f", ip: ""} in network mk-test-preload-251198: {Iface:virbr1 ExpiryTime:2024-03-18 14:26:15 +0000 UTC Type:0 Mac:52:54:00:9e:74:1f Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:test-preload-251198 Clientid:01:52:54:00:9e:74:1f}
	I0318 13:29:32.848802 1145570 main.go:141] libmachine: (test-preload-251198) DBG | domain test-preload-251198 has defined IP address 192.168.39.133 and MAC address 52:54:00:9e:74:1f in network mk-test-preload-251198
	I0318 13:29:32.848947 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetSSHPort
	I0318 13:29:32.849139 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetSSHKeyPath
	I0318 13:29:32.849322 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetSSHKeyPath
	I0318 13:29:32.849435 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetSSHUsername
	I0318 13:29:32.849607 1145570 main.go:141] libmachine: Using SSH client type: native
	I0318 13:29:32.849800 1145570 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I0318 13:29:32.849818 1145570 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-251198 && echo "test-preload-251198" | sudo tee /etc/hostname
	I0318 13:29:32.968063 1145570 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-251198
	
	I0318 13:29:32.968089 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetSSHHostname
	I0318 13:29:32.970845 1145570 main.go:141] libmachine: (test-preload-251198) DBG | domain test-preload-251198 has defined MAC address 52:54:00:9e:74:1f in network mk-test-preload-251198
	I0318 13:29:32.971215 1145570 main.go:141] libmachine: (test-preload-251198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:74:1f", ip: ""} in network mk-test-preload-251198: {Iface:virbr1 ExpiryTime:2024-03-18 14:26:15 +0000 UTC Type:0 Mac:52:54:00:9e:74:1f Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:test-preload-251198 Clientid:01:52:54:00:9e:74:1f}
	I0318 13:29:32.971238 1145570 main.go:141] libmachine: (test-preload-251198) DBG | domain test-preload-251198 has defined IP address 192.168.39.133 and MAC address 52:54:00:9e:74:1f in network mk-test-preload-251198
	I0318 13:29:32.971432 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetSSHPort
	I0318 13:29:32.971617 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetSSHKeyPath
	I0318 13:29:32.971771 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetSSHKeyPath
	I0318 13:29:32.971889 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetSSHUsername
	I0318 13:29:32.972067 1145570 main.go:141] libmachine: Using SSH client type: native
	I0318 13:29:32.972237 1145570 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I0318 13:29:32.972256 1145570 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-251198' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-251198/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-251198' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 13:29:33.083100 1145570 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:29:33.083132 1145570 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18429-1106816/.minikube CaCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18429-1106816/.minikube}
	I0318 13:29:33.083171 1145570 buildroot.go:174] setting up certificates
	I0318 13:29:33.083180 1145570 provision.go:84] configureAuth start
	I0318 13:29:33.083189 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetMachineName
	I0318 13:29:33.083503 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetIP
	I0318 13:29:33.086160 1145570 main.go:141] libmachine: (test-preload-251198) DBG | domain test-preload-251198 has defined MAC address 52:54:00:9e:74:1f in network mk-test-preload-251198
	I0318 13:29:33.086470 1145570 main.go:141] libmachine: (test-preload-251198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:74:1f", ip: ""} in network mk-test-preload-251198: {Iface:virbr1 ExpiryTime:2024-03-18 14:26:15 +0000 UTC Type:0 Mac:52:54:00:9e:74:1f Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:test-preload-251198 Clientid:01:52:54:00:9e:74:1f}
	I0318 13:29:33.086500 1145570 main.go:141] libmachine: (test-preload-251198) DBG | domain test-preload-251198 has defined IP address 192.168.39.133 and MAC address 52:54:00:9e:74:1f in network mk-test-preload-251198
	I0318 13:29:33.086651 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetSSHHostname
	I0318 13:29:33.088985 1145570 main.go:141] libmachine: (test-preload-251198) DBG | domain test-preload-251198 has defined MAC address 52:54:00:9e:74:1f in network mk-test-preload-251198
	I0318 13:29:33.089399 1145570 main.go:141] libmachine: (test-preload-251198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:74:1f", ip: ""} in network mk-test-preload-251198: {Iface:virbr1 ExpiryTime:2024-03-18 14:26:15 +0000 UTC Type:0 Mac:52:54:00:9e:74:1f Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:test-preload-251198 Clientid:01:52:54:00:9e:74:1f}
	I0318 13:29:33.089416 1145570 main.go:141] libmachine: (test-preload-251198) DBG | domain test-preload-251198 has defined IP address 192.168.39.133 and MAC address 52:54:00:9e:74:1f in network mk-test-preload-251198
	I0318 13:29:33.089586 1145570 provision.go:143] copyHostCerts
	I0318 13:29:33.089657 1145570 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem, removing ...
	I0318 13:29:33.089668 1145570 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 13:29:33.089731 1145570 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem (1078 bytes)
	I0318 13:29:33.089829 1145570 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem, removing ...
	I0318 13:29:33.089841 1145570 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 13:29:33.089865 1145570 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem (1123 bytes)
	I0318 13:29:33.089915 1145570 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem, removing ...
	I0318 13:29:33.089923 1145570 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 13:29:33.089942 1145570 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem (1679 bytes)
	I0318 13:29:33.089989 1145570 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem org=jenkins.test-preload-251198 san=[127.0.0.1 192.168.39.133 localhost minikube test-preload-251198]
	I0318 13:29:33.134729 1145570 provision.go:177] copyRemoteCerts
	I0318 13:29:33.134794 1145570 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 13:29:33.134820 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetSSHHostname
	I0318 13:29:33.137290 1145570 main.go:141] libmachine: (test-preload-251198) DBG | domain test-preload-251198 has defined MAC address 52:54:00:9e:74:1f in network mk-test-preload-251198
	I0318 13:29:33.137541 1145570 main.go:141] libmachine: (test-preload-251198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:74:1f", ip: ""} in network mk-test-preload-251198: {Iface:virbr1 ExpiryTime:2024-03-18 14:26:15 +0000 UTC Type:0 Mac:52:54:00:9e:74:1f Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:test-preload-251198 Clientid:01:52:54:00:9e:74:1f}
	I0318 13:29:33.137574 1145570 main.go:141] libmachine: (test-preload-251198) DBG | domain test-preload-251198 has defined IP address 192.168.39.133 and MAC address 52:54:00:9e:74:1f in network mk-test-preload-251198
	I0318 13:29:33.137743 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetSSHPort
	I0318 13:29:33.137932 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetSSHKeyPath
	I0318 13:29:33.138069 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetSSHUsername
	I0318 13:29:33.138210 1145570 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/test-preload-251198/id_rsa Username:docker}
	I0318 13:29:33.219773 1145570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 13:29:33.250617 1145570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0318 13:29:33.276991 1145570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 13:29:33.302651 1145570 provision.go:87] duration metric: took 219.457716ms to configureAuth
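
configureAuth above regenerates server.pem with the SANs listed at 13:29:33.089989 and copies it to /etc/docker/ on the guest. An illustrative way to confirm those SANs from inside the VM, assuming openssl is present on the guest image (this check is not part of the test itself):

    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
    # expect 127.0.0.1, 192.168.39.133, localhost, minikube and test-preload-251198 to be listed
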
	I0318 13:29:33.302680 1145570 buildroot.go:189] setting minikube options for container-runtime
	I0318 13:29:33.302889 1145570 config.go:182] Loaded profile config "test-preload-251198": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0318 13:29:33.302982 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetSSHHostname
	I0318 13:29:33.305510 1145570 main.go:141] libmachine: (test-preload-251198) DBG | domain test-preload-251198 has defined MAC address 52:54:00:9e:74:1f in network mk-test-preload-251198
	I0318 13:29:33.305876 1145570 main.go:141] libmachine: (test-preload-251198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:74:1f", ip: ""} in network mk-test-preload-251198: {Iface:virbr1 ExpiryTime:2024-03-18 14:26:15 +0000 UTC Type:0 Mac:52:54:00:9e:74:1f Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:test-preload-251198 Clientid:01:52:54:00:9e:74:1f}
	I0318 13:29:33.305910 1145570 main.go:141] libmachine: (test-preload-251198) DBG | domain test-preload-251198 has defined IP address 192.168.39.133 and MAC address 52:54:00:9e:74:1f in network mk-test-preload-251198
	I0318 13:29:33.306075 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetSSHPort
	I0318 13:29:33.306307 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetSSHKeyPath
	I0318 13:29:33.306510 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetSSHKeyPath
	I0318 13:29:33.306679 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetSSHUsername
	I0318 13:29:33.306859 1145570 main.go:141] libmachine: Using SSH client type: native
	I0318 13:29:33.307041 1145570 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I0318 13:29:33.307063 1145570 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 13:29:33.577628 1145570 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 13:29:33.577661 1145570 machine.go:97] duration metric: took 840.605164ms to provisionDockerMachine
	I0318 13:29:33.577673 1145570 start.go:293] postStartSetup for "test-preload-251198" (driver="kvm2")
	I0318 13:29:33.577684 1145570 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 13:29:33.577702 1145570 main.go:141] libmachine: (test-preload-251198) Calling .DriverName
	I0318 13:29:33.578099 1145570 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 13:29:33.578136 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetSSHHostname
	I0318 13:29:33.580573 1145570 main.go:141] libmachine: (test-preload-251198) DBG | domain test-preload-251198 has defined MAC address 52:54:00:9e:74:1f in network mk-test-preload-251198
	I0318 13:29:33.580968 1145570 main.go:141] libmachine: (test-preload-251198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:74:1f", ip: ""} in network mk-test-preload-251198: {Iface:virbr1 ExpiryTime:2024-03-18 14:26:15 +0000 UTC Type:0 Mac:52:54:00:9e:74:1f Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:test-preload-251198 Clientid:01:52:54:00:9e:74:1f}
	I0318 13:29:33.581006 1145570 main.go:141] libmachine: (test-preload-251198) DBG | domain test-preload-251198 has defined IP address 192.168.39.133 and MAC address 52:54:00:9e:74:1f in network mk-test-preload-251198
	I0318 13:29:33.581182 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetSSHPort
	I0318 13:29:33.581369 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetSSHKeyPath
	I0318 13:29:33.581544 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetSSHUsername
	I0318 13:29:33.581689 1145570 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/test-preload-251198/id_rsa Username:docker}
	I0318 13:29:33.664201 1145570 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 13:29:33.669218 1145570 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 13:29:33.669242 1145570 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/addons for local assets ...
	I0318 13:29:33.669314 1145570 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/files for local assets ...
	I0318 13:29:33.669395 1145570 filesync.go:149] local asset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> 11141362.pem in /etc/ssl/certs
	I0318 13:29:33.669497 1145570 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 13:29:33.679342 1145570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:29:33.706034 1145570 start.go:296] duration metric: took 128.345882ms for postStartSetup
	I0318 13:29:33.706077 1145570 fix.go:56] duration metric: took 20.2220566s for fixHost
	I0318 13:29:33.706105 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetSSHHostname
	I0318 13:29:33.708649 1145570 main.go:141] libmachine: (test-preload-251198) DBG | domain test-preload-251198 has defined MAC address 52:54:00:9e:74:1f in network mk-test-preload-251198
	I0318 13:29:33.708996 1145570 main.go:141] libmachine: (test-preload-251198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:74:1f", ip: ""} in network mk-test-preload-251198: {Iface:virbr1 ExpiryTime:2024-03-18 14:26:15 +0000 UTC Type:0 Mac:52:54:00:9e:74:1f Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:test-preload-251198 Clientid:01:52:54:00:9e:74:1f}
	I0318 13:29:33.709032 1145570 main.go:141] libmachine: (test-preload-251198) DBG | domain test-preload-251198 has defined IP address 192.168.39.133 and MAC address 52:54:00:9e:74:1f in network mk-test-preload-251198
	I0318 13:29:33.709217 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetSSHPort
	I0318 13:29:33.709425 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetSSHKeyPath
	I0318 13:29:33.709561 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetSSHKeyPath
	I0318 13:29:33.709690 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetSSHUsername
	I0318 13:29:33.709827 1145570 main.go:141] libmachine: Using SSH client type: native
	I0318 13:29:33.709999 1145570 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I0318 13:29:33.710009 1145570 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 13:29:33.813807 1145570 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710768573.794791515
	
	I0318 13:29:33.813833 1145570 fix.go:216] guest clock: 1710768573.794791515
	I0318 13:29:33.813841 1145570 fix.go:229] Guest: 2024-03-18 13:29:33.794791515 +0000 UTC Remote: 2024-03-18 13:29:33.706081868 +0000 UTC m=+33.572093594 (delta=88.709647ms)
	I0318 13:29:33.813911 1145570 fix.go:200] guest clock delta is within tolerance: 88.709647ms
	I0318 13:29:33.813919 1145570 start.go:83] releasing machines lock for "test-preload-251198", held for 20.32991906s
	I0318 13:29:33.813947 1145570 main.go:141] libmachine: (test-preload-251198) Calling .DriverName
	I0318 13:29:33.814241 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetIP
	I0318 13:29:33.816692 1145570 main.go:141] libmachine: (test-preload-251198) DBG | domain test-preload-251198 has defined MAC address 52:54:00:9e:74:1f in network mk-test-preload-251198
	I0318 13:29:33.817074 1145570 main.go:141] libmachine: (test-preload-251198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:74:1f", ip: ""} in network mk-test-preload-251198: {Iface:virbr1 ExpiryTime:2024-03-18 14:26:15 +0000 UTC Type:0 Mac:52:54:00:9e:74:1f Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:test-preload-251198 Clientid:01:52:54:00:9e:74:1f}
	I0318 13:29:33.817104 1145570 main.go:141] libmachine: (test-preload-251198) DBG | domain test-preload-251198 has defined IP address 192.168.39.133 and MAC address 52:54:00:9e:74:1f in network mk-test-preload-251198
	I0318 13:29:33.817238 1145570 main.go:141] libmachine: (test-preload-251198) Calling .DriverName
	I0318 13:29:33.817704 1145570 main.go:141] libmachine: (test-preload-251198) Calling .DriverName
	I0318 13:29:33.817880 1145570 main.go:141] libmachine: (test-preload-251198) Calling .DriverName
	I0318 13:29:33.817994 1145570 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 13:29:33.818059 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetSSHHostname
	I0318 13:29:33.818099 1145570 ssh_runner.go:195] Run: cat /version.json
	I0318 13:29:33.818123 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetSSHHostname
	I0318 13:29:33.820429 1145570 main.go:141] libmachine: (test-preload-251198) DBG | domain test-preload-251198 has defined MAC address 52:54:00:9e:74:1f in network mk-test-preload-251198
	I0318 13:29:33.820546 1145570 main.go:141] libmachine: (test-preload-251198) DBG | domain test-preload-251198 has defined MAC address 52:54:00:9e:74:1f in network mk-test-preload-251198
	I0318 13:29:33.820770 1145570 main.go:141] libmachine: (test-preload-251198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:74:1f", ip: ""} in network mk-test-preload-251198: {Iface:virbr1 ExpiryTime:2024-03-18 14:26:15 +0000 UTC Type:0 Mac:52:54:00:9e:74:1f Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:test-preload-251198 Clientid:01:52:54:00:9e:74:1f}
	I0318 13:29:33.820810 1145570 main.go:141] libmachine: (test-preload-251198) DBG | domain test-preload-251198 has defined IP address 192.168.39.133 and MAC address 52:54:00:9e:74:1f in network mk-test-preload-251198
	I0318 13:29:33.820929 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetSSHPort
	I0318 13:29:33.821038 1145570 main.go:141] libmachine: (test-preload-251198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:74:1f", ip: ""} in network mk-test-preload-251198: {Iface:virbr1 ExpiryTime:2024-03-18 14:26:15 +0000 UTC Type:0 Mac:52:54:00:9e:74:1f Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:test-preload-251198 Clientid:01:52:54:00:9e:74:1f}
	I0318 13:29:33.821071 1145570 main.go:141] libmachine: (test-preload-251198) DBG | domain test-preload-251198 has defined IP address 192.168.39.133 and MAC address 52:54:00:9e:74:1f in network mk-test-preload-251198
	I0318 13:29:33.821115 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetSSHKeyPath
	I0318 13:29:33.821206 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetSSHPort
	I0318 13:29:33.821297 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetSSHUsername
	I0318 13:29:33.821366 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetSSHKeyPath
	I0318 13:29:33.821447 1145570 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/test-preload-251198/id_rsa Username:docker}
	I0318 13:29:33.821496 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetSSHUsername
	I0318 13:29:33.821609 1145570 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/test-preload-251198/id_rsa Username:docker}
	I0318 13:29:33.898539 1145570 ssh_runner.go:195] Run: systemctl --version
	I0318 13:29:33.921118 1145570 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 13:29:34.073551 1145570 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 13:29:34.080876 1145570 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 13:29:34.080953 1145570 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 13:29:34.100408 1145570 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 13:29:34.100443 1145570 start.go:494] detecting cgroup driver to use...
	I0318 13:29:34.100534 1145570 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 13:29:34.119032 1145570 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:29:34.134541 1145570 docker.go:217] disabling cri-docker service (if available) ...
	I0318 13:29:34.134608 1145570 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 13:29:34.149122 1145570 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 13:29:34.164881 1145570 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 13:29:34.282982 1145570 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 13:29:34.429843 1145570 docker.go:233] disabling docker service ...
	I0318 13:29:34.429991 1145570 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 13:29:34.446323 1145570 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 13:29:34.460290 1145570 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 13:29:34.592117 1145570 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 13:29:34.715938 1145570 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 13:29:34.732653 1145570 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:29:34.753111 1145570 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0318 13:29:34.753187 1145570 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:29:34.765042 1145570 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 13:29:34.765117 1145570 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:29:34.777141 1145570 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:29:34.789128 1145570 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
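
Assuming the stock 02-crio.conf drop-in already carries pause_image and cgroup_manager lines for the sed edits above to rewrite, the rewritten keys can be spot-checked inside the VM with something like:

    grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.7"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
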
	I0318 13:29:34.801228 1145570 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 13:29:34.813823 1145570 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 13:29:34.824731 1145570 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 13:29:34.824793 1145570 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 13:29:34.839885 1145570 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 13:29:34.851417 1145570 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:29:34.974331 1145570 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 13:29:35.125801 1145570 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 13:29:35.125892 1145570 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 13:29:35.131642 1145570 start.go:562] Will wait 60s for crictl version
	I0318 13:29:35.131713 1145570 ssh_runner.go:195] Run: which crictl
	I0318 13:29:35.136178 1145570 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 13:29:35.182541 1145570 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 13:29:35.182638 1145570 ssh_runner.go:195] Run: crio --version
	I0318 13:29:35.213088 1145570 ssh_runner.go:195] Run: crio --version
	I0318 13:29:35.246139 1145570 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I0318 13:29:35.247501 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetIP
	I0318 13:29:35.250277 1145570 main.go:141] libmachine: (test-preload-251198) DBG | domain test-preload-251198 has defined MAC address 52:54:00:9e:74:1f in network mk-test-preload-251198
	I0318 13:29:35.250628 1145570 main.go:141] libmachine: (test-preload-251198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:74:1f", ip: ""} in network mk-test-preload-251198: {Iface:virbr1 ExpiryTime:2024-03-18 14:26:15 +0000 UTC Type:0 Mac:52:54:00:9e:74:1f Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:test-preload-251198 Clientid:01:52:54:00:9e:74:1f}
	I0318 13:29:35.250653 1145570 main.go:141] libmachine: (test-preload-251198) DBG | domain test-preload-251198 has defined IP address 192.168.39.133 and MAC address 52:54:00:9e:74:1f in network mk-test-preload-251198
	I0318 13:29:35.250834 1145570 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0318 13:29:35.255462 1145570 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
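The bash one-liner above drops any existing host.minikube.internal entry from /etc/hosts and appends the current mapping before copying the temp file back. A small sketch of the same idempotent rewrite, assuming local file access:

    package main

    import (
    	"log"
    	"os"
    	"strings"
    )

    func main() {
    	const entry = "192.168.39.1\thost.minikube.internal"
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		log.Fatal(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		// Drop any stale host.minikube.internal mapping before re-adding it.
    		if strings.HasSuffix(line, "\thost.minikube.internal") {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, entry, "")
    	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")), 0o644); err != nil {
    		log.Fatal(err)
    	}
    }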
	I0318 13:29:35.270075 1145570 kubeadm.go:877] updating cluster {Name:test-preload-251198 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.24.4 ClusterName:test-preload-251198 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.133 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker
MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 13:29:35.270197 1145570 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0318 13:29:35.270259 1145570 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:29:35.309075 1145570 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0318 13:29:35.309145 1145570 ssh_runner.go:195] Run: which lz4
	I0318 13:29:35.313839 1145570 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 13:29:35.318645 1145570 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 13:29:35.318673 1145570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0318 13:29:37.239066 1145570 crio.go:444] duration metric: took 1.925254648s to copy over tarball
	I0318 13:29:37.239139 1145570 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 13:29:39.935493 1145570 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.696321227s)
	I0318 13:29:39.935531 1145570 crio.go:451] duration metric: took 2.696430561s to extract the tarball
	I0318 13:29:39.935539 1145570 ssh_runner.go:146] rm: /preloaded.tar.lz4
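Since no preload was found on the node, the ~459 MB preloaded-images tarball is copied over and unpacked into /var with extended attributes preserved, then deleted. A minimal sketch of the extraction step, assuming tar and lz4 are available on the host as they are in the minikube ISO:

    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	// Same flags as the log: keep security xattrs so file capabilities
    	// on the preloaded binaries survive extraction.
    	cmd := exec.Command("sudo", "tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	if err := cmd.Run(); err != nil {
    		log.Fatalf("extracting preload: %v", err)
    	}
    	// The tarball is removed afterwards to reclaim the disk space.
    	if err := exec.Command("sudo", "rm", "-f", "/preloaded.tar.lz4").Run(); err != nil {
    		log.Print(err)
    	}
    }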
	I0318 13:29:39.977531 1145570 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:29:40.025098 1145570 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0318 13:29:40.025128 1145570 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 13:29:40.025261 1145570 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0318 13:29:40.025290 1145570 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0318 13:29:40.025296 1145570 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0318 13:29:40.025303 1145570 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0318 13:29:40.025346 1145570 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0318 13:29:40.025261 1145570 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0318 13:29:40.025264 1145570 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0318 13:29:40.025261 1145570 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:29:40.026927 1145570 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0318 13:29:40.026943 1145570 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0318 13:29:40.026953 1145570 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0318 13:29:40.026947 1145570 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:29:40.026977 1145570 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0318 13:29:40.026927 1145570 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0318 13:29:40.026927 1145570 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0318 13:29:40.026927 1145570 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0318 13:29:40.206099 1145570 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0318 13:29:40.241581 1145570 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0318 13:29:40.257744 1145570 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0318 13:29:40.257784 1145570 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0318 13:29:40.257834 1145570 ssh_runner.go:195] Run: which crictl
	I0318 13:29:40.292175 1145570 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0318 13:29:40.295565 1145570 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0318 13:29:40.295773 1145570 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0318 13:29:40.295816 1145570 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0318 13:29:40.295860 1145570 ssh_runner.go:195] Run: which crictl
	I0318 13:29:40.298067 1145570 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0318 13:29:40.306600 1145570 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0318 13:29:40.326877 1145570 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0318 13:29:40.338977 1145570 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0318 13:29:40.417547 1145570 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0318 13:29:40.417641 1145570 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0318 13:29:40.417673 1145570 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0318 13:29:40.417770 1145570 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0318 13:29:40.417814 1145570 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0318 13:29:40.417864 1145570 ssh_runner.go:195] Run: which crictl
	I0318 13:29:40.428432 1145570 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0318 13:29:40.428485 1145570 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0318 13:29:40.428537 1145570 ssh_runner.go:195] Run: which crictl
	I0318 13:29:40.494794 1145570 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0318 13:29:40.494857 1145570 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0318 13:29:40.494930 1145570 ssh_runner.go:195] Run: which crictl
	I0318 13:29:40.511845 1145570 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0318 13:29:40.511898 1145570 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0318 13:29:40.511950 1145570 ssh_runner.go:195] Run: which crictl
	I0318 13:29:40.511950 1145570 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0318 13:29:40.511987 1145570 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0318 13:29:40.512035 1145570 ssh_runner.go:195] Run: which crictl
	I0318 13:29:40.512661 1145570 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0318 13:29:40.512752 1145570 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0318 13:29:40.517370 1145570 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0318 13:29:40.517393 1145570 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0318 13:29:40.517405 1145570 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0318 13:29:40.517431 1145570 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0318 13:29:40.517435 1145570 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0318 13:29:40.517502 1145570 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0318 13:29:40.517446 1145570 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0318 13:29:40.519787 1145570 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0318 13:29:40.526043 1145570 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0318 13:29:40.655245 1145570 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0318 13:29:40.655360 1145570 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0318 13:29:40.659440 1145570 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0318 13:29:40.659551 1145570 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0318 13:29:40.662196 1145570 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0318 13:29:40.662276 1145570 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0318 13:29:41.022810 1145570 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:29:42.993015 1145570 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4: (2.475475067s)
	I0318 13:29:42.993058 1145570 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6: (2.475603297s)
	I0318 13:29:42.993082 1145570 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0318 13:29:42.993081 1145570 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.970243824s)
	I0318 13:29:42.993057 1145570 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4: (2.473237382s)
	I0318 13:29:42.993106 1145570 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0318 13:29:42.993082 1145570 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: (2.330786715s)
	I0318 13:29:42.993159 1145570 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0318 13:29:42.993106 1145570 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0318 13:29:42.993167 1145570 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0318 13:29:42.993061 1145570 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4: (2.33767999s)
	I0318 13:29:42.993227 1145570 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0318 13:29:42.993128 1145570 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: (2.333479596s)
	I0318 13:29:42.993238 1145570 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0318 13:29:42.993121 1145570 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0318 13:29:42.993257 1145570 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0318 13:29:42.993334 1145570 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0318 13:29:43.845027 1145570 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0318 13:29:43.845077 1145570 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.7
	I0318 13:29:43.845125 1145570 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0318 13:29:43.845132 1145570 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0318 13:29:43.845172 1145570 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0318 13:29:43.994312 1145570 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0318 13:29:43.994378 1145570 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0318 13:29:43.994437 1145570 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0318 13:29:44.742708 1145570 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0318 13:29:44.742768 1145570 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0318 13:29:44.742826 1145570 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0318 13:29:46.799963 1145570 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.05710998s)
	I0318 13:29:46.799991 1145570 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0318 13:29:46.800016 1145570 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0318 13:29:46.800091 1145570 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0318 13:29:47.553945 1145570 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0318 13:29:47.554006 1145570 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0318 13:29:47.554066 1145570 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0318 13:29:48.008966 1145570 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0318 13:29:48.009026 1145570 cache_images.go:123] Successfully loaded all cached images
	I0318 13:29:48.009034 1145570 cache_images.go:92] duration metric: took 7.983889686s to LoadCachedImages
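Because the extracted preload did not contain the expected v1.24.4 images, each image is loaded individually from the host-side cache: podman image inspect checks whether it is already present, crictl rmi clears any wrong-hash copy, the saved tarball is placed under /var/lib/minikube/images, and podman load imports it. A condensed sketch of that per-image loop, assuming the tarballs are already on the node (paths modelled on the log):

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    )

    // loadCached makes sure one image from the local cache is present in cri-o's
    // storage, following the inspect -> rmi -> podman load sequence from the log.
    func loadCached(image, tarball string) error {
    	// Already present? Then there is nothing to do.
    	if err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run(); err == nil {
    		return nil
    	}
    	// Remove any stale copy known to the CRI before re-importing.
    	_ = exec.Command("sudo", "crictl", "rmi", image).Run()
    	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("podman load %s: %v (%s)", tarball, err, out)
    	}
    	return nil
    }

    func main() {
    	images := map[string]string{
    		"registry.k8s.io/coredns/coredns:v1.8.6": "/var/lib/minikube/images/coredns_v1.8.6",
    		"registry.k8s.io/etcd:3.5.3-0":           "/var/lib/minikube/images/etcd_3.5.3-0",
    		"registry.k8s.io/pause:3.7":              "/var/lib/minikube/images/pause_3.7",
    	}
    	for img, tar := range images {
    		if err := loadCached(img, tar); err != nil {
    			log.Print(err)
    		}
    	}
    }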
	I0318 13:29:48.009049 1145570 kubeadm.go:928] updating node { 192.168.39.133 8443 v1.24.4 crio true true} ...
	I0318 13:29:48.009215 1145570 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-251198 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.133
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-251198 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
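The kubelet systemd drop-in above is generated from the node's settings (binary version, runtime endpoint, hostname override, node IP). A small text/template sketch that produces an equivalent unit; the field values are the ones visible in this run:

    package main

    import (
    	"os"
    	"text/template"
    )

    type kubeletOpts struct {
    	Version, Hostname, NodeIP string
    }

    const unit = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
    	t := template.Must(template.New("kubelet").Parse(unit))
    	// Values taken from the run above.
    	_ = t.Execute(os.Stdout, kubeletOpts{Version: "v1.24.4", Hostname: "test-preload-251198", NodeIP: "192.168.39.133"})
    }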
	I0318 13:29:48.009313 1145570 ssh_runner.go:195] Run: crio config
	I0318 13:29:48.057652 1145570 cni.go:84] Creating CNI manager for ""
	I0318 13:29:48.057708 1145570 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:29:48.057728 1145570 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 13:29:48.057764 1145570 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.133 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-251198 NodeName:test-preload-251198 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.133"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.133 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 13:29:48.057933 1145570 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.133
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-251198"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.133
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.133"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 13:29:48.058031 1145570 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0318 13:29:48.068786 1145570 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 13:29:48.068884 1145570 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 13:29:48.078933 1145570 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0318 13:29:48.098307 1145570 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 13:29:48.117267 1145570 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
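The kubeadm.yaml.new written above is a multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration, as dumped earlier). A stdlib-only sketch that splits such a file on document separators and reports each kind, useful for eyeballing what is about to be fed to kubeadm; the file path is taken from the log:

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"regexp"
    	"strings"
    )

    func main() {
    	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		log.Fatal(err)
    	}
    	kindRe := regexp.MustCompile(`(?m)^kind:\s*(\S+)`)
    	// kubeadm configs are separated by standalone "---" lines.
    	for i, doc := range strings.Split(string(data), "\n---\n") {
    		kind := "unknown"
    		if m := kindRe.FindStringSubmatch(doc); m != nil {
    			kind = m[1]
    		}
    		fmt.Printf("document %d: %s (%d bytes)\n", i, kind, len(doc))
    	}
    }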
	I0318 13:29:48.135832 1145570 ssh_runner.go:195] Run: grep 192.168.39.133	control-plane.minikube.internal$ /etc/hosts
	I0318 13:29:48.140101 1145570 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.133	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:29:48.153494 1145570 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:29:48.269948 1145570 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:29:48.288704 1145570 certs.go:68] Setting up /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/test-preload-251198 for IP: 192.168.39.133
	I0318 13:29:48.288734 1145570 certs.go:194] generating shared ca certs ...
	I0318 13:29:48.288757 1145570 certs.go:226] acquiring lock for ca certs: {Name:mka24dbce78b8549c3bcfe7842863cce2dcda207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:29:48.288944 1145570 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key
	I0318 13:29:48.289003 1145570 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key
	I0318 13:29:48.289017 1145570 certs.go:256] generating profile certs ...
	I0318 13:29:48.289127 1145570 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/test-preload-251198/client.key
	I0318 13:29:48.289216 1145570 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/test-preload-251198/apiserver.key.602bb8d2
	I0318 13:29:48.289278 1145570 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/test-preload-251198/proxy-client.key
	I0318 13:29:48.289463 1145570 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem (1338 bytes)
	W0318 13:29:48.289565 1145570 certs.go:480] ignoring /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136_empty.pem, impossibly tiny 0 bytes
	I0318 13:29:48.289584 1145570 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 13:29:48.289613 1145570 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem (1078 bytes)
	I0318 13:29:48.289636 1145570 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem (1123 bytes)
	I0318 13:29:48.289661 1145570 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem (1679 bytes)
	I0318 13:29:48.289708 1145570 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:29:48.290488 1145570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 13:29:48.346147 1145570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 13:29:48.384913 1145570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 13:29:48.436003 1145570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 13:29:48.470326 1145570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/test-preload-251198/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0318 13:29:48.510398 1145570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/test-preload-251198/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 13:29:48.537412 1145570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/test-preload-251198/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 13:29:48.563844 1145570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/test-preload-251198/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 13:29:48.590459 1145570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 13:29:48.616427 1145570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem --> /usr/share/ca-certificates/1114136.pem (1338 bytes)
	I0318 13:29:48.643739 1145570 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /usr/share/ca-certificates/11141362.pem (1708 bytes)
	I0318 13:29:48.670147 1145570 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 13:29:48.688791 1145570 ssh_runner.go:195] Run: openssl version
	I0318 13:29:48.697182 1145570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 13:29:48.708970 1145570 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:29:48.714254 1145570 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:29:48.714321 1145570 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:29:48.720615 1145570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 13:29:48.732205 1145570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1114136.pem && ln -fs /usr/share/ca-certificates/1114136.pem /etc/ssl/certs/1114136.pem"
	I0318 13:29:48.744802 1145570 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1114136.pem
	I0318 13:29:48.750063 1145570 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:26 /usr/share/ca-certificates/1114136.pem
	I0318 13:29:48.750124 1145570 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1114136.pem
	I0318 13:29:48.756386 1145570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1114136.pem /etc/ssl/certs/51391683.0"
	I0318 13:29:48.767811 1145570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11141362.pem && ln -fs /usr/share/ca-certificates/11141362.pem /etc/ssl/certs/11141362.pem"
	I0318 13:29:48.779424 1145570 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11141362.pem
	I0318 13:29:48.784761 1145570 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:26 /usr/share/ca-certificates/11141362.pem
	I0318 13:29:48.784818 1145570 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11141362.pem
	I0318 13:29:48.791053 1145570 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11141362.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 13:29:48.802482 1145570 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:29:48.807497 1145570 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 13:29:48.813909 1145570 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 13:29:48.820218 1145570 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 13:29:48.826626 1145570 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 13:29:48.832903 1145570 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 13:29:48.839330 1145570 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
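Each openssl -checkend 86400 call above verifies that an existing control-plane certificate is still valid for at least 24 hours before it is reused. The same check with crypto/x509, as a minimal sketch assuming a PEM-encoded certificate file:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in the PEM file
    // expires within the given window (24h mirrors `openssl -checkend 86400`).
    func expiresWithin(path string, window time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("needs regeneration:", soon)
    }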
	I0318 13:29:48.845643 1145570 kubeadm.go:391] StartCluster: {Name:test-preload-251198 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
24.4 ClusterName:test-preload-251198 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.133 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:29:48.845752 1145570 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 13:29:48.845810 1145570 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:29:48.883600 1145570 cri.go:89] found id: ""
	I0318 13:29:48.883680 1145570 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 13:29:48.894877 1145570 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 13:29:48.894897 1145570 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 13:29:48.894903 1145570 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 13:29:48.894964 1145570 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 13:29:48.905290 1145570 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:29:48.905727 1145570 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-251198" does not appear in /home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 13:29:48.905863 1145570 kubeconfig.go:62] /home/jenkins/minikube-integration/18429-1106816/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-251198" cluster setting kubeconfig missing "test-preload-251198" context setting]
	I0318 13:29:48.906170 1145570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/kubeconfig: {Name:mk9c139f2702214315ee08dd7c5d02f739047458 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:29:48.906787 1145570 kapi.go:59] client config for test-preload-251198: &rest.Config{Host:"https://192.168.39.133:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/test-preload-251198/client.crt", KeyFile:"/home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/test-preload-251198/client.key", CAFile:"/home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]
uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c57de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 13:29:48.907448 1145570 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 13:29:48.917428 1145570 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.133
	I0318 13:29:48.917456 1145570 kubeadm.go:1154] stopping kube-system containers ...
	I0318 13:29:48.917466 1145570 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 13:29:48.917510 1145570 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:29:48.954635 1145570 cri.go:89] found id: ""
	I0318 13:29:48.954712 1145570 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 13:29:48.971880 1145570 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:29:48.982172 1145570 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:29:48.982188 1145570 kubeadm.go:156] found existing configuration files:
	
	I0318 13:29:48.982228 1145570 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:29:48.993139 1145570 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:29:48.993192 1145570 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:29:49.003272 1145570 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:29:49.012794 1145570 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:29:49.012838 1145570 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:29:49.022439 1145570 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:29:49.031711 1145570 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:29:49.031751 1145570 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:29:49.041413 1145570 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:29:49.051284 1145570 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:29:49.051333 1145570 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
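Each admin/kubelet/controller-manager/scheduler kubeconfig is kept only if it already points at https://control-plane.minikube.internal:8443; otherwise it is removed so the following kubeadm init phases regenerate it. A compact sketch of that check-and-remove loop, assuming local file access:

    package main

    import (
    	"log"
    	"os"
    	"strings"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8443"
    	for _, conf := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		data, err := os.ReadFile(conf)
    		if err == nil && strings.Contains(string(data), endpoint) {
    			continue // already points at the expected control-plane endpoint
    		}
    		// Missing or pointing elsewhere: remove it so kubeadm regenerates it.
    		if err := os.Remove(conf); err != nil && !os.IsNotExist(err) {
    			log.Print(err)
    		}
    	}
    }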
	I0318 13:29:49.061176 1145570 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:29:49.070788 1145570 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:29:49.163946 1145570 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:29:49.742057 1145570 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:29:50.030811 1145570 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:29:50.095234 1145570 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:29:50.178425 1145570 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:29:50.178523 1145570 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:29:50.678672 1145570 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:29:51.178917 1145570 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:29:51.200897 1145570 api_server.go:72] duration metric: took 1.022472936s to wait for apiserver process to appear ...
	I0318 13:29:51.200925 1145570 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:29:51.200945 1145570 api_server.go:253] Checking apiserver healthz at https://192.168.39.133:8443/healthz ...
	I0318 13:29:51.201408 1145570 api_server.go:269] stopped: https://192.168.39.133:8443/healthz: Get "https://192.168.39.133:8443/healthz": dial tcp 192.168.39.133:8443: connect: connection refused
	I0318 13:29:51.700987 1145570 api_server.go:253] Checking apiserver healthz at https://192.168.39.133:8443/healthz ...
	I0318 13:29:54.867364 1145570 api_server.go:279] https://192.168.39.133:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 13:29:54.867456 1145570 api_server.go:103] status: https://192.168.39.133:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 13:29:54.867484 1145570 api_server.go:253] Checking apiserver healthz at https://192.168.39.133:8443/healthz ...
	I0318 13:29:54.902435 1145570 api_server.go:279] https://192.168.39.133:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 13:29:54.902467 1145570 api_server.go:103] status: https://192.168.39.133:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 13:29:55.202041 1145570 api_server.go:253] Checking apiserver healthz at https://192.168.39.133:8443/healthz ...
	I0318 13:29:55.207860 1145570 api_server.go:279] https://192.168.39.133:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0318 13:29:55.207892 1145570 api_server.go:103] status: https://192.168.39.133:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0318 13:29:55.701312 1145570 api_server.go:253] Checking apiserver healthz at https://192.168.39.133:8443/healthz ...
	I0318 13:29:55.707001 1145570 api_server.go:279] https://192.168.39.133:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0318 13:29:55.707027 1145570 api_server.go:103] status: https://192.168.39.133:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0318 13:29:56.201690 1145570 api_server.go:253] Checking apiserver healthz at https://192.168.39.133:8443/healthz ...
	I0318 13:29:56.210862 1145570 api_server.go:279] https://192.168.39.133:8443/healthz returned 200:
	ok
	I0318 13:29:56.223822 1145570 api_server.go:141] control plane version: v1.24.4
	I0318 13:29:56.223861 1145570 api_server.go:131] duration metric: took 5.022927541s to wait for apiserver health ...
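The restart loop above polls https://192.168.39.133:8443/healthz roughly every 500ms, treating 403 (anonymous access) and 500 (post-start hooks still running) as "not ready yet" until it finally gets 200. A stripped-down sketch of such a poll; note the real check authenticates with the cluster's client certificates, whereas this sketch skips TLS verification purely to stay short:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"log"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		// The real client presents client certs; skipping verification here
    		// keeps the sketch short and is not something to do in production.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.39.133:8443/healthz")
    		if err == nil {
    			status := resp.StatusCode
    			resp.Body.Close()
    			if status == http.StatusOK {
    				fmt.Println("apiserver healthy")
    				return
    			}
    			log.Printf("healthz returned %d, retrying", status)
    		} else {
    			log.Printf("healthz: %v, retrying", err)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	log.Fatal("apiserver never became healthy")
    }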
	I0318 13:29:56.223874 1145570 cni.go:84] Creating CNI manager for ""
	I0318 13:29:56.223883 1145570 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:29:56.225546 1145570 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 13:29:56.226863 1145570 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 13:29:56.248621 1145570 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
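Having picked the bridge CNI for the kvm2 + crio combination, a conflist is written into /etc/cni/net.d. The exact file contents are not shown in the log; the snippet below writes a typical bridge + portmap conflist for the 10.244.0.0/16 pod CIDR seen earlier, as an illustrative assumption rather than the literal template used here:

    package main

    import (
    	"log"
    	"os"
    )

    // An illustrative bridge CNI configuration for the pod CIDR used above;
    // the real 1-k8s.conflist may differ in detail.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
    		log.Fatal(err)
    	}
    }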
	I0318 13:29:56.273712 1145570 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 13:29:56.296545 1145570 system_pods.go:59] 7 kube-system pods found
	I0318 13:29:56.296578 1145570 system_pods.go:61] "coredns-6d4b75cb6d-hkgt5" [64fb889b-e8fa-4269-9416-cb7520d79b8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 13:29:56.296583 1145570 system_pods.go:61] "etcd-test-preload-251198" [95b06b02-58a6-490a-81e3-0aef7795c203] Running
	I0318 13:29:56.296589 1145570 system_pods.go:61] "kube-apiserver-test-preload-251198" [e8996426-ff09-4744-8160-daf3e1c9b604] Running
	I0318 13:29:56.296593 1145570 system_pods.go:61] "kube-controller-manager-test-preload-251198" [5dc8eb35-3704-42da-b606-533e4620e5ab] Running
	I0318 13:29:56.296597 1145570 system_pods.go:61] "kube-proxy-tt4vj" [e6c53599-2148-4f78-9504-7639078fa8bf] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0318 13:29:56.296601 1145570 system_pods.go:61] "kube-scheduler-test-preload-251198" [baae5a2c-cd1b-4a4a-87fd-2b5a74c96eb3] Running
	I0318 13:29:56.296606 1145570 system_pods.go:61] "storage-provisioner" [c5a3274c-204b-4051-9d0e-57db1f3a9c6f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 13:29:56.296612 1145570 system_pods.go:74] duration metric: took 22.88177ms to wait for pod list to return data ...
	I0318 13:29:56.296621 1145570 node_conditions.go:102] verifying NodePressure condition ...
	I0318 13:29:56.299837 1145570 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:29:56.299871 1145570 node_conditions.go:123] node cpu capacity is 2
	I0318 13:29:56.299882 1145570 node_conditions.go:105] duration metric: took 3.256666ms to run NodePressure ...
	I0318 13:29:56.299901 1145570 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:29:56.660092 1145570 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 13:29:56.669172 1145570 retry.go:31] will retry after 346.790431ms: kubelet not initialised
	I0318 13:29:57.024743 1145570 kubeadm.go:733] kubelet initialised
	I0318 13:29:57.024765 1145570 kubeadm.go:734] duration metric: took 364.64001ms waiting for restarted kubelet to initialise ...
	I0318 13:29:57.024773 1145570 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:29:57.029732 1145570 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-hkgt5" in "kube-system" namespace to be "Ready" ...
	I0318 13:29:57.034945 1145570 pod_ready.go:97] node "test-preload-251198" hosting pod "coredns-6d4b75cb6d-hkgt5" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-251198" has status "Ready":"False"
	I0318 13:29:57.034967 1145570 pod_ready.go:81] duration metric: took 5.213139ms for pod "coredns-6d4b75cb6d-hkgt5" in "kube-system" namespace to be "Ready" ...
	E0318 13:29:57.034975 1145570 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-251198" hosting pod "coredns-6d4b75cb6d-hkgt5" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-251198" has status "Ready":"False"
	I0318 13:29:57.034980 1145570 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-251198" in "kube-system" namespace to be "Ready" ...
	I0318 13:29:57.038887 1145570 pod_ready.go:97] node "test-preload-251198" hosting pod "etcd-test-preload-251198" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-251198" has status "Ready":"False"
	I0318 13:29:57.038906 1145570 pod_ready.go:81] duration metric: took 3.918279ms for pod "etcd-test-preload-251198" in "kube-system" namespace to be "Ready" ...
	E0318 13:29:57.038919 1145570 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-251198" hosting pod "etcd-test-preload-251198" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-251198" has status "Ready":"False"
	I0318 13:29:57.038924 1145570 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-251198" in "kube-system" namespace to be "Ready" ...
	I0318 13:29:57.043196 1145570 pod_ready.go:97] node "test-preload-251198" hosting pod "kube-apiserver-test-preload-251198" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-251198" has status "Ready":"False"
	I0318 13:29:57.043222 1145570 pod_ready.go:81] duration metric: took 4.289589ms for pod "kube-apiserver-test-preload-251198" in "kube-system" namespace to be "Ready" ...
	E0318 13:29:57.043232 1145570 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-251198" hosting pod "kube-apiserver-test-preload-251198" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-251198" has status "Ready":"False"
	I0318 13:29:57.043240 1145570 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-251198" in "kube-system" namespace to be "Ready" ...
	I0318 13:29:57.047491 1145570 pod_ready.go:97] node "test-preload-251198" hosting pod "kube-controller-manager-test-preload-251198" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-251198" has status "Ready":"False"
	I0318 13:29:57.047509 1145570 pod_ready.go:81] duration metric: took 4.256323ms for pod "kube-controller-manager-test-preload-251198" in "kube-system" namespace to be "Ready" ...
	E0318 13:29:57.047516 1145570 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-251198" hosting pod "kube-controller-manager-test-preload-251198" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-251198" has status "Ready":"False"
	I0318 13:29:57.047522 1145570 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-tt4vj" in "kube-system" namespace to be "Ready" ...
	I0318 13:29:57.420676 1145570 pod_ready.go:97] node "test-preload-251198" hosting pod "kube-proxy-tt4vj" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-251198" has status "Ready":"False"
	I0318 13:29:57.420713 1145570 pod_ready.go:81] duration metric: took 373.182171ms for pod "kube-proxy-tt4vj" in "kube-system" namespace to be "Ready" ...
	E0318 13:29:57.420725 1145570 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-251198" hosting pod "kube-proxy-tt4vj" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-251198" has status "Ready":"False"
	I0318 13:29:57.420735 1145570 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-251198" in "kube-system" namespace to be "Ready" ...
	I0318 13:29:57.820579 1145570 pod_ready.go:97] node "test-preload-251198" hosting pod "kube-scheduler-test-preload-251198" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-251198" has status "Ready":"False"
	I0318 13:29:57.820612 1145570 pod_ready.go:81] duration metric: took 399.869613ms for pod "kube-scheduler-test-preload-251198" in "kube-system" namespace to be "Ready" ...
	E0318 13:29:57.820623 1145570 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-251198" hosting pod "kube-scheduler-test-preload-251198" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-251198" has status "Ready":"False"
	I0318 13:29:57.820631 1145570 pod_ready.go:38] duration metric: took 795.849986ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:29:57.820651 1145570 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 13:29:57.834435 1145570 ops.go:34] apiserver oom_adj: -16
	I0318 13:29:57.834461 1145570 kubeadm.go:591] duration metric: took 8.939550815s to restartPrimaryControlPlane
	I0318 13:29:57.834472 1145570 kubeadm.go:393] duration metric: took 8.988840705s to StartCluster
	I0318 13:29:57.834493 1145570 settings.go:142] acquiring lock: {Name:mk2d6b94ee5fa5f1dbbb15ba1d5560c3c0f78110 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:29:57.834574 1145570 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 13:29:57.835286 1145570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/kubeconfig: {Name:mk9c139f2702214315ee08dd7c5d02f739047458 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:29:57.835569 1145570 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.133 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 13:29:57.837426 1145570 out.go:177] * Verifying Kubernetes components...
	I0318 13:29:57.835624 1145570 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 13:29:57.835783 1145570 config.go:182] Loaded profile config "test-preload-251198": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0318 13:29:57.838865 1145570 addons.go:69] Setting storage-provisioner=true in profile "test-preload-251198"
	I0318 13:29:57.838891 1145570 addons.go:69] Setting default-storageclass=true in profile "test-preload-251198"
	I0318 13:29:57.838912 1145570 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:29:57.838930 1145570 addons.go:234] Setting addon storage-provisioner=true in "test-preload-251198"
	W0318 13:29:57.838939 1145570 addons.go:243] addon storage-provisioner should already be in state true
	I0318 13:29:57.838970 1145570 host.go:66] Checking if "test-preload-251198" exists ...
	I0318 13:29:57.838925 1145570 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-251198"
	I0318 13:29:57.839297 1145570 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:29:57.839332 1145570 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:29:57.839414 1145570 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:29:57.839449 1145570 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:29:57.854176 1145570 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36919
	I0318 13:29:57.854722 1145570 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:29:57.855335 1145570 main.go:141] libmachine: Using API Version  1
	I0318 13:29:57.855365 1145570 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:29:57.855781 1145570 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:29:57.855901 1145570 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35579
	I0318 13:29:57.856244 1145570 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:29:57.856420 1145570 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:29:57.856477 1145570 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:29:57.856684 1145570 main.go:141] libmachine: Using API Version  1
	I0318 13:29:57.856709 1145570 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:29:57.857059 1145570 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:29:57.857291 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetState
	I0318 13:29:57.859812 1145570 kapi.go:59] client config for test-preload-251198: &rest.Config{Host:"https://192.168.39.133:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/test-preload-251198/client.crt", KeyFile:"/home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/test-preload-251198/client.key", CAFile:"/home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]
uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c57de0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0318 13:29:57.860149 1145570 addons.go:234] Setting addon default-storageclass=true in "test-preload-251198"
	W0318 13:29:57.860173 1145570 addons.go:243] addon default-storageclass should already be in state true
	I0318 13:29:57.860211 1145570 host.go:66] Checking if "test-preload-251198" exists ...
	I0318 13:29:57.860614 1145570 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:29:57.860660 1145570 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:29:57.871930 1145570 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37669
	I0318 13:29:57.872421 1145570 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:29:57.872884 1145570 main.go:141] libmachine: Using API Version  1
	I0318 13:29:57.872899 1145570 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:29:57.873239 1145570 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:29:57.873524 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetState
	I0318 13:29:57.875322 1145570 main.go:141] libmachine: (test-preload-251198) Calling .DriverName
	I0318 13:29:57.875530 1145570 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38423
	I0318 13:29:57.877384 1145570 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:29:57.875886 1145570 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:29:57.878889 1145570 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 13:29:57.878911 1145570 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 13:29:57.878930 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetSSHHostname
	I0318 13:29:57.879185 1145570 main.go:141] libmachine: Using API Version  1
	I0318 13:29:57.879210 1145570 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:29:57.879614 1145570 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:29:57.880303 1145570 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:29:57.880375 1145570 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:29:57.882021 1145570 main.go:141] libmachine: (test-preload-251198) DBG | domain test-preload-251198 has defined MAC address 52:54:00:9e:74:1f in network mk-test-preload-251198
	I0318 13:29:57.882417 1145570 main.go:141] libmachine: (test-preload-251198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:74:1f", ip: ""} in network mk-test-preload-251198: {Iface:virbr1 ExpiryTime:2024-03-18 14:26:15 +0000 UTC Type:0 Mac:52:54:00:9e:74:1f Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:test-preload-251198 Clientid:01:52:54:00:9e:74:1f}
	I0318 13:29:57.882446 1145570 main.go:141] libmachine: (test-preload-251198) DBG | domain test-preload-251198 has defined IP address 192.168.39.133 and MAC address 52:54:00:9e:74:1f in network mk-test-preload-251198
	I0318 13:29:57.882669 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetSSHPort
	I0318 13:29:57.882855 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetSSHKeyPath
	I0318 13:29:57.883083 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetSSHUsername
	I0318 13:29:57.883201 1145570 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/test-preload-251198/id_rsa Username:docker}
	I0318 13:29:57.895331 1145570 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44765
	I0318 13:29:57.895821 1145570 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:29:57.896239 1145570 main.go:141] libmachine: Using API Version  1
	I0318 13:29:57.896258 1145570 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:29:57.896638 1145570 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:29:57.896838 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetState
	I0318 13:29:57.898321 1145570 main.go:141] libmachine: (test-preload-251198) Calling .DriverName
	I0318 13:29:57.898609 1145570 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 13:29:57.898626 1145570 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 13:29:57.898643 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetSSHHostname
	I0318 13:29:57.901033 1145570 main.go:141] libmachine: (test-preload-251198) DBG | domain test-preload-251198 has defined MAC address 52:54:00:9e:74:1f in network mk-test-preload-251198
	I0318 13:29:57.901541 1145570 main.go:141] libmachine: (test-preload-251198) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9e:74:1f", ip: ""} in network mk-test-preload-251198: {Iface:virbr1 ExpiryTime:2024-03-18 14:26:15 +0000 UTC Type:0 Mac:52:54:00:9e:74:1f Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:test-preload-251198 Clientid:01:52:54:00:9e:74:1f}
	I0318 13:29:57.901571 1145570 main.go:141] libmachine: (test-preload-251198) DBG | domain test-preload-251198 has defined IP address 192.168.39.133 and MAC address 52:54:00:9e:74:1f in network mk-test-preload-251198
	I0318 13:29:57.901719 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetSSHPort
	I0318 13:29:57.901920 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetSSHKeyPath
	I0318 13:29:57.902075 1145570 main.go:141] libmachine: (test-preload-251198) Calling .GetSSHUsername
	I0318 13:29:57.902260 1145570 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/test-preload-251198/id_rsa Username:docker}
	I0318 13:29:58.029271 1145570 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:29:58.050163 1145570 node_ready.go:35] waiting up to 6m0s for node "test-preload-251198" to be "Ready" ...
	I0318 13:29:58.106654 1145570 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 13:29:58.199441 1145570 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 13:29:58.929250 1145570 main.go:141] libmachine: Making call to close driver server
	I0318 13:29:58.929278 1145570 main.go:141] libmachine: (test-preload-251198) Calling .Close
	I0318 13:29:58.929597 1145570 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:29:58.929619 1145570 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:29:58.929634 1145570 main.go:141] libmachine: Making call to close driver server
	I0318 13:29:58.929626 1145570 main.go:141] libmachine: (test-preload-251198) DBG | Closing plugin on server side
	I0318 13:29:58.929642 1145570 main.go:141] libmachine: (test-preload-251198) Calling .Close
	I0318 13:29:58.929936 1145570 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:29:58.929951 1145570 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:29:58.938505 1145570 main.go:141] libmachine: Making call to close driver server
	I0318 13:29:58.938525 1145570 main.go:141] libmachine: (test-preload-251198) Calling .Close
	I0318 13:29:58.938769 1145570 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:29:58.938790 1145570 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:29:58.971546 1145570 main.go:141] libmachine: Making call to close driver server
	I0318 13:29:58.971575 1145570 main.go:141] libmachine: (test-preload-251198) Calling .Close
	I0318 13:29:58.971888 1145570 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:29:58.971921 1145570 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:29:58.971930 1145570 main.go:141] libmachine: Making call to close driver server
	I0318 13:29:58.971937 1145570 main.go:141] libmachine: (test-preload-251198) Calling .Close
	I0318 13:29:58.971933 1145570 main.go:141] libmachine: (test-preload-251198) DBG | Closing plugin on server side
	I0318 13:29:58.972198 1145570 main.go:141] libmachine: (test-preload-251198) DBG | Closing plugin on server side
	I0318 13:29:58.972172 1145570 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:29:58.972242 1145570 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:29:58.974391 1145570 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0318 13:29:58.975874 1145570 addons.go:505] duration metric: took 1.140259614s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0318 13:30:00.054964 1145570 node_ready.go:53] node "test-preload-251198" has status "Ready":"False"
	I0318 13:30:02.554102 1145570 node_ready.go:53] node "test-preload-251198" has status "Ready":"False"
	I0318 13:30:04.556977 1145570 node_ready.go:53] node "test-preload-251198" has status "Ready":"False"
	I0318 13:30:05.554579 1145570 node_ready.go:49] node "test-preload-251198" has status "Ready":"True"
	I0318 13:30:05.554605 1145570 node_ready.go:38] duration metric: took 7.504410901s for node "test-preload-251198" to be "Ready" ...
	I0318 13:30:05.554614 1145570 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:30:05.559188 1145570 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-hkgt5" in "kube-system" namespace to be "Ready" ...
	I0318 13:30:05.563995 1145570 pod_ready.go:92] pod "coredns-6d4b75cb6d-hkgt5" in "kube-system" namespace has status "Ready":"True"
	I0318 13:30:05.564023 1145570 pod_ready.go:81] duration metric: took 4.81134ms for pod "coredns-6d4b75cb6d-hkgt5" in "kube-system" namespace to be "Ready" ...
	I0318 13:30:05.564034 1145570 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-251198" in "kube-system" namespace to be "Ready" ...
	I0318 13:30:05.569520 1145570 pod_ready.go:92] pod "etcd-test-preload-251198" in "kube-system" namespace has status "Ready":"True"
	I0318 13:30:05.569544 1145570 pod_ready.go:81] duration metric: took 5.501945ms for pod "etcd-test-preload-251198" in "kube-system" namespace to be "Ready" ...
	I0318 13:30:05.569555 1145570 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-251198" in "kube-system" namespace to be "Ready" ...
	I0318 13:30:05.573993 1145570 pod_ready.go:92] pod "kube-apiserver-test-preload-251198" in "kube-system" namespace has status "Ready":"True"
	I0318 13:30:05.574013 1145570 pod_ready.go:81] duration metric: took 4.450549ms for pod "kube-apiserver-test-preload-251198" in "kube-system" namespace to be "Ready" ...
	I0318 13:30:05.574022 1145570 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-251198" in "kube-system" namespace to be "Ready" ...
	I0318 13:30:05.579241 1145570 pod_ready.go:92] pod "kube-controller-manager-test-preload-251198" in "kube-system" namespace has status "Ready":"True"
	I0318 13:30:05.579262 1145570 pod_ready.go:81] duration metric: took 5.233683ms for pod "kube-controller-manager-test-preload-251198" in "kube-system" namespace to be "Ready" ...
	I0318 13:30:05.579273 1145570 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tt4vj" in "kube-system" namespace to be "Ready" ...
	I0318 13:30:05.955332 1145570 pod_ready.go:92] pod "kube-proxy-tt4vj" in "kube-system" namespace has status "Ready":"True"
	I0318 13:30:05.955371 1145570 pod_ready.go:81] duration metric: took 376.082204ms for pod "kube-proxy-tt4vj" in "kube-system" namespace to be "Ready" ...
	I0318 13:30:05.955384 1145570 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-251198" in "kube-system" namespace to be "Ready" ...
	I0318 13:30:07.961843 1145570 pod_ready.go:102] pod "kube-scheduler-test-preload-251198" in "kube-system" namespace has status "Ready":"False"
	I0318 13:30:08.962105 1145570 pod_ready.go:92] pod "kube-scheduler-test-preload-251198" in "kube-system" namespace has status "Ready":"True"
	I0318 13:30:08.962128 1145570 pod_ready.go:81] duration metric: took 3.006735674s for pod "kube-scheduler-test-preload-251198" in "kube-system" namespace to be "Ready" ...
	I0318 13:30:08.962139 1145570 pod_ready.go:38] duration metric: took 3.407515758s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:30:08.962152 1145570 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:30:08.962215 1145570 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:30:08.979073 1145570 api_server.go:72] duration metric: took 11.143473713s to wait for apiserver process to appear ...
	I0318 13:30:08.979098 1145570 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:30:08.979118 1145570 api_server.go:253] Checking apiserver healthz at https://192.168.39.133:8443/healthz ...
	I0318 13:30:08.983785 1145570 api_server.go:279] https://192.168.39.133:8443/healthz returned 200:
	ok
	I0318 13:30:08.984811 1145570 api_server.go:141] control plane version: v1.24.4
	I0318 13:30:08.984835 1145570 api_server.go:131] duration metric: took 5.729452ms to wait for apiserver health ...
	I0318 13:30:08.984843 1145570 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 13:30:09.001954 1145570 system_pods.go:59] 7 kube-system pods found
	I0318 13:30:09.001992 1145570 system_pods.go:61] "coredns-6d4b75cb6d-hkgt5" [64fb889b-e8fa-4269-9416-cb7520d79b8a] Running
	I0318 13:30:09.002005 1145570 system_pods.go:61] "etcd-test-preload-251198" [95b06b02-58a6-490a-81e3-0aef7795c203] Running
	I0318 13:30:09.002011 1145570 system_pods.go:61] "kube-apiserver-test-preload-251198" [e8996426-ff09-4744-8160-daf3e1c9b604] Running
	I0318 13:30:09.002017 1145570 system_pods.go:61] "kube-controller-manager-test-preload-251198" [5dc8eb35-3704-42da-b606-533e4620e5ab] Running
	I0318 13:30:09.002021 1145570 system_pods.go:61] "kube-proxy-tt4vj" [e6c53599-2148-4f78-9504-7639078fa8bf] Running
	I0318 13:30:09.002029 1145570 system_pods.go:61] "kube-scheduler-test-preload-251198" [baae5a2c-cd1b-4a4a-87fd-2b5a74c96eb3] Running
	I0318 13:30:09.002037 1145570 system_pods.go:61] "storage-provisioner" [c5a3274c-204b-4051-9d0e-57db1f3a9c6f] Running
	I0318 13:30:09.002045 1145570 system_pods.go:74] duration metric: took 17.192754ms to wait for pod list to return data ...
	I0318 13:30:09.002060 1145570 default_sa.go:34] waiting for default service account to be created ...
	I0318 13:30:09.155069 1145570 default_sa.go:45] found service account: "default"
	I0318 13:30:09.155101 1145570 default_sa.go:55] duration metric: took 153.031106ms for default service account to be created ...
	I0318 13:30:09.155111 1145570 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 13:30:09.357096 1145570 system_pods.go:86] 7 kube-system pods found
	I0318 13:30:09.357128 1145570 system_pods.go:89] "coredns-6d4b75cb6d-hkgt5" [64fb889b-e8fa-4269-9416-cb7520d79b8a] Running
	I0318 13:30:09.357134 1145570 system_pods.go:89] "etcd-test-preload-251198" [95b06b02-58a6-490a-81e3-0aef7795c203] Running
	I0318 13:30:09.357140 1145570 system_pods.go:89] "kube-apiserver-test-preload-251198" [e8996426-ff09-4744-8160-daf3e1c9b604] Running
	I0318 13:30:09.357146 1145570 system_pods.go:89] "kube-controller-manager-test-preload-251198" [5dc8eb35-3704-42da-b606-533e4620e5ab] Running
	I0318 13:30:09.357152 1145570 system_pods.go:89] "kube-proxy-tt4vj" [e6c53599-2148-4f78-9504-7639078fa8bf] Running
	I0318 13:30:09.357157 1145570 system_pods.go:89] "kube-scheduler-test-preload-251198" [baae5a2c-cd1b-4a4a-87fd-2b5a74c96eb3] Running
	I0318 13:30:09.357162 1145570 system_pods.go:89] "storage-provisioner" [c5a3274c-204b-4051-9d0e-57db1f3a9c6f] Running
	I0318 13:30:09.357179 1145570 system_pods.go:126] duration metric: took 202.054081ms to wait for k8s-apps to be running ...
	I0318 13:30:09.357189 1145570 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 13:30:09.357241 1145570 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:30:09.372669 1145570 system_svc.go:56] duration metric: took 15.473194ms WaitForService to wait for kubelet
	I0318 13:30:09.372696 1145570 kubeadm.go:576] duration metric: took 11.537099873s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:30:09.372715 1145570 node_conditions.go:102] verifying NodePressure condition ...
	I0318 13:30:09.555588 1145570 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:30:09.555616 1145570 node_conditions.go:123] node cpu capacity is 2
	I0318 13:30:09.555629 1145570 node_conditions.go:105] duration metric: took 182.9088ms to run NodePressure ...
	I0318 13:30:09.555643 1145570 start.go:240] waiting for startup goroutines ...
	I0318 13:30:09.555651 1145570 start.go:245] waiting for cluster config update ...
	I0318 13:30:09.555664 1145570 start.go:254] writing updated cluster config ...
	I0318 13:30:09.555951 1145570 ssh_runner.go:195] Run: rm -f paused
	I0318 13:30:09.604426 1145570 start.go:600] kubectl: 1.29.3, cluster: 1.24.4 (minor skew: 5)
	I0318 13:30:09.606610 1145570 out.go:177] 
	W0318 13:30:09.608207 1145570 out.go:239] ! /usr/local/bin/kubectl is version 1.29.3, which may have incompatibilities with Kubernetes 1.24.4.
	I0318 13:30:09.609500 1145570 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0318 13:30:09.610692 1145570 out.go:177] * Done! kubectl is now configured to use "test-preload-251198" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Mar 18 13:30:10 test-preload-251198 crio[676]: time="2024-03-18 13:30:10.558859886Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710768610558834129,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7172d519-cad4-4045-92e0-15c2770c9909 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:30:10 test-preload-251198 crio[676]: time="2024-03-18 13:30:10.559514489Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ac78aeb3-404e-498b-96af-70f557475d6e name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:30:10 test-preload-251198 crio[676]: time="2024-03-18 13:30:10.559637144Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ac78aeb3-404e-498b-96af-70f557475d6e name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:30:10 test-preload-251198 crio[676]: time="2024-03-18 13:30:10.559820216Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f3f6e92ce3a4fe3c8bc8875ca553e91eed616851679246b7c15b781fc8afe51b,PodSandboxId:8d3b814577e438b820d665e70108dc957e4e9ffb8ef7443482113dd145f2d733,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1710768603597723467,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-hkgt5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64fb889b-e8fa-4269-9416-cb7520d79b8a,},Annotations:map[string]string{io.kubernetes.container.hash: a2b787c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaa7bd3af1a2736cec9c1f3a792eecac94fe697b8cf0e42a64107fcf45b291e3,PodSandboxId:6d9ab80c53919afd55158572553cd2670ca5a9e25f2b7fa85f479430ea5acc28,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710768596527806702,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: c5a3274c-204b-4051-9d0e-57db1f3a9c6f,},Annotations:map[string]string{io.kubernetes.container.hash: 166d2f5b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab6d7e0e9408d639b84301cc4220c8b709fe9b6178714317c0005c2cad7395d3,PodSandboxId:50cd1ce612a395e50e3972f5b5875f40e0dfbd10df28ee39fa1ef428c2c84395,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1710768596204074706,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tt4vj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6
c53599-2148-4f78-9504-7639078fa8bf,},Annotations:map[string]string{io.kubernetes.container.hash: f1f4128e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90fd7f6088cb88bd1e0c7be93e47b9440aeb79591eb0e9a1dd2a5289cd46e0a4,PodSandboxId:abf5381a46aecaac54d5564ca153237c593563d37cae0407bb66c968567f7118,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1710768591023869254,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-251198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01b85e25f996f5542f2d76786e843880,},Anno
tations:map[string]string{io.kubernetes.container.hash: 5920a651,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cd35ab384b1dae50d7c292e887fcd4b238453affffe7d3f798f3a40880504b4,PodSandboxId:f8ae3f1754c16a4a3f3a1293de0d7186e77de1ee414a511e4187ebccea41fc18,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1710768591020966448,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-251198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fc72a5c08a402110c9a35f398def8e3,},Annotations:map
[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a18efc0d9e8805a58546accabe98c68beb9d9f58ce49879407652c74dac7538,PodSandboxId:e4b81716da27f665bcad718170dd01e618d75deaadb002ec605e64dbef002c2f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1710768590940287060,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-251198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57e935f449a70ecaa0df7b4d02424775,}
,Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0452d9940fcf165570934ab6fa648f848fbd0d953b4181d2fa6ada7f7a751aa,PodSandboxId:4094f77d2a901c1edc95b497ef3767dd1244a40bfaf49dfb5887fbcea30aee2d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1710768590917915689,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-251198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dde4d5acbaa0ad43821217a7873762c,},Annotation
s:map[string]string{io.kubernetes.container.hash: 59250e13,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ac78aeb3-404e-498b-96af-70f557475d6e name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:30:10 test-preload-251198 crio[676]: time="2024-03-18 13:30:10.602489355Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=03ae34b6-51b0-455b-9849-5945957d9b33 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:30:10 test-preload-251198 crio[676]: time="2024-03-18 13:30:10.602653853Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=03ae34b6-51b0-455b-9849-5945957d9b33 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:30:10 test-preload-251198 crio[676]: time="2024-03-18 13:30:10.604159482Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=37ce2422-196c-4927-81e5-68b9feae6242 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:30:10 test-preload-251198 crio[676]: time="2024-03-18 13:30:10.604710911Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710768610604688144,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=37ce2422-196c-4927-81e5-68b9feae6242 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:30:10 test-preload-251198 crio[676]: time="2024-03-18 13:30:10.605532220Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0a6097c4-0816-42e4-b9a6-452a68971915 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:30:10 test-preload-251198 crio[676]: time="2024-03-18 13:30:10.605657293Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0a6097c4-0816-42e4-b9a6-452a68971915 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:30:10 test-preload-251198 crio[676]: time="2024-03-18 13:30:10.605813276Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f3f6e92ce3a4fe3c8bc8875ca553e91eed616851679246b7c15b781fc8afe51b,PodSandboxId:8d3b814577e438b820d665e70108dc957e4e9ffb8ef7443482113dd145f2d733,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1710768603597723467,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-hkgt5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64fb889b-e8fa-4269-9416-cb7520d79b8a,},Annotations:map[string]string{io.kubernetes.container.hash: a2b787c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaa7bd3af1a2736cec9c1f3a792eecac94fe697b8cf0e42a64107fcf45b291e3,PodSandboxId:6d9ab80c53919afd55158572553cd2670ca5a9e25f2b7fa85f479430ea5acc28,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710768596527806702,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: c5a3274c-204b-4051-9d0e-57db1f3a9c6f,},Annotations:map[string]string{io.kubernetes.container.hash: 166d2f5b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab6d7e0e9408d639b84301cc4220c8b709fe9b6178714317c0005c2cad7395d3,PodSandboxId:50cd1ce612a395e50e3972f5b5875f40e0dfbd10df28ee39fa1ef428c2c84395,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1710768596204074706,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tt4vj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6
c53599-2148-4f78-9504-7639078fa8bf,},Annotations:map[string]string{io.kubernetes.container.hash: f1f4128e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90fd7f6088cb88bd1e0c7be93e47b9440aeb79591eb0e9a1dd2a5289cd46e0a4,PodSandboxId:abf5381a46aecaac54d5564ca153237c593563d37cae0407bb66c968567f7118,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1710768591023869254,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-251198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01b85e25f996f5542f2d76786e843880,},Anno
tations:map[string]string{io.kubernetes.container.hash: 5920a651,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cd35ab384b1dae50d7c292e887fcd4b238453affffe7d3f798f3a40880504b4,PodSandboxId:f8ae3f1754c16a4a3f3a1293de0d7186e77de1ee414a511e4187ebccea41fc18,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1710768591020966448,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-251198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fc72a5c08a402110c9a35f398def8e3,},Annotations:map
[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a18efc0d9e8805a58546accabe98c68beb9d9f58ce49879407652c74dac7538,PodSandboxId:e4b81716da27f665bcad718170dd01e618d75deaadb002ec605e64dbef002c2f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1710768590940287060,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-251198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57e935f449a70ecaa0df7b4d02424775,}
,Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0452d9940fcf165570934ab6fa648f848fbd0d953b4181d2fa6ada7f7a751aa,PodSandboxId:4094f77d2a901c1edc95b497ef3767dd1244a40bfaf49dfb5887fbcea30aee2d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1710768590917915689,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-251198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dde4d5acbaa0ad43821217a7873762c,},Annotation
s:map[string]string{io.kubernetes.container.hash: 59250e13,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0a6097c4-0816-42e4-b9a6-452a68971915 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:30:10 test-preload-251198 crio[676]: time="2024-03-18 13:30:10.645275622Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=90bcf0aa-ed4a-49b0-b161-72e3b5cb26ed name=/runtime.v1.RuntimeService/Version
	Mar 18 13:30:10 test-preload-251198 crio[676]: time="2024-03-18 13:30:10.645679088Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=90bcf0aa-ed4a-49b0-b161-72e3b5cb26ed name=/runtime.v1.RuntimeService/Version
	Mar 18 13:30:10 test-preload-251198 crio[676]: time="2024-03-18 13:30:10.647903723Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0feb087d-861d-411b-9c71-7bbba4490a47 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:30:10 test-preload-251198 crio[676]: time="2024-03-18 13:30:10.648349910Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710768610648329648,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0feb087d-861d-411b-9c71-7bbba4490a47 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:30:10 test-preload-251198 crio[676]: time="2024-03-18 13:30:10.649349625Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=61a896ed-d5b0-4684-b37f-605a3e87f5c2 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:30:10 test-preload-251198 crio[676]: time="2024-03-18 13:30:10.649399212Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=61a896ed-d5b0-4684-b37f-605a3e87f5c2 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:30:10 test-preload-251198 crio[676]: time="2024-03-18 13:30:10.649897932Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f3f6e92ce3a4fe3c8bc8875ca553e91eed616851679246b7c15b781fc8afe51b,PodSandboxId:8d3b814577e438b820d665e70108dc957e4e9ffb8ef7443482113dd145f2d733,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1710768603597723467,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-hkgt5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64fb889b-e8fa-4269-9416-cb7520d79b8a,},Annotations:map[string]string{io.kubernetes.container.hash: a2b787c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaa7bd3af1a2736cec9c1f3a792eecac94fe697b8cf0e42a64107fcf45b291e3,PodSandboxId:6d9ab80c53919afd55158572553cd2670ca5a9e25f2b7fa85f479430ea5acc28,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710768596527806702,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: c5a3274c-204b-4051-9d0e-57db1f3a9c6f,},Annotations:map[string]string{io.kubernetes.container.hash: 166d2f5b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab6d7e0e9408d639b84301cc4220c8b709fe9b6178714317c0005c2cad7395d3,PodSandboxId:50cd1ce612a395e50e3972f5b5875f40e0dfbd10df28ee39fa1ef428c2c84395,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1710768596204074706,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tt4vj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6
c53599-2148-4f78-9504-7639078fa8bf,},Annotations:map[string]string{io.kubernetes.container.hash: f1f4128e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90fd7f6088cb88bd1e0c7be93e47b9440aeb79591eb0e9a1dd2a5289cd46e0a4,PodSandboxId:abf5381a46aecaac54d5564ca153237c593563d37cae0407bb66c968567f7118,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1710768591023869254,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-251198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01b85e25f996f5542f2d76786e843880,},Anno
tations:map[string]string{io.kubernetes.container.hash: 5920a651,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cd35ab384b1dae50d7c292e887fcd4b238453affffe7d3f798f3a40880504b4,PodSandboxId:f8ae3f1754c16a4a3f3a1293de0d7186e77de1ee414a511e4187ebccea41fc18,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1710768591020966448,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-251198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fc72a5c08a402110c9a35f398def8e3,},Annotations:map
[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a18efc0d9e8805a58546accabe98c68beb9d9f58ce49879407652c74dac7538,PodSandboxId:e4b81716da27f665bcad718170dd01e618d75deaadb002ec605e64dbef002c2f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1710768590940287060,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-251198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57e935f449a70ecaa0df7b4d02424775,}
,Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0452d9940fcf165570934ab6fa648f848fbd0d953b4181d2fa6ada7f7a751aa,PodSandboxId:4094f77d2a901c1edc95b497ef3767dd1244a40bfaf49dfb5887fbcea30aee2d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1710768590917915689,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-251198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dde4d5acbaa0ad43821217a7873762c,},Annotation
s:map[string]string{io.kubernetes.container.hash: 59250e13,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=61a896ed-d5b0-4684-b37f-605a3e87f5c2 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:30:10 test-preload-251198 crio[676]: time="2024-03-18 13:30:10.688644216Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=cd641518-223f-4d9c-8a93-64b8882813ca name=/runtime.v1.RuntimeService/Version
	Mar 18 13:30:10 test-preload-251198 crio[676]: time="2024-03-18 13:30:10.688716340Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=cd641518-223f-4d9c-8a93-64b8882813ca name=/runtime.v1.RuntimeService/Version
	Mar 18 13:30:10 test-preload-251198 crio[676]: time="2024-03-18 13:30:10.690125179Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9ce408cc-87bd-411c-9663-97b00aebd093 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:30:10 test-preload-251198 crio[676]: time="2024-03-18 13:30:10.690702634Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710768610690680527,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9ce408cc-87bd-411c-9663-97b00aebd093 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:30:10 test-preload-251198 crio[676]: time="2024-03-18 13:30:10.691763130Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0ca0ee8d-7442-4f15-858c-381a4c640b42 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:30:10 test-preload-251198 crio[676]: time="2024-03-18 13:30:10.691847867Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0ca0ee8d-7442-4f15-858c-381a4c640b42 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:30:10 test-preload-251198 crio[676]: time="2024-03-18 13:30:10.691999726Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:f3f6e92ce3a4fe3c8bc8875ca553e91eed616851679246b7c15b781fc8afe51b,PodSandboxId:8d3b814577e438b820d665e70108dc957e4e9ffb8ef7443482113dd145f2d733,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1710768603597723467,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-hkgt5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64fb889b-e8fa-4269-9416-cb7520d79b8a,},Annotations:map[string]string{io.kubernetes.container.hash: a2b787c6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pr
otocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aaa7bd3af1a2736cec9c1f3a792eecac94fe697b8cf0e42a64107fcf45b291e3,PodSandboxId:6d9ab80c53919afd55158572553cd2670ca5a9e25f2b7fa85f479430ea5acc28,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710768596527806702,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: c5a3274c-204b-4051-9d0e-57db1f3a9c6f,},Annotations:map[string]string{io.kubernetes.container.hash: 166d2f5b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ab6d7e0e9408d639b84301cc4220c8b709fe9b6178714317c0005c2cad7395d3,PodSandboxId:50cd1ce612a395e50e3972f5b5875f40e0dfbd10df28ee39fa1ef428c2c84395,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1710768596204074706,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-tt4vj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6
c53599-2148-4f78-9504-7639078fa8bf,},Annotations:map[string]string{io.kubernetes.container.hash: f1f4128e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90fd7f6088cb88bd1e0c7be93e47b9440aeb79591eb0e9a1dd2a5289cd46e0a4,PodSandboxId:abf5381a46aecaac54d5564ca153237c593563d37cae0407bb66c968567f7118,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1710768591023869254,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-251198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 01b85e25f996f5542f2d76786e843880,},Anno
tations:map[string]string{io.kubernetes.container.hash: 5920a651,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5cd35ab384b1dae50d7c292e887fcd4b238453affffe7d3f798f3a40880504b4,PodSandboxId:f8ae3f1754c16a4a3f3a1293de0d7186e77de1ee414a511e4187ebccea41fc18,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1710768591020966448,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-251198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6fc72a5c08a402110c9a35f398def8e3,},Annotations:map
[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a18efc0d9e8805a58546accabe98c68beb9d9f58ce49879407652c74dac7538,PodSandboxId:e4b81716da27f665bcad718170dd01e618d75deaadb002ec605e64dbef002c2f,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1710768590940287060,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-251198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57e935f449a70ecaa0df7b4d02424775,}
,Annotations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0452d9940fcf165570934ab6fa648f848fbd0d953b4181d2fa6ada7f7a751aa,PodSandboxId:4094f77d2a901c1edc95b497ef3767dd1244a40bfaf49dfb5887fbcea30aee2d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1710768590917915689,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-251198,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7dde4d5acbaa0ad43821217a7873762c,},Annotation
s:map[string]string{io.kubernetes.container.hash: 59250e13,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0ca0ee8d-7442-4f15-858c-381a4c640b42 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f3f6e92ce3a4f       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   7 seconds ago       Running             coredns                   1                   8d3b814577e43       coredns-6d4b75cb6d-hkgt5
	aaa7bd3af1a27       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Running             storage-provisioner       1                   6d9ab80c53919       storage-provisioner
	ab6d7e0e9408d       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   14 seconds ago      Running             kube-proxy                1                   50cd1ce612a39       kube-proxy-tt4vj
	90fd7f6088cb8       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   19 seconds ago      Running             etcd                      1                   abf5381a46aec       etcd-test-preload-251198
	5cd35ab384b1d       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   19 seconds ago      Running             kube-scheduler            1                   f8ae3f1754c16       kube-scheduler-test-preload-251198
	0a18efc0d9e88       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   19 seconds ago      Running             kube-controller-manager   1                   e4b81716da27f       kube-controller-manager-test-preload-251198
	a0452d9940fcf       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   19 seconds ago      Running             kube-apiserver            1                   4094f77d2a901       kube-apiserver-test-preload-251198
	
	
	==> coredns [f3f6e92ce3a4fe3c8bc8875ca553e91eed616851679246b7c15b781fc8afe51b] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:33288 - 53774 "HINFO IN 5471314079151400256.1822484454926773490. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.016167019s
	
	
	==> describe nodes <==
	Name:               test-preload-251198
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-251198
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	                    minikube.k8s.io/name=test-preload-251198
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T13_28_34_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 13:28:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-251198
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 13:30:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 13:30:05 +0000   Mon, 18 Mar 2024 13:28:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 13:30:05 +0000   Mon, 18 Mar 2024 13:28:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 13:30:05 +0000   Mon, 18 Mar 2024 13:28:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 13:30:05 +0000   Mon, 18 Mar 2024 13:30:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.133
	  Hostname:    test-preload-251198
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 ac3b6f6a995a4f9faaff9aad48c51b14
	  System UUID:                ac3b6f6a-995a-4f9f-aaff-9aad48c51b14
	  Boot ID:                    055cbcc4-09bc-40f8-9e4e-42e3489798a9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-hkgt5                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     84s
	  kube-system                 etcd-test-preload-251198                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         98s
	  kube-system                 kube-apiserver-test-preload-251198             250m (12%)    0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-controller-manager-test-preload-251198    200m (10%)    0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-proxy-tt4vj                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-scheduler-test-preload-251198             100m (5%)     0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 82s                  kube-proxy       
	  Normal  Starting                 14s                  kube-proxy       
	  Normal  Starting                 105s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  104s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  104s (x5 over 105s)  kubelet          Node test-preload-251198 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     104s (x4 over 105s)  kubelet          Node test-preload-251198 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    104s (x5 over 105s)  kubelet          Node test-preload-251198 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  96s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 96s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  96s                  kubelet          Node test-preload-251198 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    96s                  kubelet          Node test-preload-251198 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     96s                  kubelet          Node test-preload-251198 status is now: NodeHasSufficientPID
	  Normal  NodeReady                86s                  kubelet          Node test-preload-251198 status is now: NodeReady
	  Normal  RegisteredNode           85s                  node-controller  Node test-preload-251198 event: Registered Node test-preload-251198 in Controller
	  Normal  Starting                 20s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20s (x8 over 20s)    kubelet          Node test-preload-251198 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20s (x8 over 20s)    kubelet          Node test-preload-251198 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20s (x7 over 20s)    kubelet          Node test-preload-251198 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3s                   node-controller  Node test-preload-251198 event: Registered Node test-preload-251198 in Controller
	
	
	==> dmesg <==
	[Mar18 13:29] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.054244] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043670] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.597811] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.453956] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.655607] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.290202] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.057055] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065432] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.172856] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.138397] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.256447] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[ +13.291928] systemd-fstab-generator[932]: Ignoring "noauto" option for root device
	[  +0.060977] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.690790] systemd-fstab-generator[1061]: Ignoring "noauto" option for root device
	[  +6.201416] kauditd_printk_skb: 105 callbacks suppressed
	[  +1.761641] systemd-fstab-generator[1687]: Ignoring "noauto" option for root device
	[Mar18 13:30] kauditd_printk_skb: 53 callbacks suppressed
	
	
	==> etcd [90fd7f6088cb88bd1e0c7be93e47b9440aeb79591eb0e9a1dd2a5289cd46e0a4] <==
	{"level":"info","ts":"2024-03-18T13:29:51.350Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"8c99678d2056b24b","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-03-18T13:29:51.351Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-03-18T13:29:51.353Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8c99678d2056b24b switched to configuration voters=(10131242692577243723)"}
	{"level":"info","ts":"2024-03-18T13:29:51.353Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"46bf448cb0ae087f","local-member-id":"8c99678d2056b24b","added-peer-id":"8c99678d2056b24b","added-peer-peer-urls":["https://192.168.39.133:2380"]}
	{"level":"info","ts":"2024-03-18T13:29:51.354Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"46bf448cb0ae087f","local-member-id":"8c99678d2056b24b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T13:29:51.360Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T13:29:51.367Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-18T13:29:51.369Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8c99678d2056b24b","initial-advertise-peer-urls":["https://192.168.39.133:2380"],"listen-peer-urls":["https://192.168.39.133:2380"],"advertise-client-urls":["https://192.168.39.133:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.133:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-18T13:29:51.369Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-18T13:29:51.370Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.133:2380"}
	{"level":"info","ts":"2024-03-18T13:29:51.370Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.133:2380"}
	{"level":"info","ts":"2024-03-18T13:29:52.334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8c99678d2056b24b is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-18T13:29:52.334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8c99678d2056b24b became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-18T13:29:52.334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8c99678d2056b24b received MsgPreVoteResp from 8c99678d2056b24b at term 2"}
	{"level":"info","ts":"2024-03-18T13:29:52.334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8c99678d2056b24b became candidate at term 3"}
	{"level":"info","ts":"2024-03-18T13:29:52.334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8c99678d2056b24b received MsgVoteResp from 8c99678d2056b24b at term 3"}
	{"level":"info","ts":"2024-03-18T13:29:52.334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8c99678d2056b24b became leader at term 3"}
	{"level":"info","ts":"2024-03-18T13:29:52.334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8c99678d2056b24b elected leader 8c99678d2056b24b at term 3"}
	{"level":"info","ts":"2024-03-18T13:29:52.336Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8c99678d2056b24b","local-member-attributes":"{Name:test-preload-251198 ClientURLs:[https://192.168.39.133:2379]}","request-path":"/0/members/8c99678d2056b24b/attributes","cluster-id":"46bf448cb0ae087f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-18T13:29:52.336Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T13:29:52.336Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T13:29:52.337Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-18T13:29:52.343Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.133:2379"}
	{"level":"info","ts":"2024-03-18T13:29:52.346Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-18T13:29:52.346Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 13:30:11 up 0 min,  0 users,  load average: 1.09, 0.28, 0.09
	Linux test-preload-251198 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a0452d9940fcf165570934ab6fa648f848fbd0d953b4181d2fa6ada7f7a751aa] <==
	I0318 13:29:54.860881       1 establishing_controller.go:76] Starting EstablishingController
	I0318 13:29:54.860992       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0318 13:29:54.861040       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0318 13:29:54.861058       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0318 13:29:54.865160       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 13:29:54.884689       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0318 13:29:54.945361       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0318 13:29:54.950178       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0318 13:29:54.955737       1 shared_informer.go:262] Caches are synced for crd-autoregister
	E0318 13:29:54.960061       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0318 13:29:54.983845       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0318 13:29:55.017235       1 cache.go:39] Caches are synced for autoregister controller
	I0318 13:29:55.027235       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0318 13:29:55.027318       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0318 13:29:55.029202       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0318 13:29:55.483014       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0318 13:29:55.832626       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0318 13:29:56.524793       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0318 13:29:56.549969       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0318 13:29:56.614987       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0318 13:29:56.643445       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0318 13:29:56.650085       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0318 13:29:56.769187       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0318 13:30:07.988180       1 controller.go:611] quota admission added evaluator for: endpoints
	I0318 13:30:07.992515       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [0a18efc0d9e8805a58546accabe98c68beb9d9f58ce49879407652c74dac7538] <==
	I0318 13:30:07.967041       1 shared_informer.go:262] Caches are synced for stateful set
	I0318 13:30:07.967125       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0318 13:30:07.970511       1 shared_informer.go:262] Caches are synced for cronjob
	I0318 13:30:07.970705       1 shared_informer.go:262] Caches are synced for endpoint
	I0318 13:30:07.972516       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0318 13:30:07.972635       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0318 13:30:07.972654       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0318 13:30:07.972675       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0318 13:30:07.978783       1 shared_informer.go:262] Caches are synced for HPA
	I0318 13:30:07.985657       1 shared_informer.go:262] Caches are synced for persistent volume
	I0318 13:30:07.985807       1 shared_informer.go:262] Caches are synced for taint
	I0318 13:30:07.985904       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0318 13:30:07.985982       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-251198. Assuming now as a timestamp.
	I0318 13:30:07.986012       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0318 13:30:07.986254       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0318 13:30:07.986458       1 event.go:294] "Event occurred" object="test-preload-251198" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-251198 event: Registered Node test-preload-251198 in Controller"
	I0318 13:30:08.016973       1 shared_informer.go:262] Caches are synced for disruption
	I0318 13:30:08.017129       1 disruption.go:371] Sending events to api server.
	I0318 13:30:08.026148       1 shared_informer.go:262] Caches are synced for deployment
	I0318 13:30:08.131528       1 shared_informer.go:262] Caches are synced for resource quota
	I0318 13:30:08.141094       1 shared_informer.go:262] Caches are synced for attach detach
	I0318 13:30:08.159166       1 shared_informer.go:262] Caches are synced for resource quota
	I0318 13:30:08.592521       1 shared_informer.go:262] Caches are synced for garbage collector
	I0318 13:30:08.592630       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0318 13:30:08.608612       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-proxy [ab6d7e0e9408d639b84301cc4220c8b709fe9b6178714317c0005c2cad7395d3] <==
	I0318 13:29:56.722609       1 node.go:163] Successfully retrieved node IP: 192.168.39.133
	I0318 13:29:56.722702       1 server_others.go:138] "Detected node IP" address="192.168.39.133"
	I0318 13:29:56.722761       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0318 13:29:56.760031       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0318 13:29:56.760072       1 server_others.go:206] "Using iptables Proxier"
	I0318 13:29:56.760433       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0318 13:29:56.761053       1 server.go:661] "Version info" version="v1.24.4"
	I0318 13:29:56.761091       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:29:56.762234       1 config.go:317] "Starting service config controller"
	I0318 13:29:56.762729       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0318 13:29:56.762786       1 config.go:226] "Starting endpoint slice config controller"
	I0318 13:29:56.762792       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0318 13:29:56.766081       1 config.go:444] "Starting node config controller"
	I0318 13:29:56.766121       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0318 13:29:56.863616       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0318 13:29:56.863714       1 shared_informer.go:262] Caches are synced for service config
	I0318 13:29:56.866249       1 shared_informer.go:262] Caches are synced for node config
	
	
	==> kube-scheduler [5cd35ab384b1dae50d7c292e887fcd4b238453affffe7d3f798f3a40880504b4] <==
	I0318 13:29:51.698392       1 serving.go:348] Generated self-signed cert in-memory
	W0318 13:29:54.870952       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0318 13:29:54.871315       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0318 13:29:54.871439       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0318 13:29:54.871523       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0318 13:29:54.953309       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0318 13:29:54.953359       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:29:54.962387       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0318 13:29:54.962478       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 13:29:54.963426       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0318 13:29:54.963623       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 13:29:55.063546       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 18 13:29:55 test-preload-251198 kubelet[1068]: I0318 13:29:55.251070    1068 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnmxw\" (UniqueName: \"kubernetes.io/projected/e6c53599-2148-4f78-9504-7639078fa8bf-kube-api-access-lnmxw\") pod \"kube-proxy-tt4vj\" (UID: \"e6c53599-2148-4f78-9504-7639078fa8bf\") " pod="kube-system/kube-proxy-tt4vj"
	Mar 18 13:29:55 test-preload-251198 kubelet[1068]: I0318 13:29:55.251089    1068 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxcjb\" (UniqueName: \"kubernetes.io/projected/64fb889b-e8fa-4269-9416-cb7520d79b8a-kube-api-access-kxcjb\") pod \"coredns-6d4b75cb6d-hkgt5\" (UID: \"64fb889b-e8fa-4269-9416-cb7520d79b8a\") " pod="kube-system/coredns-6d4b75cb6d-hkgt5"
	Mar 18 13:29:55 test-preload-251198 kubelet[1068]: I0318 13:29:55.251106    1068 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c5a3274c-204b-4051-9d0e-57db1f3a9c6f-tmp\") pod \"storage-provisioner\" (UID: \"c5a3274c-204b-4051-9d0e-57db1f3a9c6f\") " pod="kube-system/storage-provisioner"
	Mar 18 13:29:55 test-preload-251198 kubelet[1068]: I0318 13:29:55.251126    1068 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/64fb889b-e8fa-4269-9416-cb7520d79b8a-config-volume\") pod \"coredns-6d4b75cb6d-hkgt5\" (UID: \"64fb889b-e8fa-4269-9416-cb7520d79b8a\") " pod="kube-system/coredns-6d4b75cb6d-hkgt5"
	Mar 18 13:29:55 test-preload-251198 kubelet[1068]: I0318 13:29:55.251143    1068 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e6c53599-2148-4f78-9504-7639078fa8bf-xtables-lock\") pod \"kube-proxy-tt4vj\" (UID: \"e6c53599-2148-4f78-9504-7639078fa8bf\") " pod="kube-system/kube-proxy-tt4vj"
	Mar 18 13:29:55 test-preload-251198 kubelet[1068]: I0318 13:29:55.251158    1068 reconciler.go:159] "Reconciler: start to sync state"
	Mar 18 13:29:55 test-preload-251198 kubelet[1068]: I0318 13:29:55.605823    1068 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b58be232-7b3e-4b00-a4b2-2298f607d76a-config-volume\") pod \"b58be232-7b3e-4b00-a4b2-2298f607d76a\" (UID: \"b58be232-7b3e-4b00-a4b2-2298f607d76a\") "
	Mar 18 13:29:55 test-preload-251198 kubelet[1068]: I0318 13:29:55.605867    1068 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rgwl\" (UniqueName: \"kubernetes.io/projected/b58be232-7b3e-4b00-a4b2-2298f607d76a-kube-api-access-6rgwl\") pod \"b58be232-7b3e-4b00-a4b2-2298f607d76a\" (UID: \"b58be232-7b3e-4b00-a4b2-2298f607d76a\") "
	Mar 18 13:29:55 test-preload-251198 kubelet[1068]: E0318 13:29:55.606863    1068 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Mar 18 13:29:55 test-preload-251198 kubelet[1068]: E0318 13:29:55.606957    1068 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/64fb889b-e8fa-4269-9416-cb7520d79b8a-config-volume podName:64fb889b-e8fa-4269-9416-cb7520d79b8a nodeName:}" failed. No retries permitted until 2024-03-18 13:29:56.106932335 +0000 UTC m=+6.077388555 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/64fb889b-e8fa-4269-9416-cb7520d79b8a-config-volume") pod "coredns-6d4b75cb6d-hkgt5" (UID: "64fb889b-e8fa-4269-9416-cb7520d79b8a") : object "kube-system"/"coredns" not registered
	Mar 18 13:29:55 test-preload-251198 kubelet[1068]: W0318 13:29:55.608619    1068 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/b58be232-7b3e-4b00-a4b2-2298f607d76a/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Mar 18 13:29:55 test-preload-251198 kubelet[1068]: W0318 13:29:55.608642    1068 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/b58be232-7b3e-4b00-a4b2-2298f607d76a/volumes/kubernetes.io~projected/kube-api-access-6rgwl: clearQuota called, but quotas disabled
	Mar 18 13:29:55 test-preload-251198 kubelet[1068]: I0318 13:29:55.608816    1068 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b58be232-7b3e-4b00-a4b2-2298f607d76a-kube-api-access-6rgwl" (OuterVolumeSpecName: "kube-api-access-6rgwl") pod "b58be232-7b3e-4b00-a4b2-2298f607d76a" (UID: "b58be232-7b3e-4b00-a4b2-2298f607d76a"). InnerVolumeSpecName "kube-api-access-6rgwl". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 18 13:29:55 test-preload-251198 kubelet[1068]: I0318 13:29:55.609202    1068 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b58be232-7b3e-4b00-a4b2-2298f607d76a-config-volume" (OuterVolumeSpecName: "config-volume") pod "b58be232-7b3e-4b00-a4b2-2298f607d76a" (UID: "b58be232-7b3e-4b00-a4b2-2298f607d76a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Mar 18 13:29:55 test-preload-251198 kubelet[1068]: I0318 13:29:55.706186    1068 reconciler.go:384] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b58be232-7b3e-4b00-a4b2-2298f607d76a-config-volume\") on node \"test-preload-251198\" DevicePath \"\""
	Mar 18 13:29:55 test-preload-251198 kubelet[1068]: I0318 13:29:55.706215    1068 reconciler.go:384] "Volume detached for volume \"kube-api-access-6rgwl\" (UniqueName: \"kubernetes.io/projected/b58be232-7b3e-4b00-a4b2-2298f607d76a-kube-api-access-6rgwl\") on node \"test-preload-251198\" DevicePath \"\""
	Mar 18 13:29:56 test-preload-251198 kubelet[1068]: E0318 13:29:56.110957    1068 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Mar 18 13:29:56 test-preload-251198 kubelet[1068]: E0318 13:29:56.111048    1068 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/64fb889b-e8fa-4269-9416-cb7520d79b8a-config-volume podName:64fb889b-e8fa-4269-9416-cb7520d79b8a nodeName:}" failed. No retries permitted until 2024-03-18 13:29:57.111032865 +0000 UTC m=+7.081489081 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/64fb889b-e8fa-4269-9416-cb7520d79b8a-config-volume") pod "coredns-6d4b75cb6d-hkgt5" (UID: "64fb889b-e8fa-4269-9416-cb7520d79b8a") : object "kube-system"/"coredns" not registered
	Mar 18 13:29:56 test-preload-251198 kubelet[1068]: I0318 13:29:56.319850    1068 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=b58be232-7b3e-4b00-a4b2-2298f607d76a path="/var/lib/kubelet/pods/b58be232-7b3e-4b00-a4b2-2298f607d76a/volumes"
	Mar 18 13:29:57 test-preload-251198 kubelet[1068]: E0318 13:29:57.119009    1068 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Mar 18 13:29:57 test-preload-251198 kubelet[1068]: E0318 13:29:57.119099    1068 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/64fb889b-e8fa-4269-9416-cb7520d79b8a-config-volume podName:64fb889b-e8fa-4269-9416-cb7520d79b8a nodeName:}" failed. No retries permitted until 2024-03-18 13:29:59.119082787 +0000 UTC m=+9.089539002 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/64fb889b-e8fa-4269-9416-cb7520d79b8a-config-volume") pod "coredns-6d4b75cb6d-hkgt5" (UID: "64fb889b-e8fa-4269-9416-cb7520d79b8a") : object "kube-system"/"coredns" not registered
	Mar 18 13:29:57 test-preload-251198 kubelet[1068]: E0318 13:29:57.301321    1068 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-hkgt5" podUID=64fb889b-e8fa-4269-9416-cb7520d79b8a
	Mar 18 13:29:59 test-preload-251198 kubelet[1068]: E0318 13:29:59.132066    1068 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Mar 18 13:29:59 test-preload-251198 kubelet[1068]: E0318 13:29:59.132191    1068 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/64fb889b-e8fa-4269-9416-cb7520d79b8a-config-volume podName:64fb889b-e8fa-4269-9416-cb7520d79b8a nodeName:}" failed. No retries permitted until 2024-03-18 13:30:03.13217382 +0000 UTC m=+13.102630037 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/64fb889b-e8fa-4269-9416-cb7520d79b8a-config-volume") pod "coredns-6d4b75cb6d-hkgt5" (UID: "64fb889b-e8fa-4269-9416-cb7520d79b8a") : object "kube-system"/"coredns" not registered
	Mar 18 13:29:59 test-preload-251198 kubelet[1068]: E0318 13:29:59.301285    1068 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-hkgt5" podUID=64fb889b-e8fa-4269-9416-cb7520d79b8a
	
	
	==> storage-provisioner [aaa7bd3af1a2736cec9c1f3a792eecac94fe697b8cf0e42a64107fcf45b291e3] <==
	I0318 13:29:56.638734       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-251198 -n test-preload-251198
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-251198 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-251198" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-251198
--- FAIL: TestPreload (252.74s)

                                                
                                    
x
+
TestKubernetesUpgrade (432.09s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-599578 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-599578 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m32.566390186s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-599578] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18429
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18429-1106816/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18429-1106816/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-599578" primary control-plane node in "kubernetes-upgrade-599578" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:32:10.111457 1146896 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:32:10.111789 1146896 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:32:10.111806 1146896 out.go:304] Setting ErrFile to fd 2...
	I0318 13:32:10.111815 1146896 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:32:10.112019 1146896 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
	I0318 13:32:10.112799 1146896 out.go:298] Setting JSON to false
	I0318 13:32:10.114071 1146896 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":18877,"bootTime":1710749853,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 13:32:10.114136 1146896 start.go:139] virtualization: kvm guest
	I0318 13:32:10.116755 1146896 out.go:177] * [kubernetes-upgrade-599578] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 13:32:10.119605 1146896 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 13:32:10.118347 1146896 notify.go:220] Checking for updates...
	I0318 13:32:10.122793 1146896 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:32:10.124242 1146896 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 13:32:10.126654 1146896 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18429-1106816/.minikube
	I0318 13:32:10.128089 1146896 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 13:32:10.129685 1146896 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:32:10.131472 1146896 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:32:10.168865 1146896 out.go:177] * Using the kvm2 driver based on user configuration
	I0318 13:32:10.170265 1146896 start.go:297] selected driver: kvm2
	I0318 13:32:10.170295 1146896 start.go:901] validating driver "kvm2" against <nil>
	I0318 13:32:10.170311 1146896 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:32:10.171092 1146896 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:32:10.171238 1146896 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18429-1106816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 13:32:10.186648 1146896 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 13:32:10.186709 1146896 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 13:32:10.187031 1146896 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 13:32:10.187102 1146896 cni.go:84] Creating CNI manager for ""
	I0318 13:32:10.187124 1146896 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:32:10.187143 1146896 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 13:32:10.187272 1146896 start.go:340] cluster config:
	{Name:kubernetes-upgrade-599578 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-599578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:32:10.187405 1146896 iso.go:125] acquiring lock: {Name:mke5f9989ad60de6f54f25c411af7da9f3932a4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:32:10.189901 1146896 out.go:177] * Starting "kubernetes-upgrade-599578" primary control-plane node in "kubernetes-upgrade-599578" cluster
	I0318 13:32:10.191194 1146896 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0318 13:32:10.191250 1146896 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0318 13:32:10.191266 1146896 cache.go:56] Caching tarball of preloaded images
	I0318 13:32:10.191365 1146896 preload.go:173] Found /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 13:32:10.191382 1146896 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0318 13:32:10.191883 1146896 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/kubernetes-upgrade-599578/config.json ...
	I0318 13:32:10.191928 1146896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/kubernetes-upgrade-599578/config.json: {Name:mkf3f4c8936fe21823049ce7370757255570fc72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:32:10.192127 1146896 start.go:360] acquireMachinesLock for kubernetes-upgrade-599578: {Name:mk0b1a2e71faf079d0c16c4e1393bdff17be3dfd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:32:10.192167 1146896 start.go:364] duration metric: took 21.592µs to acquireMachinesLock for "kubernetes-upgrade-599578"
	I0318 13:32:10.192191 1146896 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-599578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-599578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 13:32:10.192264 1146896 start.go:125] createHost starting for "" (driver="kvm2")
	I0318 13:32:10.194025 1146896 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 13:32:10.194241 1146896 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:32:10.194291 1146896 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:32:10.209640 1146896 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44789
	I0318 13:32:10.210065 1146896 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:32:10.210617 1146896 main.go:141] libmachine: Using API Version  1
	I0318 13:32:10.210645 1146896 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:32:10.211006 1146896 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:32:10.211194 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetMachineName
	I0318 13:32:10.211335 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .DriverName
	I0318 13:32:10.211485 1146896 start.go:159] libmachine.API.Create for "kubernetes-upgrade-599578" (driver="kvm2")
	I0318 13:32:10.211511 1146896 client.go:168] LocalClient.Create starting
	I0318 13:32:10.211536 1146896 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem
	I0318 13:32:10.211578 1146896 main.go:141] libmachine: Decoding PEM data...
	I0318 13:32:10.211600 1146896 main.go:141] libmachine: Parsing certificate...
	I0318 13:32:10.211668 1146896 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem
	I0318 13:32:10.211686 1146896 main.go:141] libmachine: Decoding PEM data...
	I0318 13:32:10.211699 1146896 main.go:141] libmachine: Parsing certificate...
	I0318 13:32:10.211714 1146896 main.go:141] libmachine: Running pre-create checks...
	I0318 13:32:10.211728 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .PreCreateCheck
	I0318 13:32:10.212129 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetConfigRaw
	I0318 13:32:10.212839 1146896 main.go:141] libmachine: Creating machine...
	I0318 13:32:10.212864 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .Create
	I0318 13:32:10.214264 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Creating KVM machine...
	I0318 13:32:10.215397 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | found existing default KVM network
	I0318 13:32:10.216284 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | I0318 13:32:10.216142 1146954 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015aa0}
	I0318 13:32:10.216361 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | created network xml: 
	I0318 13:32:10.216383 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | <network>
	I0318 13:32:10.216403 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG |   <name>mk-kubernetes-upgrade-599578</name>
	I0318 13:32:10.216416 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG |   <dns enable='no'/>
	I0318 13:32:10.216426 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG |   
	I0318 13:32:10.216435 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0318 13:32:10.216492 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG |     <dhcp>
	I0318 13:32:10.216520 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0318 13:32:10.216532 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG |     </dhcp>
	I0318 13:32:10.216547 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG |   </ip>
	I0318 13:32:10.216557 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG |   
	I0318 13:32:10.216565 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | </network>
	I0318 13:32:10.216577 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | 
	I0318 13:32:10.221268 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | trying to create private KVM network mk-kubernetes-upgrade-599578 192.168.39.0/24...
	I0318 13:32:10.291117 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | private KVM network mk-kubernetes-upgrade-599578 192.168.39.0/24 created
	I0318 13:32:10.291145 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | I0318 13:32:10.291092 1146954 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18429-1106816/.minikube
	I0318 13:32:10.291166 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Setting up store path in /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/kubernetes-upgrade-599578 ...
	I0318 13:32:10.291186 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Building disk image from file:///home/jenkins/minikube-integration/18429-1106816/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso
	I0318 13:32:10.291430 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Downloading /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18429-1106816/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0318 13:32:10.535114 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | I0318 13:32:10.534987 1146954 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/kubernetes-upgrade-599578/id_rsa...
	I0318 13:32:10.731704 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | I0318 13:32:10.731558 1146954 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/kubernetes-upgrade-599578/kubernetes-upgrade-599578.rawdisk...
	I0318 13:32:10.731740 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | Writing magic tar header
	I0318 13:32:10.731757 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | Writing SSH key tar header
	I0318 13:32:10.731769 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | I0318 13:32:10.731680 1146954 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/kubernetes-upgrade-599578 ...
	I0318 13:32:10.731793 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/kubernetes-upgrade-599578
	I0318 13:32:10.731866 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Setting executable bit set on /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/kubernetes-upgrade-599578 (perms=drwx------)
	I0318 13:32:10.731892 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines
	I0318 13:32:10.731900 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Setting executable bit set on /home/jenkins/minikube-integration/18429-1106816/.minikube/machines (perms=drwxr-xr-x)
	I0318 13:32:10.731909 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18429-1106816/.minikube
	I0318 13:32:10.731923 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18429-1106816
	I0318 13:32:10.731931 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0318 13:32:10.731941 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | Checking permissions on dir: /home/jenkins
	I0318 13:32:10.731949 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | Checking permissions on dir: /home
	I0318 13:32:10.731957 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | Skipping /home - not owner
	I0318 13:32:10.731987 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Setting executable bit set on /home/jenkins/minikube-integration/18429-1106816/.minikube (perms=drwxr-xr-x)
	I0318 13:32:10.732019 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Setting executable bit set on /home/jenkins/minikube-integration/18429-1106816 (perms=drwxrwxr-x)
	I0318 13:32:10.732039 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0318 13:32:10.732052 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0318 13:32:10.732066 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Creating domain...
	I0318 13:32:10.733151 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) define libvirt domain using xml: 
	I0318 13:32:10.733169 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) <domain type='kvm'>
	I0318 13:32:10.733177 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578)   <name>kubernetes-upgrade-599578</name>
	I0318 13:32:10.733182 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578)   <memory unit='MiB'>2200</memory>
	I0318 13:32:10.733188 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578)   <vcpu>2</vcpu>
	I0318 13:32:10.733193 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578)   <features>
	I0318 13:32:10.733201 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578)     <acpi/>
	I0318 13:32:10.733206 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578)     <apic/>
	I0318 13:32:10.733211 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578)     <pae/>
	I0318 13:32:10.733227 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578)     
	I0318 13:32:10.733235 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578)   </features>
	I0318 13:32:10.733243 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578)   <cpu mode='host-passthrough'>
	I0318 13:32:10.733249 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578)   
	I0318 13:32:10.733256 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578)   </cpu>
	I0318 13:32:10.733277 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578)   <os>
	I0318 13:32:10.733291 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578)     <type>hvm</type>
	I0318 13:32:10.733299 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578)     <boot dev='cdrom'/>
	I0318 13:32:10.733304 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578)     <boot dev='hd'/>
	I0318 13:32:10.733313 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578)     <bootmenu enable='no'/>
	I0318 13:32:10.733320 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578)   </os>
	I0318 13:32:10.733326 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578)   <devices>
	I0318 13:32:10.733338 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578)     <disk type='file' device='cdrom'>
	I0318 13:32:10.733350 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578)       <source file='/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/kubernetes-upgrade-599578/boot2docker.iso'/>
	I0318 13:32:10.733359 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578)       <target dev='hdc' bus='scsi'/>
	I0318 13:32:10.733401 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578)       <readonly/>
	I0318 13:32:10.733428 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578)     </disk>
	I0318 13:32:10.733454 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578)     <disk type='file' device='disk'>
	I0318 13:32:10.733477 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0318 13:32:10.733496 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578)       <source file='/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/kubernetes-upgrade-599578/kubernetes-upgrade-599578.rawdisk'/>
	I0318 13:32:10.733505 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578)       <target dev='hda' bus='virtio'/>
	I0318 13:32:10.733511 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578)     </disk>
	I0318 13:32:10.733519 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578)     <interface type='network'>
	I0318 13:32:10.733525 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578)       <source network='mk-kubernetes-upgrade-599578'/>
	I0318 13:32:10.733536 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578)       <model type='virtio'/>
	I0318 13:32:10.733549 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578)     </interface>
	I0318 13:32:10.733563 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578)     <interface type='network'>
	I0318 13:32:10.733577 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578)       <source network='default'/>
	I0318 13:32:10.733589 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578)       <model type='virtio'/>
	I0318 13:32:10.733599 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578)     </interface>
	I0318 13:32:10.733606 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578)     <serial type='pty'>
	I0318 13:32:10.733612 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578)       <target port='0'/>
	I0318 13:32:10.733621 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578)     </serial>
	I0318 13:32:10.733633 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578)     <console type='pty'>
	I0318 13:32:10.733650 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578)       <target type='serial' port='0'/>
	I0318 13:32:10.733663 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578)     </console>
	I0318 13:32:10.733674 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578)     <rng model='virtio'>
	I0318 13:32:10.733688 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578)       <backend model='random'>/dev/random</backend>
	I0318 13:32:10.733697 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578)     </rng>
	I0318 13:32:10.733705 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578)     
	I0318 13:32:10.733716 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578)     
	I0318 13:32:10.733730 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578)   </devices>
	I0318 13:32:10.733744 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) </domain>
	I0318 13:32:10.733768 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) 
	I0318 13:32:10.737925 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | domain kubernetes-upgrade-599578 has defined MAC address 52:54:00:aa:28:3b in network default
	I0318 13:32:10.738507 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Ensuring networks are active...
	I0318 13:32:10.738522 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | domain kubernetes-upgrade-599578 has defined MAC address 52:54:00:cd:18:d6 in network mk-kubernetes-upgrade-599578
	I0318 13:32:10.739210 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Ensuring network default is active
	I0318 13:32:10.739628 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Ensuring network mk-kubernetes-upgrade-599578 is active
	I0318 13:32:10.740156 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Getting domain xml...
	I0318 13:32:10.740835 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Creating domain...
	I0318 13:32:11.977136 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Waiting to get IP...
	I0318 13:32:11.978047 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | domain kubernetes-upgrade-599578 has defined MAC address 52:54:00:cd:18:d6 in network mk-kubernetes-upgrade-599578
	I0318 13:32:11.978453 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | unable to find current IP address of domain kubernetes-upgrade-599578 in network mk-kubernetes-upgrade-599578
	I0318 13:32:11.978513 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | I0318 13:32:11.978444 1146954 retry.go:31] will retry after 304.595278ms: waiting for machine to come up
	I0318 13:32:12.285074 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | domain kubernetes-upgrade-599578 has defined MAC address 52:54:00:cd:18:d6 in network mk-kubernetes-upgrade-599578
	I0318 13:32:12.285558 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | unable to find current IP address of domain kubernetes-upgrade-599578 in network mk-kubernetes-upgrade-599578
	I0318 13:32:12.285590 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | I0318 13:32:12.285512 1146954 retry.go:31] will retry after 243.327856ms: waiting for machine to come up
	I0318 13:32:12.530987 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | domain kubernetes-upgrade-599578 has defined MAC address 52:54:00:cd:18:d6 in network mk-kubernetes-upgrade-599578
	I0318 13:32:12.531470 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | unable to find current IP address of domain kubernetes-upgrade-599578 in network mk-kubernetes-upgrade-599578
	I0318 13:32:12.531505 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | I0318 13:32:12.531422 1146954 retry.go:31] will retry after 394.959911ms: waiting for machine to come up
	I0318 13:32:12.928188 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | domain kubernetes-upgrade-599578 has defined MAC address 52:54:00:cd:18:d6 in network mk-kubernetes-upgrade-599578
	I0318 13:32:12.928667 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | unable to find current IP address of domain kubernetes-upgrade-599578 in network mk-kubernetes-upgrade-599578
	I0318 13:32:12.928707 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | I0318 13:32:12.928618 1146954 retry.go:31] will retry after 453.828505ms: waiting for machine to come up
	I0318 13:32:13.384203 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | domain kubernetes-upgrade-599578 has defined MAC address 52:54:00:cd:18:d6 in network mk-kubernetes-upgrade-599578
	I0318 13:32:13.384722 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | unable to find current IP address of domain kubernetes-upgrade-599578 in network mk-kubernetes-upgrade-599578
	I0318 13:32:13.384755 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | I0318 13:32:13.384675 1146954 retry.go:31] will retry after 629.98624ms: waiting for machine to come up
	I0318 13:32:14.016639 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | domain kubernetes-upgrade-599578 has defined MAC address 52:54:00:cd:18:d6 in network mk-kubernetes-upgrade-599578
	I0318 13:32:14.017032 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | unable to find current IP address of domain kubernetes-upgrade-599578 in network mk-kubernetes-upgrade-599578
	I0318 13:32:14.017057 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | I0318 13:32:14.016984 1146954 retry.go:31] will retry after 766.230315ms: waiting for machine to come up
	I0318 13:32:14.784776 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | domain kubernetes-upgrade-599578 has defined MAC address 52:54:00:cd:18:d6 in network mk-kubernetes-upgrade-599578
	I0318 13:32:14.785227 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | unable to find current IP address of domain kubernetes-upgrade-599578 in network mk-kubernetes-upgrade-599578
	I0318 13:32:14.785260 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | I0318 13:32:14.785174 1146954 retry.go:31] will retry after 813.032279ms: waiting for machine to come up
	I0318 13:32:15.599455 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | domain kubernetes-upgrade-599578 has defined MAC address 52:54:00:cd:18:d6 in network mk-kubernetes-upgrade-599578
	I0318 13:32:15.599856 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | unable to find current IP address of domain kubernetes-upgrade-599578 in network mk-kubernetes-upgrade-599578
	I0318 13:32:15.599895 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | I0318 13:32:15.599808 1146954 retry.go:31] will retry after 900.19842ms: waiting for machine to come up
	I0318 13:32:16.501600 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | domain kubernetes-upgrade-599578 has defined MAC address 52:54:00:cd:18:d6 in network mk-kubernetes-upgrade-599578
	I0318 13:32:16.502067 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | unable to find current IP address of domain kubernetes-upgrade-599578 in network mk-kubernetes-upgrade-599578
	I0318 13:32:16.502096 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | I0318 13:32:16.502001 1146954 retry.go:31] will retry after 1.336832385s: waiting for machine to come up
	I0318 13:32:17.840519 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | domain kubernetes-upgrade-599578 has defined MAC address 52:54:00:cd:18:d6 in network mk-kubernetes-upgrade-599578
	I0318 13:32:17.840941 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | unable to find current IP address of domain kubernetes-upgrade-599578 in network mk-kubernetes-upgrade-599578
	I0318 13:32:17.840971 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | I0318 13:32:17.840891 1146954 retry.go:31] will retry after 1.714842756s: waiting for machine to come up
	I0318 13:32:19.557895 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | domain kubernetes-upgrade-599578 has defined MAC address 52:54:00:cd:18:d6 in network mk-kubernetes-upgrade-599578
	I0318 13:32:19.558376 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | unable to find current IP address of domain kubernetes-upgrade-599578 in network mk-kubernetes-upgrade-599578
	I0318 13:32:19.558409 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | I0318 13:32:19.558328 1146954 retry.go:31] will retry after 2.58362854s: waiting for machine to come up
	I0318 13:32:22.144748 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | domain kubernetes-upgrade-599578 has defined MAC address 52:54:00:cd:18:d6 in network mk-kubernetes-upgrade-599578
	I0318 13:32:22.145321 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | unable to find current IP address of domain kubernetes-upgrade-599578 in network mk-kubernetes-upgrade-599578
	I0318 13:32:22.145348 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | I0318 13:32:22.145271 1146954 retry.go:31] will retry after 2.720648283s: waiting for machine to come up
	I0318 13:32:24.869149 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | domain kubernetes-upgrade-599578 has defined MAC address 52:54:00:cd:18:d6 in network mk-kubernetes-upgrade-599578
	I0318 13:32:24.869549 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | unable to find current IP address of domain kubernetes-upgrade-599578 in network mk-kubernetes-upgrade-599578
	I0318 13:32:24.869628 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | I0318 13:32:24.869524 1146954 retry.go:31] will retry after 3.368292042s: waiting for machine to come up
	I0318 13:32:28.241279 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | domain kubernetes-upgrade-599578 has defined MAC address 52:54:00:cd:18:d6 in network mk-kubernetes-upgrade-599578
	I0318 13:32:28.241651 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | unable to find current IP address of domain kubernetes-upgrade-599578 in network mk-kubernetes-upgrade-599578
	I0318 13:32:28.241677 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | I0318 13:32:28.241605 1146954 retry.go:31] will retry after 3.466108992s: waiting for machine to come up
	I0318 13:32:31.709379 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | domain kubernetes-upgrade-599578 has defined MAC address 52:54:00:cd:18:d6 in network mk-kubernetes-upgrade-599578
	I0318 13:32:31.709818 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Found IP for machine: 192.168.39.167
	I0318 13:32:31.709843 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | domain kubernetes-upgrade-599578 has current primary IP address 192.168.39.167 and MAC address 52:54:00:cd:18:d6 in network mk-kubernetes-upgrade-599578
	I0318 13:32:31.709851 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Reserving static IP address...
	I0318 13:32:31.710184 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-599578", mac: "52:54:00:cd:18:d6", ip: "192.168.39.167"} in network mk-kubernetes-upgrade-599578
	I0318 13:32:31.788406 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | Getting to WaitForSSH function...
	I0318 13:32:31.788447 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Reserved static IP address: 192.168.39.167
	I0318 13:32:31.788468 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Waiting for SSH to be available...
	I0318 13:32:31.791161 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | domain kubernetes-upgrade-599578 has defined MAC address 52:54:00:cd:18:d6 in network mk-kubernetes-upgrade-599578
	I0318 13:32:31.791577 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:18:d6", ip: ""} in network mk-kubernetes-upgrade-599578: {Iface:virbr1 ExpiryTime:2024-03-18 14:32:26 +0000 UTC Type:0 Mac:52:54:00:cd:18:d6 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:minikube Clientid:01:52:54:00:cd:18:d6}
	I0318 13:32:31.791617 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | domain kubernetes-upgrade-599578 has defined IP address 192.168.39.167 and MAC address 52:54:00:cd:18:d6 in network mk-kubernetes-upgrade-599578
	I0318 13:32:31.791808 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | Using SSH client type: external
	I0318 13:32:31.791825 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | Using SSH private key: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/kubernetes-upgrade-599578/id_rsa (-rw-------)
	I0318 13:32:31.791905 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.167 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/kubernetes-upgrade-599578/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 13:32:31.791939 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | About to run SSH command:
	I0318 13:32:31.791964 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | exit 0
	I0318 13:32:31.920433 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | SSH cmd err, output: <nil>: 
	I0318 13:32:31.920728 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) KVM machine creation complete!
	I0318 13:32:31.921038 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetConfigRaw
	I0318 13:32:31.921579 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .DriverName
	I0318 13:32:31.921750 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .DriverName
	I0318 13:32:31.921905 1146896 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0318 13:32:31.921917 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetState
	I0318 13:32:31.923068 1146896 main.go:141] libmachine: Detecting operating system of created instance...
	I0318 13:32:31.923083 1146896 main.go:141] libmachine: Waiting for SSH to be available...
	I0318 13:32:31.923092 1146896 main.go:141] libmachine: Getting to WaitForSSH function...
	I0318 13:32:31.923100 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetSSHHostname
	I0318 13:32:31.925343 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | domain kubernetes-upgrade-599578 has defined MAC address 52:54:00:cd:18:d6 in network mk-kubernetes-upgrade-599578
	I0318 13:32:31.925701 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:18:d6", ip: ""} in network mk-kubernetes-upgrade-599578: {Iface:virbr1 ExpiryTime:2024-03-18 14:32:26 +0000 UTC Type:0 Mac:52:54:00:cd:18:d6 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:kubernetes-upgrade-599578 Clientid:01:52:54:00:cd:18:d6}
	I0318 13:32:31.925731 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | domain kubernetes-upgrade-599578 has defined IP address 192.168.39.167 and MAC address 52:54:00:cd:18:d6 in network mk-kubernetes-upgrade-599578
	I0318 13:32:31.925886 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetSSHPort
	I0318 13:32:31.926077 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetSSHKeyPath
	I0318 13:32:31.926251 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetSSHKeyPath
	I0318 13:32:31.926407 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetSSHUsername
	I0318 13:32:31.926591 1146896 main.go:141] libmachine: Using SSH client type: native
	I0318 13:32:31.926855 1146896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.167 22 <nil> <nil>}
	I0318 13:32:31.926874 1146896 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0318 13:32:32.027824 1146896 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:32:32.027856 1146896 main.go:141] libmachine: Detecting the provisioner...
	I0318 13:32:32.027863 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetSSHHostname
	I0318 13:32:32.030491 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | domain kubernetes-upgrade-599578 has defined MAC address 52:54:00:cd:18:d6 in network mk-kubernetes-upgrade-599578
	I0318 13:32:32.030825 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:18:d6", ip: ""} in network mk-kubernetes-upgrade-599578: {Iface:virbr1 ExpiryTime:2024-03-18 14:32:26 +0000 UTC Type:0 Mac:52:54:00:cd:18:d6 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:kubernetes-upgrade-599578 Clientid:01:52:54:00:cd:18:d6}
	I0318 13:32:32.030862 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | domain kubernetes-upgrade-599578 has defined IP address 192.168.39.167 and MAC address 52:54:00:cd:18:d6 in network mk-kubernetes-upgrade-599578
	I0318 13:32:32.031035 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetSSHPort
	I0318 13:32:32.031236 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetSSHKeyPath
	I0318 13:32:32.031390 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetSSHKeyPath
	I0318 13:32:32.031526 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetSSHUsername
	I0318 13:32:32.031681 1146896 main.go:141] libmachine: Using SSH client type: native
	I0318 13:32:32.031869 1146896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.167 22 <nil> <nil>}
	I0318 13:32:32.031883 1146896 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0318 13:32:32.133548 1146896 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0318 13:32:32.133657 1146896 main.go:141] libmachine: found compatible host: buildroot
	I0318 13:32:32.133670 1146896 main.go:141] libmachine: Provisioning with buildroot...
	I0318 13:32:32.133681 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetMachineName
	I0318 13:32:32.133913 1146896 buildroot.go:166] provisioning hostname "kubernetes-upgrade-599578"
	I0318 13:32:32.133938 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetMachineName
	I0318 13:32:32.134156 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetSSHHostname
	I0318 13:32:32.136618 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | domain kubernetes-upgrade-599578 has defined MAC address 52:54:00:cd:18:d6 in network mk-kubernetes-upgrade-599578
	I0318 13:32:32.137040 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:18:d6", ip: ""} in network mk-kubernetes-upgrade-599578: {Iface:virbr1 ExpiryTime:2024-03-18 14:32:26 +0000 UTC Type:0 Mac:52:54:00:cd:18:d6 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:kubernetes-upgrade-599578 Clientid:01:52:54:00:cd:18:d6}
	I0318 13:32:32.137070 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | domain kubernetes-upgrade-599578 has defined IP address 192.168.39.167 and MAC address 52:54:00:cd:18:d6 in network mk-kubernetes-upgrade-599578
	I0318 13:32:32.137256 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetSSHPort
	I0318 13:32:32.137450 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetSSHKeyPath
	I0318 13:32:32.137689 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetSSHKeyPath
	I0318 13:32:32.137916 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetSSHUsername
	I0318 13:32:32.138109 1146896 main.go:141] libmachine: Using SSH client type: native
	I0318 13:32:32.138337 1146896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.167 22 <nil> <nil>}
	I0318 13:32:32.138364 1146896 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-599578 && echo "kubernetes-upgrade-599578" | sudo tee /etc/hostname
	I0318 13:32:32.258315 1146896 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-599578
	
	I0318 13:32:32.258342 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetSSHHostname
	I0318 13:32:32.261033 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | domain kubernetes-upgrade-599578 has defined MAC address 52:54:00:cd:18:d6 in network mk-kubernetes-upgrade-599578
	I0318 13:32:32.261477 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:18:d6", ip: ""} in network mk-kubernetes-upgrade-599578: {Iface:virbr1 ExpiryTime:2024-03-18 14:32:26 +0000 UTC Type:0 Mac:52:54:00:cd:18:d6 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:kubernetes-upgrade-599578 Clientid:01:52:54:00:cd:18:d6}
	I0318 13:32:32.261512 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | domain kubernetes-upgrade-599578 has defined IP address 192.168.39.167 and MAC address 52:54:00:cd:18:d6 in network mk-kubernetes-upgrade-599578
	I0318 13:32:32.261636 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetSSHPort
	I0318 13:32:32.261835 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetSSHKeyPath
	I0318 13:32:32.261957 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetSSHKeyPath
	I0318 13:32:32.262075 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetSSHUsername
	I0318 13:32:32.262208 1146896 main.go:141] libmachine: Using SSH client type: native
	I0318 13:32:32.262389 1146896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.167 22 <nil> <nil>}
	I0318 13:32:32.262406 1146896 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-599578' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-599578/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-599578' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 13:32:32.376931 1146896 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:32:32.377002 1146896 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18429-1106816/.minikube CaCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18429-1106816/.minikube}
	I0318 13:32:32.377061 1146896 buildroot.go:174] setting up certificates
	I0318 13:32:32.377076 1146896 provision.go:84] configureAuth start
	I0318 13:32:32.377092 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetMachineName
	I0318 13:32:32.377410 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetIP
	I0318 13:32:32.380061 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | domain kubernetes-upgrade-599578 has defined MAC address 52:54:00:cd:18:d6 in network mk-kubernetes-upgrade-599578
	I0318 13:32:32.380487 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:18:d6", ip: ""} in network mk-kubernetes-upgrade-599578: {Iface:virbr1 ExpiryTime:2024-03-18 14:32:26 +0000 UTC Type:0 Mac:52:54:00:cd:18:d6 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:kubernetes-upgrade-599578 Clientid:01:52:54:00:cd:18:d6}
	I0318 13:32:32.380520 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | domain kubernetes-upgrade-599578 has defined IP address 192.168.39.167 and MAC address 52:54:00:cd:18:d6 in network mk-kubernetes-upgrade-599578
	I0318 13:32:32.380628 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetSSHHostname
	I0318 13:32:32.382797 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | domain kubernetes-upgrade-599578 has defined MAC address 52:54:00:cd:18:d6 in network mk-kubernetes-upgrade-599578
	I0318 13:32:32.383169 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:18:d6", ip: ""} in network mk-kubernetes-upgrade-599578: {Iface:virbr1 ExpiryTime:2024-03-18 14:32:26 +0000 UTC Type:0 Mac:52:54:00:cd:18:d6 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:kubernetes-upgrade-599578 Clientid:01:52:54:00:cd:18:d6}
	I0318 13:32:32.383212 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | domain kubernetes-upgrade-599578 has defined IP address 192.168.39.167 and MAC address 52:54:00:cd:18:d6 in network mk-kubernetes-upgrade-599578
	I0318 13:32:32.383312 1146896 provision.go:143] copyHostCerts
	I0318 13:32:32.383390 1146896 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem, removing ...
	I0318 13:32:32.383436 1146896 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 13:32:32.383514 1146896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem (1078 bytes)
	I0318 13:32:32.383631 1146896 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem, removing ...
	I0318 13:32:32.383643 1146896 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 13:32:32.383683 1146896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem (1123 bytes)
	I0318 13:32:32.383775 1146896 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem, removing ...
	I0318 13:32:32.383785 1146896 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 13:32:32.383820 1146896 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem (1679 bytes)
	I0318 13:32:32.383904 1146896 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-599578 san=[127.0.0.1 192.168.39.167 kubernetes-upgrade-599578 localhost minikube]
	I0318 13:32:32.567721 1146896 provision.go:177] copyRemoteCerts
	I0318 13:32:32.567802 1146896 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 13:32:32.567835 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetSSHHostname
	I0318 13:32:32.570671 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | domain kubernetes-upgrade-599578 has defined MAC address 52:54:00:cd:18:d6 in network mk-kubernetes-upgrade-599578
	I0318 13:32:32.571032 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:18:d6", ip: ""} in network mk-kubernetes-upgrade-599578: {Iface:virbr1 ExpiryTime:2024-03-18 14:32:26 +0000 UTC Type:0 Mac:52:54:00:cd:18:d6 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:kubernetes-upgrade-599578 Clientid:01:52:54:00:cd:18:d6}
	I0318 13:32:32.571057 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | domain kubernetes-upgrade-599578 has defined IP address 192.168.39.167 and MAC address 52:54:00:cd:18:d6 in network mk-kubernetes-upgrade-599578
	I0318 13:32:32.571223 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetSSHPort
	I0318 13:32:32.571414 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetSSHKeyPath
	I0318 13:32:32.571554 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetSSHUsername
	I0318 13:32:32.571686 1146896 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/kubernetes-upgrade-599578/id_rsa Username:docker}
	I0318 13:32:32.653001 1146896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 13:32:32.681550 1146896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 13:32:32.707653 1146896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0318 13:32:32.736656 1146896 provision.go:87] duration metric: took 359.565227ms to configureAuth
	I0318 13:32:32.736687 1146896 buildroot.go:189] setting minikube options for container-runtime
	I0318 13:32:32.736931 1146896 config.go:182] Loaded profile config "kubernetes-upgrade-599578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0318 13:32:32.737042 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetSSHHostname
	I0318 13:32:32.740091 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | domain kubernetes-upgrade-599578 has defined MAC address 52:54:00:cd:18:d6 in network mk-kubernetes-upgrade-599578
	I0318 13:32:32.740463 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:18:d6", ip: ""} in network mk-kubernetes-upgrade-599578: {Iface:virbr1 ExpiryTime:2024-03-18 14:32:26 +0000 UTC Type:0 Mac:52:54:00:cd:18:d6 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:kubernetes-upgrade-599578 Clientid:01:52:54:00:cd:18:d6}
	I0318 13:32:32.740492 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | domain kubernetes-upgrade-599578 has defined IP address 192.168.39.167 and MAC address 52:54:00:cd:18:d6 in network mk-kubernetes-upgrade-599578
	I0318 13:32:32.740664 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetSSHPort
	I0318 13:32:32.740860 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetSSHKeyPath
	I0318 13:32:32.741048 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetSSHKeyPath
	I0318 13:32:32.741157 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetSSHUsername
	I0318 13:32:32.741325 1146896 main.go:141] libmachine: Using SSH client type: native
	I0318 13:32:32.741490 1146896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.167 22 <nil> <nil>}
	I0318 13:32:32.741507 1146896 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 13:32:33.022416 1146896 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 13:32:33.022447 1146896 main.go:141] libmachine: Checking connection to Docker...
	I0318 13:32:33.022456 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetURL
	I0318 13:32:33.023772 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | Using libvirt version 6000000
	I0318 13:32:33.025852 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | domain kubernetes-upgrade-599578 has defined MAC address 52:54:00:cd:18:d6 in network mk-kubernetes-upgrade-599578
	I0318 13:32:33.026192 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:18:d6", ip: ""} in network mk-kubernetes-upgrade-599578: {Iface:virbr1 ExpiryTime:2024-03-18 14:32:26 +0000 UTC Type:0 Mac:52:54:00:cd:18:d6 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:kubernetes-upgrade-599578 Clientid:01:52:54:00:cd:18:d6}
	I0318 13:32:33.026220 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | domain kubernetes-upgrade-599578 has defined IP address 192.168.39.167 and MAC address 52:54:00:cd:18:d6 in network mk-kubernetes-upgrade-599578
	I0318 13:32:33.026355 1146896 main.go:141] libmachine: Docker is up and running!
	I0318 13:32:33.026371 1146896 main.go:141] libmachine: Reticulating splines...
	I0318 13:32:33.026388 1146896 client.go:171] duration metric: took 22.814869578s to LocalClient.Create
	I0318 13:32:33.026420 1146896 start.go:167] duration metric: took 22.814934223s to libmachine.API.Create "kubernetes-upgrade-599578"
	I0318 13:32:33.026441 1146896 start.go:293] postStartSetup for "kubernetes-upgrade-599578" (driver="kvm2")
	I0318 13:32:33.026457 1146896 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 13:32:33.026481 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .DriverName
	I0318 13:32:33.026755 1146896 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 13:32:33.026784 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetSSHHostname
	I0318 13:32:33.028916 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | domain kubernetes-upgrade-599578 has defined MAC address 52:54:00:cd:18:d6 in network mk-kubernetes-upgrade-599578
	I0318 13:32:33.029203 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:18:d6", ip: ""} in network mk-kubernetes-upgrade-599578: {Iface:virbr1 ExpiryTime:2024-03-18 14:32:26 +0000 UTC Type:0 Mac:52:54:00:cd:18:d6 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:kubernetes-upgrade-599578 Clientid:01:52:54:00:cd:18:d6}
	I0318 13:32:33.029230 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | domain kubernetes-upgrade-599578 has defined IP address 192.168.39.167 and MAC address 52:54:00:cd:18:d6 in network mk-kubernetes-upgrade-599578
	I0318 13:32:33.029409 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetSSHPort
	I0318 13:32:33.029568 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetSSHKeyPath
	I0318 13:32:33.029737 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetSSHUsername
	I0318 13:32:33.029877 1146896 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/kubernetes-upgrade-599578/id_rsa Username:docker}
	I0318 13:32:33.115507 1146896 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 13:32:33.120303 1146896 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 13:32:33.120340 1146896 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/addons for local assets ...
	I0318 13:32:33.120421 1146896 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/files for local assets ...
	I0318 13:32:33.120540 1146896 filesync.go:149] local asset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> 11141362.pem in /etc/ssl/certs
	I0318 13:32:33.120640 1146896 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 13:32:33.130766 1146896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:32:33.156911 1146896 start.go:296] duration metric: took 130.455088ms for postStartSetup
	I0318 13:32:33.156967 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetConfigRaw
	I0318 13:32:33.157545 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetIP
	I0318 13:32:33.160171 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | domain kubernetes-upgrade-599578 has defined MAC address 52:54:00:cd:18:d6 in network mk-kubernetes-upgrade-599578
	I0318 13:32:33.160509 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:18:d6", ip: ""} in network mk-kubernetes-upgrade-599578: {Iface:virbr1 ExpiryTime:2024-03-18 14:32:26 +0000 UTC Type:0 Mac:52:54:00:cd:18:d6 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:kubernetes-upgrade-599578 Clientid:01:52:54:00:cd:18:d6}
	I0318 13:32:33.160530 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | domain kubernetes-upgrade-599578 has defined IP address 192.168.39.167 and MAC address 52:54:00:cd:18:d6 in network mk-kubernetes-upgrade-599578
	I0318 13:32:33.160764 1146896 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/kubernetes-upgrade-599578/config.json ...
	I0318 13:32:33.160945 1146896 start.go:128] duration metric: took 22.968671883s to createHost
	I0318 13:32:33.160969 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetSSHHostname
	I0318 13:32:33.163207 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | domain kubernetes-upgrade-599578 has defined MAC address 52:54:00:cd:18:d6 in network mk-kubernetes-upgrade-599578
	I0318 13:32:33.163554 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:18:d6", ip: ""} in network mk-kubernetes-upgrade-599578: {Iface:virbr1 ExpiryTime:2024-03-18 14:32:26 +0000 UTC Type:0 Mac:52:54:00:cd:18:d6 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:kubernetes-upgrade-599578 Clientid:01:52:54:00:cd:18:d6}
	I0318 13:32:33.163574 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | domain kubernetes-upgrade-599578 has defined IP address 192.168.39.167 and MAC address 52:54:00:cd:18:d6 in network mk-kubernetes-upgrade-599578
	I0318 13:32:33.163699 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetSSHPort
	I0318 13:32:33.163902 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetSSHKeyPath
	I0318 13:32:33.164063 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetSSHKeyPath
	I0318 13:32:33.164217 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetSSHUsername
	I0318 13:32:33.164390 1146896 main.go:141] libmachine: Using SSH client type: native
	I0318 13:32:33.164553 1146896 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.167 22 <nil> <nil>}
	I0318 13:32:33.164566 1146896 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 13:32:33.269885 1146896 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710768753.236237745
	
	I0318 13:32:33.269912 1146896 fix.go:216] guest clock: 1710768753.236237745
	I0318 13:32:33.269925 1146896 fix.go:229] Guest: 2024-03-18 13:32:33.236237745 +0000 UTC Remote: 2024-03-18 13:32:33.160959135 +0000 UTC m=+23.109778656 (delta=75.27861ms)
	I0318 13:32:33.269976 1146896 fix.go:200] guest clock delta is within tolerance: 75.27861ms
	I0318 13:32:33.269988 1146896 start.go:83] releasing machines lock for "kubernetes-upgrade-599578", held for 23.07780791s
	I0318 13:32:33.270025 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .DriverName
	I0318 13:32:33.270406 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetIP
	I0318 13:32:33.273633 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | domain kubernetes-upgrade-599578 has defined MAC address 52:54:00:cd:18:d6 in network mk-kubernetes-upgrade-599578
	I0318 13:32:33.274032 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:18:d6", ip: ""} in network mk-kubernetes-upgrade-599578: {Iface:virbr1 ExpiryTime:2024-03-18 14:32:26 +0000 UTC Type:0 Mac:52:54:00:cd:18:d6 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:kubernetes-upgrade-599578 Clientid:01:52:54:00:cd:18:d6}
	I0318 13:32:33.274061 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | domain kubernetes-upgrade-599578 has defined IP address 192.168.39.167 and MAC address 52:54:00:cd:18:d6 in network mk-kubernetes-upgrade-599578
	I0318 13:32:33.274382 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .DriverName
	I0318 13:32:33.274921 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .DriverName
	I0318 13:32:33.275125 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .DriverName
	I0318 13:32:33.275241 1146896 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 13:32:33.275310 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetSSHHostname
	I0318 13:32:33.275552 1146896 ssh_runner.go:195] Run: cat /version.json
	I0318 13:32:33.275584 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetSSHHostname
	I0318 13:32:33.278752 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | domain kubernetes-upgrade-599578 has defined MAC address 52:54:00:cd:18:d6 in network mk-kubernetes-upgrade-599578
	I0318 13:32:33.278941 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | domain kubernetes-upgrade-599578 has defined MAC address 52:54:00:cd:18:d6 in network mk-kubernetes-upgrade-599578
	I0318 13:32:33.279170 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:18:d6", ip: ""} in network mk-kubernetes-upgrade-599578: {Iface:virbr1 ExpiryTime:2024-03-18 14:32:26 +0000 UTC Type:0 Mac:52:54:00:cd:18:d6 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:kubernetes-upgrade-599578 Clientid:01:52:54:00:cd:18:d6}
	I0318 13:32:33.279198 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | domain kubernetes-upgrade-599578 has defined IP address 192.168.39.167 and MAC address 52:54:00:cd:18:d6 in network mk-kubernetes-upgrade-599578
	I0318 13:32:33.279277 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:18:d6", ip: ""} in network mk-kubernetes-upgrade-599578: {Iface:virbr1 ExpiryTime:2024-03-18 14:32:26 +0000 UTC Type:0 Mac:52:54:00:cd:18:d6 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:kubernetes-upgrade-599578 Clientid:01:52:54:00:cd:18:d6}
	I0318 13:32:33.279306 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | domain kubernetes-upgrade-599578 has defined IP address 192.168.39.167 and MAC address 52:54:00:cd:18:d6 in network mk-kubernetes-upgrade-599578
	I0318 13:32:33.279336 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetSSHPort
	I0318 13:32:33.279501 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetSSHPort
	I0318 13:32:33.279566 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetSSHKeyPath
	I0318 13:32:33.279628 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetSSHKeyPath
	I0318 13:32:33.279711 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetSSHUsername
	I0318 13:32:33.279769 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetSSHUsername
	I0318 13:32:33.279855 1146896 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/kubernetes-upgrade-599578/id_rsa Username:docker}
	I0318 13:32:33.279910 1146896 sshutil.go:53] new ssh client: &{IP:192.168.39.167 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/kubernetes-upgrade-599578/id_rsa Username:docker}
	I0318 13:32:33.392421 1146896 ssh_runner.go:195] Run: systemctl --version
	I0318 13:32:33.399499 1146896 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 13:32:33.576825 1146896 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 13:32:33.583706 1146896 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 13:32:33.583779 1146896 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 13:32:33.604614 1146896 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 13:32:33.604639 1146896 start.go:494] detecting cgroup driver to use...
	I0318 13:32:33.604696 1146896 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 13:32:33.626478 1146896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:32:33.643155 1146896 docker.go:217] disabling cri-docker service (if available) ...
	I0318 13:32:33.643215 1146896 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 13:32:33.657707 1146896 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 13:32:33.672044 1146896 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 13:32:33.809164 1146896 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 13:32:33.980785 1146896 docker.go:233] disabling docker service ...
	I0318 13:32:33.980843 1146896 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 13:32:34.003551 1146896 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 13:32:34.017271 1146896 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 13:32:34.161263 1146896 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 13:32:34.297320 1146896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
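	The docker/cri-docker teardown above follows a stop, disable, mask progression so that neither socket activation nor another unit's dependency can bring the Docker-based CRI back while CRI-O owns the runtime socket. A condensed sketch of the same idea, using the unit names from the log (not an exact reproduction of minikube's internal ordering):

	# Stop everything first, then prevent socket activation and mask the services outright.
	for unit in cri-docker.socket cri-docker.service docker.socket docker.service; do
	    sudo systemctl stop -f "$unit" || true
	done
	sudo systemctl disable cri-docker.socket docker.socket
	sudo systemctl mask cri-docker.service docker.service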
	I0318 13:32:34.315944 1146896 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:32:34.339419 1146896 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0318 13:32:34.339473 1146896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:32:34.351247 1146896 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 13:32:34.351314 1146896 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:32:34.362653 1146896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:32:34.373922 1146896 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:32:34.385420 1146896 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 13:32:34.397029 1146896 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 13:32:34.407492 1146896 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 13:32:34.407551 1146896 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 13:32:34.423859 1146896 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
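	The sysctl failure above is expected on a fresh guest: /proc/sys/net/bridge only exists once the br_netfilter module is loaded, so the probe failing with status 255 simply triggers the modprobe fallback before IP forwarding is enabled. A minimal sketch of that check-then-load pattern, using the same commands the log runs:

	# Probe the bridge netfilter sysctl; if the key is missing, load the module that provides it.
	if ! sudo sysctl net.bridge.bridge-nf-call-iptables >/dev/null 2>&1; then
	    sudo modprobe br_netfilter
	fi
	# Enable IPv4 forwarding so pod traffic can be routed off the node.
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"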
	I0318 13:32:34.434753 1146896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:32:34.578839 1146896 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 13:32:34.753710 1146896 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 13:32:34.753791 1146896 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 13:32:34.759239 1146896 start.go:562] Will wait 60s for crictl version
	I0318 13:32:34.759300 1146896 ssh_runner.go:195] Run: which crictl
	I0318 13:32:34.763447 1146896 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 13:32:34.805583 1146896 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 13:32:34.805682 1146896 ssh_runner.go:195] Run: crio --version
	I0318 13:32:34.844090 1146896 ssh_runner.go:195] Run: crio --version
	I0318 13:32:34.881428 1146896 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0318 13:32:34.882702 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) Calling .GetIP
	I0318 13:32:34.885576 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | domain kubernetes-upgrade-599578 has defined MAC address 52:54:00:cd:18:d6 in network mk-kubernetes-upgrade-599578
	I0318 13:32:34.886034 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cd:18:d6", ip: ""} in network mk-kubernetes-upgrade-599578: {Iface:virbr1 ExpiryTime:2024-03-18 14:32:26 +0000 UTC Type:0 Mac:52:54:00:cd:18:d6 Iaid: IPaddr:192.168.39.167 Prefix:24 Hostname:kubernetes-upgrade-599578 Clientid:01:52:54:00:cd:18:d6}
	I0318 13:32:34.886059 1146896 main.go:141] libmachine: (kubernetes-upgrade-599578) DBG | domain kubernetes-upgrade-599578 has defined IP address 192.168.39.167 and MAC address 52:54:00:cd:18:d6 in network mk-kubernetes-upgrade-599578
	I0318 13:32:34.886439 1146896 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0318 13:32:34.893541 1146896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:32:34.908811 1146896 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-599578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.20.0 ClusterName:kubernetes-upgrade-599578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.167 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 13:32:34.908913 1146896 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0318 13:32:34.908955 1146896 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:32:34.948106 1146896 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 13:32:34.948185 1146896 ssh_runner.go:195] Run: which lz4
	I0318 13:32:34.953875 1146896 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 13:32:34.961769 1146896 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 13:32:34.961811 1146896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0318 13:32:37.132531 1146896 crio.go:444] duration metric: took 2.178696001s to copy over tarball
	I0318 13:32:37.132627 1146896 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 13:32:40.067398 1146896 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.934731158s)
	I0318 13:32:40.067442 1146896 crio.go:451] duration metric: took 2.934874199s to extract the tarball
	I0318 13:32:40.067452 1146896 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 13:32:40.112271 1146896 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:32:40.250069 1146896 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 13:32:40.250099 1146896 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 13:32:40.250147 1146896 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:32:40.250165 1146896 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 13:32:40.250209 1146896 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 13:32:40.250246 1146896 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 13:32:40.250300 1146896 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0318 13:32:40.250378 1146896 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0318 13:32:40.250197 1146896 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 13:32:40.250563 1146896 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0318 13:32:40.251941 1146896 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 13:32:40.251971 1146896 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0318 13:32:40.251981 1146896 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 13:32:40.251944 1146896 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:32:40.252014 1146896 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0318 13:32:40.252028 1146896 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 13:32:40.251945 1146896 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0318 13:32:40.251951 1146896 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 13:32:40.383913 1146896 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 13:32:40.384379 1146896 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0318 13:32:40.388764 1146896 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0318 13:32:40.395475 1146896 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0318 13:32:40.416611 1146896 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0318 13:32:40.418733 1146896 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0318 13:32:40.422530 1146896 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0318 13:32:40.530792 1146896 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0318 13:32:40.530844 1146896 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 13:32:40.530905 1146896 ssh_runner.go:195] Run: which crictl
	I0318 13:32:40.530968 1146896 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0318 13:32:40.531011 1146896 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 13:32:40.531054 1146896 ssh_runner.go:195] Run: which crictl
	I0318 13:32:40.562014 1146896 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0318 13:32:40.562049 1146896 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0318 13:32:40.562071 1146896 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0318 13:32:40.562082 1146896 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0318 13:32:40.562127 1146896 ssh_runner.go:195] Run: which crictl
	I0318 13:32:40.562136 1146896 ssh_runner.go:195] Run: which crictl
	I0318 13:32:40.592700 1146896 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0318 13:32:40.592749 1146896 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 13:32:40.592781 1146896 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0318 13:32:40.592802 1146896 ssh_runner.go:195] Run: which crictl
	I0318 13:32:40.592817 1146896 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0318 13:32:40.592879 1146896 ssh_runner.go:195] Run: which crictl
	I0318 13:32:40.597064 1146896 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0318 13:32:40.597107 1146896 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 13:32:40.597125 1146896 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 13:32:40.597148 1146896 ssh_runner.go:195] Run: which crictl
	I0318 13:32:40.597150 1146896 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0318 13:32:40.597233 1146896 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0318 13:32:40.597243 1146896 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0318 13:32:40.599433 1146896 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0318 13:32:40.606971 1146896 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0318 13:32:40.745590 1146896 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0318 13:32:40.745765 1146896 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0318 13:32:40.745673 1146896 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0318 13:32:40.745673 1146896 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0318 13:32:40.745720 1146896 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0318 13:32:40.745726 1146896 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0318 13:32:40.752280 1146896 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0318 13:32:40.783072 1146896 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0318 13:32:41.143048 1146896 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:32:41.289312 1146896 cache_images.go:92] duration metric: took 1.03919733s to LoadCachedImages
	W0318 13:32:41.289417 1146896 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0318 13:32:41.289431 1146896 kubeadm.go:928] updating node { 192.168.39.167 8443 v1.20.0 crio true true} ...
	I0318 13:32:41.289581 1146896 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-599578 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.167
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-599578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 13:32:41.289673 1146896 ssh_runner.go:195] Run: crio config
	I0318 13:32:41.354800 1146896 cni.go:84] Creating CNI manager for ""
	I0318 13:32:41.354826 1146896 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:32:41.354839 1146896 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 13:32:41.354859 1146896 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.167 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-599578 NodeName:kubernetes-upgrade-599578 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.167"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.167 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0318 13:32:41.355003 1146896 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.167
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-599578"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.167
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.167"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
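	One detail worth noting in the rendered config: the cgroupDriver in the KubeletConfiguration must agree with the cgroup_manager that the sed edits earlier in this run wrote into the CRI-O config (both are cgroupfs here); if they diverge, the kubelet and the runtime disagree about cgroup ownership and pods fail to start. A quick way to confirm both sides on the node once the config has been written out to /var/tmp/minikube/kubeadm.yaml later in the run (a sketch over the files touched above, not a minikube command):

	# CRI-O side: cgroup_manager as set by the sed edits above.
	grep '^cgroup_manager' /etc/crio/crio.conf.d/02-crio.conf
	# kubelet side: cgroupDriver from the rendered kubeadm config.
	grep 'cgroupDriver' /var/tmp/minikube/kubeadm.yaml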
	
	I0318 13:32:41.355068 1146896 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0318 13:32:41.371589 1146896 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 13:32:41.371670 1146896 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 13:32:41.384638 1146896 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0318 13:32:41.404682 1146896 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 13:32:41.425002 1146896 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0318 13:32:41.452157 1146896 ssh_runner.go:195] Run: grep 192.168.39.167	control-plane.minikube.internal$ /etc/hosts
	I0318 13:32:41.456771 1146896 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.167	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
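	The /etc/hosts updates above use a rewrite-and-copy idiom rather than editing the file in place: any existing line for the name is filtered out, the fresh mapping is appended into a temp file, and the result is installed back with sudo cp, so the entry never ends up duplicated. Roughly, for the control-plane entry (same commands as in the log):

	# Replace (or add) a single hosts entry without duplicating it.
	HOSTLINE=$'192.168.39.167\tcontrol-plane.minikube.internal'
	{ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts; echo "$HOSTLINE"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts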
	I0318 13:32:41.470569 1146896 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:32:41.606223 1146896 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:32:41.626822 1146896 certs.go:68] Setting up /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/kubernetes-upgrade-599578 for IP: 192.168.39.167
	I0318 13:32:41.626853 1146896 certs.go:194] generating shared ca certs ...
	I0318 13:32:41.626876 1146896 certs.go:226] acquiring lock for ca certs: {Name:mka24dbce78b8549c3bcfe7842863cce2dcda207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:32:41.627064 1146896 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key
	I0318 13:32:41.627175 1146896 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key
	I0318 13:32:41.627197 1146896 certs.go:256] generating profile certs ...
	I0318 13:32:41.627318 1146896 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/kubernetes-upgrade-599578/client.key
	I0318 13:32:41.627338 1146896 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/kubernetes-upgrade-599578/client.crt with IP's: []
	I0318 13:32:41.749391 1146896 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/kubernetes-upgrade-599578/client.crt ...
	I0318 13:32:41.749423 1146896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/kubernetes-upgrade-599578/client.crt: {Name:mk4ad4cf710b952b9c9fe20ea85a8b0c337eab2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:32:41.749634 1146896 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/kubernetes-upgrade-599578/client.key ...
	I0318 13:32:41.749654 1146896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/kubernetes-upgrade-599578/client.key: {Name:mk40ef5ef50a4578e8d2035f944ce7d1a1af42e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:32:41.749773 1146896 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/kubernetes-upgrade-599578/apiserver.key.9a9e7365
	I0318 13:32:41.749802 1146896 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/kubernetes-upgrade-599578/apiserver.crt.9a9e7365 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.167]
	I0318 13:32:41.898336 1146896 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/kubernetes-upgrade-599578/apiserver.crt.9a9e7365 ...
	I0318 13:32:41.898368 1146896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/kubernetes-upgrade-599578/apiserver.crt.9a9e7365: {Name:mka81a220ae670146619fc37172bce6f389be239 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:32:41.898569 1146896 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/kubernetes-upgrade-599578/apiserver.key.9a9e7365 ...
	I0318 13:32:41.898589 1146896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/kubernetes-upgrade-599578/apiserver.key.9a9e7365: {Name:mk6107525ac7debb699c395f5e9c416df561c40b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:32:41.898697 1146896 certs.go:381] copying /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/kubernetes-upgrade-599578/apiserver.crt.9a9e7365 -> /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/kubernetes-upgrade-599578/apiserver.crt
	I0318 13:32:41.898830 1146896 certs.go:385] copying /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/kubernetes-upgrade-599578/apiserver.key.9a9e7365 -> /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/kubernetes-upgrade-599578/apiserver.key
	I0318 13:32:41.898920 1146896 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/kubernetes-upgrade-599578/proxy-client.key
	I0318 13:32:41.898942 1146896 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/kubernetes-upgrade-599578/proxy-client.crt with IP's: []
	I0318 13:32:42.127445 1146896 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/kubernetes-upgrade-599578/proxy-client.crt ...
	I0318 13:32:42.127478 1146896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/kubernetes-upgrade-599578/proxy-client.crt: {Name:mk0044b4d4330d5fb0cf87ed92ed3526a02086a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:32:42.127678 1146896 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/kubernetes-upgrade-599578/proxy-client.key ...
	I0318 13:32:42.127697 1146896 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/kubernetes-upgrade-599578/proxy-client.key: {Name:mkc25d750b7b08f79d54504ccd7ec18fa753ad78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:32:42.127979 1146896 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem (1338 bytes)
	W0318 13:32:42.128041 1146896 certs.go:480] ignoring /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136_empty.pem, impossibly tiny 0 bytes
	I0318 13:32:42.128058 1146896 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 13:32:42.128093 1146896 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem (1078 bytes)
	I0318 13:32:42.128125 1146896 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem (1123 bytes)
	I0318 13:32:42.128165 1146896 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem (1679 bytes)
	I0318 13:32:42.128226 1146896 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:32:42.129055 1146896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 13:32:42.158530 1146896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 13:32:42.186993 1146896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 13:32:42.215433 1146896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 13:32:42.243264 1146896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/kubernetes-upgrade-599578/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0318 13:32:42.270225 1146896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/kubernetes-upgrade-599578/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 13:32:42.297207 1146896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/kubernetes-upgrade-599578/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 13:32:42.325420 1146896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/kubernetes-upgrade-599578/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 13:32:42.353042 1146896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem --> /usr/share/ca-certificates/1114136.pem (1338 bytes)
	I0318 13:32:42.379840 1146896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /usr/share/ca-certificates/11141362.pem (1708 bytes)
	I0318 13:32:42.406952 1146896 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 13:32:42.434067 1146896 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 13:32:42.455148 1146896 ssh_runner.go:195] Run: openssl version
	I0318 13:32:42.466889 1146896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 13:32:42.489436 1146896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:32:42.497864 1146896 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:32:42.497961 1146896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:32:42.511709 1146896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 13:32:42.527013 1146896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1114136.pem && ln -fs /usr/share/ca-certificates/1114136.pem /etc/ssl/certs/1114136.pem"
	I0318 13:32:42.549281 1146896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1114136.pem
	I0318 13:32:42.556452 1146896 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:26 /usr/share/ca-certificates/1114136.pem
	I0318 13:32:42.556521 1146896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1114136.pem
	I0318 13:32:42.564700 1146896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1114136.pem /etc/ssl/certs/51391683.0"
	I0318 13:32:42.578424 1146896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11141362.pem && ln -fs /usr/share/ca-certificates/11141362.pem /etc/ssl/certs/11141362.pem"
	I0318 13:32:42.590862 1146896 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11141362.pem
	I0318 13:32:42.595745 1146896 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:26 /usr/share/ca-certificates/11141362.pem
	I0318 13:32:42.595792 1146896 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11141362.pem
	I0318 13:32:42.601849 1146896 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11141362.pem /etc/ssl/certs/3ec20f2e.0"
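	The openssl/ln pairs above install each CA using the hashed-symlink layout OpenSSL expects in /etc/ssl/certs: a certificate is looked up under the name <subject-hash>.0, where the hash is what openssl x509 -hash prints (b5213941, 51391683 and 3ec20f2e in this run). A condensed sketch of the same step for one certificate, with the file name taken from the log:

	# Install a CA so OpenSSL can find it by subject hash.
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints the subject hash, e.g. b5213941
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"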
	I0318 13:32:42.614152 1146896 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:32:42.618759 1146896 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 13:32:42.618814 1146896 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-599578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.20.0 ClusterName:kubernetes-upgrade-599578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.167 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:32:42.618945 1146896 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 13:32:42.618991 1146896 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:32:42.661746 1146896 cri.go:89] found id: ""
	I0318 13:32:42.661821 1146896 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0318 13:32:42.673218 1146896 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:32:42.684320 1146896 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:32:42.695567 1146896 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:32:42.695591 1146896 kubeadm.go:156] found existing configuration files:
	
	I0318 13:32:42.695636 1146896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:32:42.706127 1146896 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:32:42.706188 1146896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:32:42.716776 1146896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:32:42.727146 1146896 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:32:42.727197 1146896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:32:42.738425 1146896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:32:42.749611 1146896 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:32:42.749665 1146896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:32:42.760578 1146896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:32:42.770933 1146896 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:32:42.770984 1146896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:32:42.781790 1146896 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 13:32:42.900311 1146896 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 13:32:42.900628 1146896 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 13:32:43.067734 1146896 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 13:32:43.067889 1146896 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 13:32:43.068010 1146896 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 13:32:43.317988 1146896 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 13:32:43.320460 1146896 out.go:204]   - Generating certificates and keys ...
	I0318 13:32:43.320563 1146896 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 13:32:43.320655 1146896 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 13:32:43.450866 1146896 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0318 13:32:43.578790 1146896 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0318 13:32:43.708986 1146896 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0318 13:32:43.889210 1146896 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0318 13:32:44.055633 1146896 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0318 13:32:44.055843 1146896 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-599578 localhost] and IPs [192.168.39.167 127.0.0.1 ::1]
	I0318 13:32:44.218413 1146896 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0318 13:32:44.218697 1146896 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-599578 localhost] and IPs [192.168.39.167 127.0.0.1 ::1]
	I0318 13:32:44.550647 1146896 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0318 13:32:45.585081 1146896 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0318 13:32:45.685575 1146896 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0318 13:32:45.685880 1146896 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 13:32:45.829820 1146896 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 13:32:46.045415 1146896 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 13:32:46.314588 1146896 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 13:32:46.552076 1146896 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 13:32:46.574766 1146896 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 13:32:46.575990 1146896 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 13:32:46.576070 1146896 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 13:32:46.711091 1146896 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 13:32:46.713213 1146896 out.go:204]   - Booting up control plane ...
	I0318 13:32:46.713355 1146896 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 13:32:46.723324 1146896 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 13:32:46.724529 1146896 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 13:32:46.733435 1146896 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 13:32:46.739116 1146896 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 13:33:26.730902 1146896 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 13:33:26.731019 1146896 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:33:26.731259 1146896 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:33:31.731418 1146896 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:33:31.731667 1146896 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:33:41.730832 1146896 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:33:41.731104 1146896 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:34:01.730569 1146896 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:34:01.730856 1146896 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:34:41.732191 1146896 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:34:41.732486 1146896 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:34:41.732522 1146896 kubeadm.go:309] 
	I0318 13:34:41.732596 1146896 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 13:34:41.732660 1146896 kubeadm.go:309] 		timed out waiting for the condition
	I0318 13:34:41.732674 1146896 kubeadm.go:309] 
	I0318 13:34:41.732720 1146896 kubeadm.go:309] 	This error is likely caused by:
	I0318 13:34:41.732764 1146896 kubeadm.go:309] 		- The kubelet is not running
	I0318 13:34:41.732915 1146896 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 13:34:41.732925 1146896 kubeadm.go:309] 
	I0318 13:34:41.733067 1146896 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 13:34:41.733126 1146896 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 13:34:41.733182 1146896 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 13:34:41.733197 1146896 kubeadm.go:309] 
	I0318 13:34:41.733390 1146896 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 13:34:41.733525 1146896 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 13:34:41.733538 1146896 kubeadm.go:309] 
	I0318 13:34:41.733690 1146896 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 13:34:41.733820 1146896 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 13:34:41.733952 1146896 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 13:34:41.734056 1146896 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 13:34:41.734070 1146896 kubeadm.go:309] 
	I0318 13:34:41.735073 1146896 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 13:34:41.735202 1146896 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 13:34:41.735307 1146896 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0318 13:34:41.735470 1146896 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-599578 localhost] and IPs [192.168.39.167 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-599578 localhost] and IPs [192.168.39.167 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-599578 localhost] and IPs [192.168.39.167 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-599578 localhost] and IPs [192.168.39.167 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0318 13:34:41.735529 1146896 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 13:34:44.686508 1146896 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.95094137s)
	I0318 13:34:44.686593 1146896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:34:44.706226 1146896 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:34:44.721106 1146896 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:34:44.721130 1146896 kubeadm.go:156] found existing configuration files:
	
	I0318 13:34:44.721199 1146896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:34:44.736506 1146896 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:34:44.736586 1146896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:34:44.750260 1146896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:34:44.764236 1146896 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:34:44.764313 1146896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:34:44.778070 1146896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:34:44.791875 1146896 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:34:44.791956 1146896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:34:44.806009 1146896 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:34:44.819649 1146896 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:34:44.819732 1146896 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:34:44.833650 1146896 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 13:34:45.108599 1146896 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 13:36:41.774563 1146896 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 13:36:41.774688 1146896 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0318 13:36:41.776290 1146896 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 13:36:41.776417 1146896 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 13:36:41.776545 1146896 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 13:36:41.776681 1146896 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 13:36:41.776815 1146896 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 13:36:41.776912 1146896 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 13:36:41.778935 1146896 out.go:204]   - Generating certificates and keys ...
	I0318 13:36:41.779032 1146896 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 13:36:41.779116 1146896 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 13:36:41.779213 1146896 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 13:36:41.779295 1146896 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 13:36:41.779357 1146896 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 13:36:41.779400 1146896 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 13:36:41.779450 1146896 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 13:36:41.779497 1146896 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 13:36:41.779554 1146896 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 13:36:41.779614 1146896 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 13:36:41.779646 1146896 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 13:36:41.779692 1146896 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 13:36:41.779734 1146896 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 13:36:41.779776 1146896 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 13:36:41.779825 1146896 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 13:36:41.779867 1146896 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 13:36:41.779944 1146896 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 13:36:41.780009 1146896 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 13:36:41.780052 1146896 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 13:36:41.780129 1146896 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 13:36:41.781998 1146896 out.go:204]   - Booting up control plane ...
	I0318 13:36:41.782118 1146896 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 13:36:41.782218 1146896 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 13:36:41.782276 1146896 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 13:36:41.782342 1146896 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 13:36:41.782512 1146896 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 13:36:41.782574 1146896 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 13:36:41.782672 1146896 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:36:41.782933 1146896 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:36:41.782999 1146896 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:36:41.783204 1146896 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:36:41.783294 1146896 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:36:41.783510 1146896 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:36:41.783568 1146896 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:36:41.783750 1146896 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:36:41.783826 1146896 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:36:41.784058 1146896 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:36:41.784071 1146896 kubeadm.go:309] 
	I0318 13:36:41.784120 1146896 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 13:36:41.784174 1146896 kubeadm.go:309] 		timed out waiting for the condition
	I0318 13:36:41.784185 1146896 kubeadm.go:309] 
	I0318 13:36:41.784223 1146896 kubeadm.go:309] 	This error is likely caused by:
	I0318 13:36:41.784252 1146896 kubeadm.go:309] 		- The kubelet is not running
	I0318 13:36:41.784340 1146896 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 13:36:41.784348 1146896 kubeadm.go:309] 
	I0318 13:36:41.784438 1146896 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 13:36:41.784476 1146896 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 13:36:41.784506 1146896 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 13:36:41.784513 1146896 kubeadm.go:309] 
	I0318 13:36:41.784596 1146896 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 13:36:41.784664 1146896 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 13:36:41.784670 1146896 kubeadm.go:309] 
	I0318 13:36:41.784787 1146896 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 13:36:41.784898 1146896 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 13:36:41.784995 1146896 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 13:36:41.785084 1146896 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 13:36:41.785113 1146896 kubeadm.go:309] 
	I0318 13:36:41.785166 1146896 kubeadm.go:393] duration metric: took 3m59.166356216s to StartCluster
	I0318 13:36:41.785222 1146896 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:36:41.785276 1146896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:36:41.834099 1146896 cri.go:89] found id: ""
	I0318 13:36:41.834131 1146896 logs.go:276] 0 containers: []
	W0318 13:36:41.834143 1146896 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:36:41.834154 1146896 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:36:41.834224 1146896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:36:41.882480 1146896 cri.go:89] found id: ""
	I0318 13:36:41.882512 1146896 logs.go:276] 0 containers: []
	W0318 13:36:41.882522 1146896 logs.go:278] No container was found matching "etcd"
	I0318 13:36:41.882530 1146896 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:36:41.882592 1146896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:36:41.925081 1146896 cri.go:89] found id: ""
	I0318 13:36:41.925117 1146896 logs.go:276] 0 containers: []
	W0318 13:36:41.925129 1146896 logs.go:278] No container was found matching "coredns"
	I0318 13:36:41.925144 1146896 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:36:41.925210 1146896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:36:41.975523 1146896 cri.go:89] found id: ""
	I0318 13:36:41.975559 1146896 logs.go:276] 0 containers: []
	W0318 13:36:41.975570 1146896 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:36:41.975578 1146896 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:36:41.975664 1146896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:36:42.026076 1146896 cri.go:89] found id: ""
	I0318 13:36:42.026114 1146896 logs.go:276] 0 containers: []
	W0318 13:36:42.026126 1146896 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:36:42.026133 1146896 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:36:42.026205 1146896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:36:42.071010 1146896 cri.go:89] found id: ""
	I0318 13:36:42.071049 1146896 logs.go:276] 0 containers: []
	W0318 13:36:42.071061 1146896 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:36:42.071069 1146896 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:36:42.071139 1146896 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:36:42.111738 1146896 cri.go:89] found id: ""
	I0318 13:36:42.111774 1146896 logs.go:276] 0 containers: []
	W0318 13:36:42.111786 1146896 logs.go:278] No container was found matching "kindnet"
	I0318 13:36:42.111799 1146896 logs.go:123] Gathering logs for kubelet ...
	I0318 13:36:42.111816 1146896 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:36:42.170279 1146896 logs.go:123] Gathering logs for dmesg ...
	I0318 13:36:42.170331 1146896 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:36:42.191445 1146896 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:36:42.191479 1146896 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:36:42.327399 1146896 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:36:42.327433 1146896 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:36:42.327466 1146896 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:36:42.468022 1146896 logs.go:123] Gathering logs for container status ...
	I0318 13:36:42.468075 1146896 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0318 13:36:42.528116 1146896 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0318 13:36:42.528175 1146896 out.go:239] * 
	* 
	W0318 13:36:42.528250 1146896 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 13:36:42.528287 1146896 out.go:239] * 
	* 
	W0318 13:36:42.529467 1146896 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:36:42.601154 1146896 out.go:177] 
	W0318 13:36:42.603088 1146896 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 13:36:42.603168 1146896 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0318 13:36:42.603230 1146896 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0318 13:36:42.604664 1146896 out.go:177] 

                                                
                                                
** /stderr **
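The suggestion in the log above points at a kubelet cgroup-driver mismatch between the kubelet and CRI-O. A minimal retry sketch, assuming the same profile name, driver, and runtime this test already uses (all taken from the failing command line; nothing else is implied by the log), would be:

	    minikube delete -p kubernetes-upgrade-599578
	    minikube start -p kubernetes-upgrade-599578 --memory=2200 --kubernetes-version=v1.20.0 \
	        --driver=kvm2 --container-runtime=crio \
	        --extra-config=kubelet.cgroup-driver=systemd
	    # If the kubelet still fails its healthz check, inspect it on the node,
	    # as the log itself recommends:
	    minikube ssh -p kubernetes-upgrade-599578 "sudo journalctl -xeu kubelet | tail -n 100"
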
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-599578 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-599578
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-599578: (3.355098101s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-599578 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-599578 status --format={{.Host}}: exit status 7 (108.765595ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-599578 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-599578 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m23.430355474s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-599578 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-599578 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-599578 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (97.144132ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-599578] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18429
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18429-1106816/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18429-1106816/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-599578
	    minikube start -p kubernetes-upgrade-599578 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5995782 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-599578 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
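Hypothetical sketch of asserting the downgrade guard shown above: starting the existing v1.29.0-rc.2 profile with --kubernetes-version=v1.20.0 is rejected, and in this run minikube exited with status 106 (K8S_DOWNGRADE_UNSUPPORTED). Binary path and profile name come from the log; this is not the actual version_upgrade_test.go code.

	package main
	
	import (
		"errors"
		"log"
		"os/exec"
	)
	
	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "start",
			"-p", "kubernetes-upgrade-599578",
			"--memory=2200",
			"--kubernetes-version=v1.20.0",
			"--driver=kvm2",
			"--container-runtime=crio")
		out, err := cmd.CombinedOutput()
	
		// Expect a rejection: exit status 106 rather than a successful start.
		var exitErr *exec.ExitError
		if err == nil || !errors.As(err, &exitErr) || exitErr.ExitCode() != 106 {
			log.Fatalf("expected downgrade to be rejected with exit status 106, got err=%v\noutput:\n%s", err, out)
		}
		log.Printf("downgrade correctly rejected:\n%s", out)
	}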
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-599578 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-599578 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m8.62062096s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-03-18 13:39:18.341131572 +0000 UTC m=+5035.728045845
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-599578 -n kubernetes-upgrade-599578
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-599578 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-599578 logs -n 25: (1.973273155s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-990886 sudo                  | cilium-990886             | jenkins | v1.32.0 | 18 Mar 24 13:35 UTC |                     |
	|         | systemctl status containerd            |                           |         |         |                     |                     |
	|         | --all --full --no-pager                |                           |         |         |                     |                     |
	| ssh     | -p cilium-990886 sudo                  | cilium-990886             | jenkins | v1.32.0 | 18 Mar 24 13:35 UTC |                     |
	|         | systemctl cat containerd               |                           |         |         |                     |                     |
	|         | --no-pager                             |                           |         |         |                     |                     |
	| ssh     | -p cilium-990886 sudo cat              | cilium-990886             | jenkins | v1.32.0 | 18 Mar 24 13:35 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-990886 sudo cat              | cilium-990886             | jenkins | v1.32.0 | 18 Mar 24 13:35 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |         |         |                     |                     |
	| ssh     | -p cilium-990886 sudo                  | cilium-990886             | jenkins | v1.32.0 | 18 Mar 24 13:35 UTC |                     |
	|         | containerd config dump                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-990886 sudo                  | cilium-990886             | jenkins | v1.32.0 | 18 Mar 24 13:35 UTC |                     |
	|         | systemctl status crio --all            |                           |         |         |                     |                     |
	|         | --full --no-pager                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-990886 sudo                  | cilium-990886             | jenkins | v1.32.0 | 18 Mar 24 13:35 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |         |         |                     |                     |
	| ssh     | -p cilium-990886 sudo find             | cilium-990886             | jenkins | v1.32.0 | 18 Mar 24 13:35 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-990886 sudo crio             | cilium-990886             | jenkins | v1.32.0 | 18 Mar 24 13:35 UTC |                     |
	|         | config                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-990886                       | cilium-990886             | jenkins | v1.32.0 | 18 Mar 24 13:35 UTC | 18 Mar 24 13:35 UTC |
	| start   | -p cert-expiration-537883              | cert-expiration-537883    | jenkins | v1.32.0 | 18 Mar 24 13:35 UTC | 18 Mar 24 13:37 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-375732            | force-systemd-env-375732  | jenkins | v1.32.0 | 18 Mar 24 13:36 UTC | 18 Mar 24 13:36 UTC |
	| start   | -p force-systemd-flag-042940           | force-systemd-flag-042940 | jenkins | v1.32.0 | 18 Mar 24 13:36 UTC | 18 Mar 24 13:37 UTC |
	|         | --memory=2048 --force-systemd          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-599578           | kubernetes-upgrade-599578 | jenkins | v1.32.0 | 18 Mar 24 13:36 UTC | 18 Mar 24 13:36 UTC |
	| start   | -p kubernetes-upgrade-599578           | kubernetes-upgrade-599578 | jenkins | v1.32.0 | 18 Mar 24 13:36 UTC | 18 Mar 24 13:38 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2      |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-042940 ssh cat      | force-systemd-flag-042940 | jenkins | v1.32.0 | 18 Mar 24 13:37 UTC | 18 Mar 24 13:37 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-042940           | force-systemd-flag-042940 | jenkins | v1.32.0 | 18 Mar 24 13:37 UTC | 18 Mar 24 13:37 UTC |
	| start   | -p cert-options-959907                 | cert-options-959907       | jenkins | v1.32.0 | 18 Mar 24 13:37 UTC | 18 Mar 24 13:38 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p pause-760389                        | pause-760389              | jenkins | v1.32.0 | 18 Mar 24 13:37 UTC |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-599578           | kubernetes-upgrade-599578 | jenkins | v1.32.0 | 18 Mar 24 13:38 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-599578           | kubernetes-upgrade-599578 | jenkins | v1.32.0 | 18 Mar 24 13:38 UTC | 18 Mar 24 13:39 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2      |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | cert-options-959907 ssh                | cert-options-959907       | jenkins | v1.32.0 | 18 Mar 24 13:38 UTC | 18 Mar 24 13:38 UTC |
	|         | openssl x509 -text -noout -in          |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt  |                           |         |         |                     |                     |
	| ssh     | -p cert-options-959907 -- sudo         | cert-options-959907       | jenkins | v1.32.0 | 18 Mar 24 13:38 UTC | 18 Mar 24 13:38 UTC |
	|         | cat /etc/kubernetes/admin.conf         |                           |         |         |                     |                     |
	| delete  | -p cert-options-959907                 | cert-options-959907       | jenkins | v1.32.0 | 18 Mar 24 13:38 UTC | 18 Mar 24 13:38 UTC |
	| start   | -p old-k8s-version-909137              | old-k8s-version-909137    | jenkins | v1.32.0 | 18 Mar 24 13:38 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true          |                           |         |         |                     |                     |
	|         | --kvm-network=default                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                |                           |         |         |                     |                     |
	|         | --keep-context=false                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |         |         |                     |                     |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 13:38:44
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 13:38:44.059533 1154224 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:38:44.059658 1154224 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:38:44.059669 1154224 out.go:304] Setting ErrFile to fd 2...
	I0318 13:38:44.059674 1154224 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:38:44.059849 1154224 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
	I0318 13:38:44.060526 1154224 out.go:298] Setting JSON to false
	I0318 13:38:44.061596 1154224 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":19271,"bootTime":1710749853,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 13:38:44.061665 1154224 start.go:139] virtualization: kvm guest
	I0318 13:38:44.063825 1154224 out.go:177] * [old-k8s-version-909137] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 13:38:44.065303 1154224 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 13:38:44.066580 1154224 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:38:44.065340 1154224 notify.go:220] Checking for updates...
	I0318 13:38:44.069130 1154224 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 13:38:44.070522 1154224 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18429-1106816/.minikube
	I0318 13:38:44.071641 1154224 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 13:38:44.072776 1154224 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:38:44.074305 1154224 config.go:182] Loaded profile config "cert-expiration-537883": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:38:44.074430 1154224 config.go:182] Loaded profile config "kubernetes-upgrade-599578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 13:38:44.074600 1154224 config.go:182] Loaded profile config "pause-760389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:38:44.074726 1154224 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:38:44.116708 1154224 out.go:177] * Using the kvm2 driver based on user configuration
	I0318 13:38:44.118080 1154224 start.go:297] selected driver: kvm2
	I0318 13:38:44.118099 1154224 start.go:901] validating driver "kvm2" against <nil>
	I0318 13:38:44.118110 1154224 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:38:44.118820 1154224 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:38:44.118906 1154224 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18429-1106816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 13:38:44.135243 1154224 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 13:38:44.135291 1154224 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 13:38:44.135512 1154224 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:38:44.135587 1154224 cni.go:84] Creating CNI manager for ""
	I0318 13:38:44.135610 1154224 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:38:44.135623 1154224 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 13:38:44.135710 1154224 start.go:340] cluster config:
	{Name:old-k8s-version-909137 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-909137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: S
SHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:38:44.135855 1154224 iso.go:125] acquiring lock: {Name:mke5f9989ad60de6f54f25c411af7da9f3932a4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:38:44.138585 1154224 out.go:177] * Starting "old-k8s-version-909137" primary control-plane node in "old-k8s-version-909137" cluster
	I0318 13:38:44.139921 1154224 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0318 13:38:44.139977 1154224 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0318 13:38:44.139989 1154224 cache.go:56] Caching tarball of preloaded images
	I0318 13:38:44.140063 1154224 preload.go:173] Found /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 13:38:44.140076 1154224 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0318 13:38:44.140184 1154224 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/config.json ...
	I0318 13:38:44.140208 1154224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/config.json: {Name:mk778ed3e00301bfc3f00d260272d8c81e783af5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:38:44.140400 1154224 start.go:360] acquireMachinesLock for old-k8s-version-909137: {Name:mk0b1a2e71faf079d0c16c4e1393bdff17be3dfd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:38:44.140445 1154224 start.go:364] duration metric: took 26.354µs to acquireMachinesLock for "old-k8s-version-909137"
	I0318 13:38:44.140469 1154224 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-909137 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-909137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 13:38:44.140546 1154224 start.go:125] createHost starting for "" (driver="kvm2")
	I0318 13:38:39.890929 1153835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/kubernetes-upgrade-599578/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0318 13:38:39.994493 1153835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/kubernetes-upgrade-599578/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 13:38:40.108536 1153835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/kubernetes-upgrade-599578/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 13:38:40.151852 1153835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/kubernetes-upgrade-599578/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 13:38:40.189201 1153835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /usr/share/ca-certificates/11141362.pem (1708 bytes)
	I0318 13:38:40.221061 1153835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 13:38:40.257394 1153835 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem --> /usr/share/ca-certificates/1114136.pem (1338 bytes)
	I0318 13:38:40.302006 1153835 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 13:38:40.330360 1153835 ssh_runner.go:195] Run: openssl version
	I0318 13:38:40.339784 1153835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 13:38:40.359431 1153835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:38:40.367322 1153835 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:38:40.367396 1153835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:38:40.376532 1153835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 13:38:40.391707 1153835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1114136.pem && ln -fs /usr/share/ca-certificates/1114136.pem /etc/ssl/certs/1114136.pem"
	I0318 13:38:40.410598 1153835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1114136.pem
	I0318 13:38:40.418736 1153835 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:26 /usr/share/ca-certificates/1114136.pem
	I0318 13:38:40.418879 1153835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1114136.pem
	I0318 13:38:40.425999 1153835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1114136.pem /etc/ssl/certs/51391683.0"
	I0318 13:38:40.441362 1153835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11141362.pem && ln -fs /usr/share/ca-certificates/11141362.pem /etc/ssl/certs/11141362.pem"
	I0318 13:38:40.456133 1153835 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11141362.pem
	I0318 13:38:40.462028 1153835 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:26 /usr/share/ca-certificates/11141362.pem
	I0318 13:38:40.462088 1153835 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11141362.pem
	I0318 13:38:40.468946 1153835 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11141362.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 13:38:40.482068 1153835 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:38:40.487697 1153835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 13:38:40.495919 1153835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 13:38:40.503447 1153835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 13:38:40.535010 1153835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 13:38:40.558624 1153835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 13:38:40.573376 1153835 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 13:38:40.593233 1153835 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-599578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-599578 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.167 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:38:40.593375 1153835 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 13:38:40.593479 1153835 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:38:40.681265 1153835 cri.go:89] found id: "fad6e9d2f53ac3020ce0a4397bc7d7cea6ff1d3290e36afeeeb0adcdfc8d1d7f"
	I0318 13:38:40.681294 1153835 cri.go:89] found id: "433673b6385caffeb1e365357ea12204d2eb2ca31ebde5d1289784233cf87106"
	I0318 13:38:40.681300 1153835 cri.go:89] found id: "7b7a94ac13e90a60fac6d854bcfab338c54fcf51c61cf9fb5c00c6c9a6b4667c"
	I0318 13:38:40.681305 1153835 cri.go:89] found id: "70b11ba22c45fb4b2c74b65de802b825d70384a1c92cac8adb36b424324cceff"
	I0318 13:38:40.681309 1153835 cri.go:89] found id: "e83860655248fab28ff88d92626bd49f9802b2970083f1116c50b945b4ef6d63"
	I0318 13:38:40.681312 1153835 cri.go:89] found id: "96da50e7cc7db82a5110c14eccc2d482cee3219f75d3ccc87c95df83b5ee06cc"
	I0318 13:38:40.681316 1153835 cri.go:89] found id: "3451a223480bb2b0813293b9abaa913e8dc994bb2efebea0ae37b3496e98879d"
	I0318 13:38:40.681320 1153835 cri.go:89] found id: "2dcd5c7ccd146fd3fcaa0734f5c8684464505cd4c786d10034a081fcddc6350e"
	I0318 13:38:40.681323 1153835 cri.go:89] found id: "fe5bda29b7801294df4f48e239d33bf60eb80e0152b381b4f75197f73c404cf4"
	I0318 13:38:40.681331 1153835 cri.go:89] found id: "1c9722f93d1b21ad8464daa3c1925f6a6b95dc9cfc67a507f23b0f4e0b2c9f39"
	I0318 13:38:40.681335 1153835 cri.go:89] found id: "9ca53dbc33da78c8a5a1ffb9dfb052ace358fae29acd0caa619072e1dd7d6c7b"
	I0318 13:38:40.681338 1153835 cri.go:89] found id: "319c11ada473f40b240ccd2f546c0b4f0e9511ec6a11e78f5b83d7b9d208f9ba"
	I0318 13:38:40.681343 1153835 cri.go:89] found id: "c144c9a144485d5667ed2bf459b87da39d96e6622e6296755208fdc5a0a9ec6b"
	I0318 13:38:40.681346 1153835 cri.go:89] found id: "df44c53d1a02133ce5b12464b45578a289dd44aaa905a678435018905bf5d151"
	I0318 13:38:40.681351 1153835 cri.go:89] found id: "00590725bbaf005de8291f85fdd0706536d616ed2ad0fb63e93b513aee6bfb03"
	I0318 13:38:40.681355 1153835 cri.go:89] found id: ""
	I0318 13:38:40.681413 1153835 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Mar 18 13:39:19 kubernetes-upgrade-599578 crio[2239]: time="2024-03-18 13:39:19.202154867Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9f469cdc-8560-41ef-b56a-add6e18abf4a name=/runtime.v1.RuntimeService/Version
	Mar 18 13:39:19 kubernetes-upgrade-599578 crio[2239]: time="2024-03-18 13:39:19.204247888Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=316163fa-8478-410b-8315-6c34f243a1d8 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:39:19 kubernetes-upgrade-599578 crio[2239]: time="2024-03-18 13:39:19.204935891Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710769159204900465,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121256,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=316163fa-8478-410b-8315-6c34f243a1d8 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:39:19 kubernetes-upgrade-599578 crio[2239]: time="2024-03-18 13:39:19.205661200Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3405e1d6-4127-4619-8a6c-e57a31bdad2a name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:39:19 kubernetes-upgrade-599578 crio[2239]: time="2024-03-18 13:39:19.205740551Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3405e1d6-4127-4619-8a6c-e57a31bdad2a name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:39:19 kubernetes-upgrade-599578 crio[2239]: time="2024-03-18 13:39:19.206403257Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5893652f434736a8d4f973ae96269b992993855d6ade0e3f5a6ac485b0bc7d10,PodSandboxId:3c118bb53b394f852f1b0f54195153f22dc174c15696b2e5ea9c14e949381077,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710769153509825642,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rhkvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f42ec031-02b9-4c21-8147-b702445ffd7f,},Annotations:map[string]string{io.kubernetes.container.hash: 81f12439,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1acd8017fdaec301bf8f69e0fb68bb34115ea51c0bae7b51abb0f36ab372207,PodSandboxId:5c81e54c74b132eba6a1ae025c5158a4a96e3fe9b89f68b4d200ab56b8f45c6c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710769153512597277,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: e36c9d35-026f-4a88-9287-13d6e73dd79a,},Annotations:map[string]string{io.kubernetes.container.hash: 1071bcd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94a8cd83222df3fe280fcdd67626421057ba90ed96b414467588f53bf63dff28,PodSandboxId:a77ce58dbca70b675964672f4dd573b6a84f7462c9712e154b31336a52417a98,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710769153490864226,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-xrl4t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 5ec5a675-d4f6-4a56-826a-e5879af02113,},Annotations:map[string]string{io.kubernetes.container.hash: 343e9f92,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f62fccc11a119101aa9007b17da387744e8d92bfc5b04585134ed40741ded7e,PodSandboxId:84526cb440b70ce04b0ec2d1bf152e1173e68eb43daa898760b21c915ab9fd86,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt
:1710769148712652482,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-599578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 291c3e1069338e2dc64e33198cc81e01,},Annotations:map[string]string{io.kubernetes.container.hash: bdd14276,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8e1b3b0b3b47c104def2ad7a4aeab2333a8eb720010b275c1543d8da575c3fc,PodSandboxId:0168baf63c3763e79bd3eead515f044c1b4bcb65630c950b27692113ec870524,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710769148716390044,Labe
ls:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-599578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e7afdc1c7ec0fdfdfdc46e0934551c,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a227e83abce9d118ceb55dc29094f41f050fe271bf4aa41503efb3cac0f9e661,PodSandboxId:1a764b866f6df5c88a64b72ffa25c31c1593102f4228cde99eb81410883803b2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710769148688031384,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-599578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 596a38d0613e4857c8aefa6c904d542f,},Annotations:map[string]string{io.kubernetes.container.hash: 67b5ed48,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6273e0a259f8224c8ae08bae845f326b6af55bf338ecd89398d56955ca0b7e0b,PodSandboxId:6c2ac781cea46dd12d029473608801ea1f270d80573dab9b4800810ff286cecd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710769148678536611,Label
s:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-599578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d70a0b1c95da183a10e23705d8b9ce31,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50c78bf4575f4668417cbee1a5a02fafe056e10d8ad8a33b4cb5dabb4b60c7d6,PodSandboxId:044b8d9979f1769d338014ed95d199661298ebca5d99737c7a21c8fb8a20a851,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710769119063380
606,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k8fcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba0363dd-8f97-4dd6-a0fa-0cd0a825c6c9,},Annotations:map[string]string{io.kubernetes.container.hash: 6eec541,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fad6e9d2f53ac3020ce0a4397bc7d7cea6ff1d3290e36afeeeb0adcdfc8d1d7f,PodSandboxId:3c118bb53b394f852f1b0f54195153f22dc174c15696b2e5ea9c14e949381077,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710769119809051870,Labels:map[string]string{io.kubern
etes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rhkvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f42ec031-02b9-4c21-8147-b702445ffd7f,},Annotations:map[string]string{io.kubernetes.container.hash: 81f12439,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:433673b6385caffeb1e365357ea12204d2eb2ca31ebde5d1289784233cf87106,PodSandboxId:a77ce58dbca70b675964672f4dd573b6a84f7462c9712e154b31336a52417a98,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserS
pecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710769119761693369,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-xrl4t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ec5a675-d4f6-4a56-826a-e5879af02113,},Annotations:map[string]string{io.kubernetes.container.hash: 343e9f92,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b7a94ac13e90a60fac6d854bcfab338c54fcf51c61cf9fb5c00c6c9a6b4667c,PodSandboxId:0168baf63c3763e79bd3eead515f044c1b4bcb65630c950b2
7692113ec870524,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_EXITED,CreatedAt:1710769119197934891,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-599578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e7afdc1c7ec0fdfdfdc46e0934551c,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e83860655248fab28ff88d92626bd49f9802b2970083f1116c50b945b4ef6d63,PodSandboxId:1a764b866f6df5c88a64b72ffa25c31c1593102f4228cde99eb8141
0883803b2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_EXITED,CreatedAt:1710769119027557514,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-599578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 596a38d0613e4857c8aefa6c904d542f,},Annotations:map[string]string{io.kubernetes.container.hash: 67b5ed48,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70b11ba22c45fb4b2c74b65de802b825d70384a1c92cac8adb36b424324cceff,PodSandboxId:6c2ac781cea46dd12d029473608801ea1f270d80573dab9b4800810ff286c
ecd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_EXITED,CreatedAt:1710769119031659870,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-599578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d70a0b1c95da183a10e23705d8b9ce31,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3451a223480bb2b0813293b9abaa913e8dc994bb2efebea0ae37b3496e98879d,PodSandboxId:84526cb440b70ce04b0ec2d1bf152e1173e68eb4
3daa898760b21c915ab9fd86,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_EXITED,CreatedAt:1710769118794528254,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-599578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 291c3e1069338e2dc64e33198cc81e01,},Annotations:map[string]string{io.kubernetes.container.hash: bdd14276,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ca53dbc33da78c8a5a1ffb9dfb052ace358fae29acd0caa619072e1dd7d6c7b,PodSandboxId:a2b7eb248ac3618e6aac5dcb4c2f585c6ee8204bbf17db56641fbea92e59aae7,Metadata:&C
ontainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_EXITED,CreatedAt:1710769098639666638,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k8fcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba0363dd-8f97-4dd6-a0fa-0cd0a825c6c9,},Annotations:map[string]string{io.kubernetes.container.hash: 6eec541,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3405e1d6-4127-4619-8a6c-e57a31bdad2a name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:39:19 kubernetes-upgrade-599578 crio[2239]: time="2024-03-18 13:39:19.261175237Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=36cacaa1-7ef5-4977-a2d8-ea511ee81102 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:39:19 kubernetes-upgrade-599578 crio[2239]: time="2024-03-18 13:39:19.261299721Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=36cacaa1-7ef5-4977-a2d8-ea511ee81102 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:39:19 kubernetes-upgrade-599578 crio[2239]: time="2024-03-18 13:39:19.266490656Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c5888aa5-9b6c-4238-875c-3371c732602b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:39:19 kubernetes-upgrade-599578 crio[2239]: time="2024-03-18 13:39:19.267178191Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710769159267152446,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121256,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c5888aa5-9b6c-4238-875c-3371c732602b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:39:19 kubernetes-upgrade-599578 crio[2239]: time="2024-03-18 13:39:19.277869422Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ae867520-b0b6-4f1b-bb44-d05d6c35fe34 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:39:19 kubernetes-upgrade-599578 crio[2239]: time="2024-03-18 13:39:19.278174040Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ae867520-b0b6-4f1b-bb44-d05d6c35fe34 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:39:19 kubernetes-upgrade-599578 crio[2239]: time="2024-03-18 13:39:19.279873589Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5893652f434736a8d4f973ae96269b992993855d6ade0e3f5a6ac485b0bc7d10,PodSandboxId:3c118bb53b394f852f1b0f54195153f22dc174c15696b2e5ea9c14e949381077,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710769153509825642,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rhkvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f42ec031-02b9-4c21-8147-b702445ffd7f,},Annotations:map[string]string{io.kubernetes.container.hash: 81f12439,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1acd8017fdaec301bf8f69e0fb68bb34115ea51c0bae7b51abb0f36ab372207,PodSandboxId:5c81e54c74b132eba6a1ae025c5158a4a96e3fe9b89f68b4d200ab56b8f45c6c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710769153512597277,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: e36c9d35-026f-4a88-9287-13d6e73dd79a,},Annotations:map[string]string{io.kubernetes.container.hash: 1071bcd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94a8cd83222df3fe280fcdd67626421057ba90ed96b414467588f53bf63dff28,PodSandboxId:a77ce58dbca70b675964672f4dd573b6a84f7462c9712e154b31336a52417a98,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710769153490864226,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-xrl4t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 5ec5a675-d4f6-4a56-826a-e5879af02113,},Annotations:map[string]string{io.kubernetes.container.hash: 343e9f92,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f62fccc11a119101aa9007b17da387744e8d92bfc5b04585134ed40741ded7e,PodSandboxId:84526cb440b70ce04b0ec2d1bf152e1173e68eb43daa898760b21c915ab9fd86,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt
:1710769148712652482,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-599578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 291c3e1069338e2dc64e33198cc81e01,},Annotations:map[string]string{io.kubernetes.container.hash: bdd14276,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8e1b3b0b3b47c104def2ad7a4aeab2333a8eb720010b275c1543d8da575c3fc,PodSandboxId:0168baf63c3763e79bd3eead515f044c1b4bcb65630c950b27692113ec870524,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710769148716390044,Labe
ls:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-599578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e7afdc1c7ec0fdfdfdc46e0934551c,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a227e83abce9d118ceb55dc29094f41f050fe271bf4aa41503efb3cac0f9e661,PodSandboxId:1a764b866f6df5c88a64b72ffa25c31c1593102f4228cde99eb81410883803b2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710769148688031384,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-599578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 596a38d0613e4857c8aefa6c904d542f,},Annotations:map[string]string{io.kubernetes.container.hash: 67b5ed48,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6273e0a259f8224c8ae08bae845f326b6af55bf338ecd89398d56955ca0b7e0b,PodSandboxId:6c2ac781cea46dd12d029473608801ea1f270d80573dab9b4800810ff286cecd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710769148678536611,Label
s:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-599578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d70a0b1c95da183a10e23705d8b9ce31,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50c78bf4575f4668417cbee1a5a02fafe056e10d8ad8a33b4cb5dabb4b60c7d6,PodSandboxId:044b8d9979f1769d338014ed95d199661298ebca5d99737c7a21c8fb8a20a851,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710769119063380
606,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k8fcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba0363dd-8f97-4dd6-a0fa-0cd0a825c6c9,},Annotations:map[string]string{io.kubernetes.container.hash: 6eec541,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fad6e9d2f53ac3020ce0a4397bc7d7cea6ff1d3290e36afeeeb0adcdfc8d1d7f,PodSandboxId:3c118bb53b394f852f1b0f54195153f22dc174c15696b2e5ea9c14e949381077,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710769119809051870,Labels:map[string]string{io.kubern
etes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rhkvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f42ec031-02b9-4c21-8147-b702445ffd7f,},Annotations:map[string]string{io.kubernetes.container.hash: 81f12439,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:433673b6385caffeb1e365357ea12204d2eb2ca31ebde5d1289784233cf87106,PodSandboxId:a77ce58dbca70b675964672f4dd573b6a84f7462c9712e154b31336a52417a98,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserS
pecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710769119761693369,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-xrl4t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ec5a675-d4f6-4a56-826a-e5879af02113,},Annotations:map[string]string{io.kubernetes.container.hash: 343e9f92,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b7a94ac13e90a60fac6d854bcfab338c54fcf51c61cf9fb5c00c6c9a6b4667c,PodSandboxId:0168baf63c3763e79bd3eead515f044c1b4bcb65630c950b2
7692113ec870524,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_EXITED,CreatedAt:1710769119197934891,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-599578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e7afdc1c7ec0fdfdfdc46e0934551c,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e83860655248fab28ff88d92626bd49f9802b2970083f1116c50b945b4ef6d63,PodSandboxId:1a764b866f6df5c88a64b72ffa25c31c1593102f4228cde99eb8141
0883803b2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_EXITED,CreatedAt:1710769119027557514,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-599578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 596a38d0613e4857c8aefa6c904d542f,},Annotations:map[string]string{io.kubernetes.container.hash: 67b5ed48,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70b11ba22c45fb4b2c74b65de802b825d70384a1c92cac8adb36b424324cceff,PodSandboxId:6c2ac781cea46dd12d029473608801ea1f270d80573dab9b4800810ff286c
ecd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_EXITED,CreatedAt:1710769119031659870,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-599578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d70a0b1c95da183a10e23705d8b9ce31,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3451a223480bb2b0813293b9abaa913e8dc994bb2efebea0ae37b3496e98879d,PodSandboxId:84526cb440b70ce04b0ec2d1bf152e1173e68eb4
3daa898760b21c915ab9fd86,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_EXITED,CreatedAt:1710769118794528254,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-599578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 291c3e1069338e2dc64e33198cc81e01,},Annotations:map[string]string{io.kubernetes.container.hash: bdd14276,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ca53dbc33da78c8a5a1ffb9dfb052ace358fae29acd0caa619072e1dd7d6c7b,PodSandboxId:a2b7eb248ac3618e6aac5dcb4c2f585c6ee8204bbf17db56641fbea92e59aae7,Metadata:&C
ontainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_EXITED,CreatedAt:1710769098639666638,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k8fcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba0363dd-8f97-4dd6-a0fa-0cd0a825c6c9,},Annotations:map[string]string{io.kubernetes.container.hash: 6eec541,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ae867520-b0b6-4f1b-bb44-d05d6c35fe34 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:39:19 kubernetes-upgrade-599578 crio[2239]: time="2024-03-18 13:39:19.327510970Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2defdb05-1e29-40ba-ad78-2e694cbb325e name=/runtime.v1.RuntimeService/Version
	Mar 18 13:39:19 kubernetes-upgrade-599578 crio[2239]: time="2024-03-18 13:39:19.327615215Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2defdb05-1e29-40ba-ad78-2e694cbb325e name=/runtime.v1.RuntimeService/Version
	Mar 18 13:39:19 kubernetes-upgrade-599578 crio[2239]: time="2024-03-18 13:39:19.329157161Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2d338a4b-ec1e-4159-ac4d-15bbf7d6171c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:39:19 kubernetes-upgrade-599578 crio[2239]: time="2024-03-18 13:39:19.329502480Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710769159329478062,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121256,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2d338a4b-ec1e-4159-ac4d-15bbf7d6171c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:39:19 kubernetes-upgrade-599578 crio[2239]: time="2024-03-18 13:39:19.330149376Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3af95bee-aa40-42b3-ae87-db0ddc7be4f6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:39:19 kubernetes-upgrade-599578 crio[2239]: time="2024-03-18 13:39:19.330247987Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3af95bee-aa40-42b3-ae87-db0ddc7be4f6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:39:19 kubernetes-upgrade-599578 crio[2239]: time="2024-03-18 13:39:19.330566474Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5893652f434736a8d4f973ae96269b992993855d6ade0e3f5a6ac485b0bc7d10,PodSandboxId:3c118bb53b394f852f1b0f54195153f22dc174c15696b2e5ea9c14e949381077,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710769153509825642,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rhkvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f42ec031-02b9-4c21-8147-b702445ffd7f,},Annotations:map[string]string{io.kubernetes.container.hash: 81f12439,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1acd8017fdaec301bf8f69e0fb68bb34115ea51c0bae7b51abb0f36ab372207,PodSandboxId:5c81e54c74b132eba6a1ae025c5158a4a96e3fe9b89f68b4d200ab56b8f45c6c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710769153512597277,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: e36c9d35-026f-4a88-9287-13d6e73dd79a,},Annotations:map[string]string{io.kubernetes.container.hash: 1071bcd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94a8cd83222df3fe280fcdd67626421057ba90ed96b414467588f53bf63dff28,PodSandboxId:a77ce58dbca70b675964672f4dd573b6a84f7462c9712e154b31336a52417a98,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710769153490864226,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-xrl4t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 5ec5a675-d4f6-4a56-826a-e5879af02113,},Annotations:map[string]string{io.kubernetes.container.hash: 343e9f92,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f62fccc11a119101aa9007b17da387744e8d92bfc5b04585134ed40741ded7e,PodSandboxId:84526cb440b70ce04b0ec2d1bf152e1173e68eb43daa898760b21c915ab9fd86,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt
:1710769148712652482,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-599578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 291c3e1069338e2dc64e33198cc81e01,},Annotations:map[string]string{io.kubernetes.container.hash: bdd14276,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8e1b3b0b3b47c104def2ad7a4aeab2333a8eb720010b275c1543d8da575c3fc,PodSandboxId:0168baf63c3763e79bd3eead515f044c1b4bcb65630c950b27692113ec870524,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710769148716390044,Labe
ls:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-599578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e7afdc1c7ec0fdfdfdc46e0934551c,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a227e83abce9d118ceb55dc29094f41f050fe271bf4aa41503efb3cac0f9e661,PodSandboxId:1a764b866f6df5c88a64b72ffa25c31c1593102f4228cde99eb81410883803b2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710769148688031384,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-599578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 596a38d0613e4857c8aefa6c904d542f,},Annotations:map[string]string{io.kubernetes.container.hash: 67b5ed48,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6273e0a259f8224c8ae08bae845f326b6af55bf338ecd89398d56955ca0b7e0b,PodSandboxId:6c2ac781cea46dd12d029473608801ea1f270d80573dab9b4800810ff286cecd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710769148678536611,Label
s:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-599578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d70a0b1c95da183a10e23705d8b9ce31,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50c78bf4575f4668417cbee1a5a02fafe056e10d8ad8a33b4cb5dabb4b60c7d6,PodSandboxId:044b8d9979f1769d338014ed95d199661298ebca5d99737c7a21c8fb8a20a851,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710769119063380
606,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k8fcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba0363dd-8f97-4dd6-a0fa-0cd0a825c6c9,},Annotations:map[string]string{io.kubernetes.container.hash: 6eec541,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fad6e9d2f53ac3020ce0a4397bc7d7cea6ff1d3290e36afeeeb0adcdfc8d1d7f,PodSandboxId:3c118bb53b394f852f1b0f54195153f22dc174c15696b2e5ea9c14e949381077,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710769119809051870,Labels:map[string]string{io.kubern
etes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rhkvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f42ec031-02b9-4c21-8147-b702445ffd7f,},Annotations:map[string]string{io.kubernetes.container.hash: 81f12439,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:433673b6385caffeb1e365357ea12204d2eb2ca31ebde5d1289784233cf87106,PodSandboxId:a77ce58dbca70b675964672f4dd573b6a84f7462c9712e154b31336a52417a98,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserS
pecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710769119761693369,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-xrl4t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ec5a675-d4f6-4a56-826a-e5879af02113,},Annotations:map[string]string{io.kubernetes.container.hash: 343e9f92,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b7a94ac13e90a60fac6d854bcfab338c54fcf51c61cf9fb5c00c6c9a6b4667c,PodSandboxId:0168baf63c3763e79bd3eead515f044c1b4bcb65630c950b2
7692113ec870524,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_EXITED,CreatedAt:1710769119197934891,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-599578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e7afdc1c7ec0fdfdfdc46e0934551c,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e83860655248fab28ff88d92626bd49f9802b2970083f1116c50b945b4ef6d63,PodSandboxId:1a764b866f6df5c88a64b72ffa25c31c1593102f4228cde99eb8141
0883803b2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_EXITED,CreatedAt:1710769119027557514,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-599578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 596a38d0613e4857c8aefa6c904d542f,},Annotations:map[string]string{io.kubernetes.container.hash: 67b5ed48,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70b11ba22c45fb4b2c74b65de802b825d70384a1c92cac8adb36b424324cceff,PodSandboxId:6c2ac781cea46dd12d029473608801ea1f270d80573dab9b4800810ff286c
ecd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_EXITED,CreatedAt:1710769119031659870,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-599578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d70a0b1c95da183a10e23705d8b9ce31,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3451a223480bb2b0813293b9abaa913e8dc994bb2efebea0ae37b3496e98879d,PodSandboxId:84526cb440b70ce04b0ec2d1bf152e1173e68eb4
3daa898760b21c915ab9fd86,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_EXITED,CreatedAt:1710769118794528254,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-599578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 291c3e1069338e2dc64e33198cc81e01,},Annotations:map[string]string{io.kubernetes.container.hash: bdd14276,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ca53dbc33da78c8a5a1ffb9dfb052ace358fae29acd0caa619072e1dd7d6c7b,PodSandboxId:a2b7eb248ac3618e6aac5dcb4c2f585c6ee8204bbf17db56641fbea92e59aae7,Metadata:&C
ontainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_EXITED,CreatedAt:1710769098639666638,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k8fcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba0363dd-8f97-4dd6-a0fa-0cd0a825c6c9,},Annotations:map[string]string{io.kubernetes.container.hash: 6eec541,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3af95bee-aa40-42b3-ae87-db0ddc7be4f6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:39:19 kubernetes-upgrade-599578 crio[2239]: time="2024-03-18 13:39:19.492579540Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=f73a3083-b46b-48e8-aad7-8131156a759a name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 18 13:39:19 kubernetes-upgrade-599578 crio[2239]: time="2024-03-18 13:39:19.493113660Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:a77ce58dbca70b675964672f4dd573b6a84f7462c9712e154b31336a52417a98,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-xrl4t,Uid:5ec5a675-d4f6-4a56-826a-e5879af02113,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710769118714610439,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-xrl4t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ec5a675-d4f6-4a56-826a-e5879af02113,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-18T13:38:18.173531240Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3c118bb53b394f852f1b0f54195153f22dc174c15696b2e5ea9c14e949381077,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-rhkvl,Uid:f42ec031-02b9-4c21-8147-b702445ffd7f,Namespac
e:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710769118655444720,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-rhkvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f42ec031-02b9-4c21-8147-b702445ffd7f,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-18T13:38:18.136718435Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0168baf63c3763e79bd3eead515f044c1b4bcb65630c950b27692113ec870524,Metadata:&PodSandboxMetadata{Name:kube-scheduler-kubernetes-upgrade-599578,Uid:27e7afdc1c7ec0fdfdfdc46e0934551c,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1710769118460184095,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-599578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e7afdc1c7ec0fdfdfdc46e0934551c,tier: control-plane,},Ann
otations:map[string]string{kubernetes.io/config.hash: 27e7afdc1c7ec0fdfdfdc46e0934551c,kubernetes.io/config.seen: 2024-03-18T13:37:57.917005532Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5c81e54c74b132eba6a1ae025c5158a4a96e3fe9b89f68b4d200ab56b8f45c6c,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:e36c9d35-026f-4a88-9287-13d6e73dd79a,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1710769118427518685,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e36c9d35-026f-4a88-9287-13d6e73dd79a,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage
-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-03-18T13:38:17.368520753Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:044b8d9979f1769d338014ed95d199661298ebca5d99737c7a21c8fb8a20a851,Metadata:&PodSandboxMetadata{Name:kube-proxy-k8fcw,Uid:ba0363dd-8f97-4dd6-a0fa-0cd0a825c6c9,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1710769118416367267,Labels:map[string]string{controller-revision-hash: 79c5f556d9,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-k8fcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: ba0363dd-8f97-4dd6-a0fa-0cd0a825c6c9,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-18T13:38:18.072896988Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1a764b866f6df5c88a64b72ffa25c31c1593102f4228cde99eb81410883803b2,Metadata:&PodSandboxMetadata{Name:kube-apiserver-kubernetes-upgrade-599578,Uid:596a38d0613e4857c8aefa6c904d542f,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710769118402974621,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-599578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 596a38d0613e4857c8aefa6c904d542f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.167:8443,kubernetes.io/config.hash: 596a38d0613e4857c8aefa6c904d542f,kubernetes.io/config.seen: 2024-03-18T13:37:57.916999148Z,kubernetes.i
o/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6c2ac781cea46dd12d029473608801ea1f270d80573dab9b4800810ff286cecd,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-kubernetes-upgrade-599578,Uid:d70a0b1c95da183a10e23705d8b9ce31,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1710769118397669013,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-599578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d70a0b1c95da183a10e23705d8b9ce31,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d70a0b1c95da183a10e23705d8b9ce31,kubernetes.io/config.seen: 2024-03-18T13:37:57.917004009Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:84526cb440b70ce04b0ec2d1bf152e1173e68eb43daa898760b21c915ab9fd86,Metadata:&PodSandboxMetadata{Name:etcd-kubernetes-upgrade-599578,Uid:291c3e1069338e2dc64e33198cc81e01,Namespace:kube-system,Atte
mpt:1,},State:SANDBOX_READY,CreatedAt:1710769118389195400,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-kubernetes-upgrade-599578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 291c3e1069338e2dc64e33198cc81e01,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.167:2379,kubernetes.io/config.hash: 291c3e1069338e2dc64e33198cc81e01,kubernetes.io/config.seen: 2024-03-18T13:37:57.952697107Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a2b7eb248ac3618e6aac5dcb4c2f585c6ee8204bbf17db56641fbea92e59aae7,Metadata:&PodSandboxMetadata{Name:kube-proxy-k8fcw,Uid:ba0363dd-8f97-4dd6-a0fa-0cd0a825c6c9,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1710769098384306621,Labels:map[string]string{controller-revision-hash: 79c5f556d9,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-k8fcw,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: ba0363dd-8f97-4dd6-a0fa-0cd0a825c6c9,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-18T13:38:18.072896988Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=f73a3083-b46b-48e8-aad7-8131156a759a name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 18 13:39:19 kubernetes-upgrade-599578 crio[2239]: time="2024-03-18 13:39:19.494566547Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ea2a52e2-d4aa-4b8a-8b67-bebc88a8fd98 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:39:19 kubernetes-upgrade-599578 crio[2239]: time="2024-03-18 13:39:19.494689671Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ea2a52e2-d4aa-4b8a-8b67-bebc88a8fd98 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:39:19 kubernetes-upgrade-599578 crio[2239]: time="2024-03-18 13:39:19.495311576Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:5893652f434736a8d4f973ae96269b992993855d6ade0e3f5a6ac485b0bc7d10,PodSandboxId:3c118bb53b394f852f1b0f54195153f22dc174c15696b2e5ea9c14e949381077,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710769153509825642,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rhkvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f42ec031-02b9-4c21-8147-b702445ffd7f,},Annotations:map[string]string{io.kubernetes.container.hash: 81f12439,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":
53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1acd8017fdaec301bf8f69e0fb68bb34115ea51c0bae7b51abb0f36ab372207,PodSandboxId:5c81e54c74b132eba6a1ae025c5158a4a96e3fe9b89f68b4d200ab56b8f45c6c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710769153512597277,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace
: kube-system,io.kubernetes.pod.uid: e36c9d35-026f-4a88-9287-13d6e73dd79a,},Annotations:map[string]string{io.kubernetes.container.hash: 1071bcd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94a8cd83222df3fe280fcdd67626421057ba90ed96b414467588f53bf63dff28,PodSandboxId:a77ce58dbca70b675964672f4dd573b6a84f7462c9712e154b31336a52417a98,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710769153490864226,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-xrl4t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 5ec5a675-d4f6-4a56-826a-e5879af02113,},Annotations:map[string]string{io.kubernetes.container.hash: 343e9f92,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f62fccc11a119101aa9007b17da387744e8d92bfc5b04585134ed40741ded7e,PodSandboxId:84526cb440b70ce04b0ec2d1bf152e1173e68eb43daa898760b21c915ab9fd86,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt
:1710769148712652482,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-599578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 291c3e1069338e2dc64e33198cc81e01,},Annotations:map[string]string{io.kubernetes.container.hash: bdd14276,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8e1b3b0b3b47c104def2ad7a4aeab2333a8eb720010b275c1543d8da575c3fc,PodSandboxId:0168baf63c3763e79bd3eead515f044c1b4bcb65630c950b27692113ec870524,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710769148716390044,Labe
ls:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-599578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e7afdc1c7ec0fdfdfdc46e0934551c,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a227e83abce9d118ceb55dc29094f41f050fe271bf4aa41503efb3cac0f9e661,PodSandboxId:1a764b866f6df5c88a64b72ffa25c31c1593102f4228cde99eb81410883803b2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710769148688031384,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-599578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 596a38d0613e4857c8aefa6c904d542f,},Annotations:map[string]string{io.kubernetes.container.hash: 67b5ed48,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6273e0a259f8224c8ae08bae845f326b6af55bf338ecd89398d56955ca0b7e0b,PodSandboxId:6c2ac781cea46dd12d029473608801ea1f270d80573dab9b4800810ff286cecd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710769148678536611,Label
s:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-599578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d70a0b1c95da183a10e23705d8b9ce31,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50c78bf4575f4668417cbee1a5a02fafe056e10d8ad8a33b4cb5dabb4b60c7d6,PodSandboxId:044b8d9979f1769d338014ed95d199661298ebca5d99737c7a21c8fb8a20a851,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710769119063380
606,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k8fcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba0363dd-8f97-4dd6-a0fa-0cd0a825c6c9,},Annotations:map[string]string{io.kubernetes.container.hash: 6eec541,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fad6e9d2f53ac3020ce0a4397bc7d7cea6ff1d3290e36afeeeb0adcdfc8d1d7f,PodSandboxId:3c118bb53b394f852f1b0f54195153f22dc174c15696b2e5ea9c14e949381077,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710769119809051870,Labels:map[string]string{io.kubern
etes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rhkvl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f42ec031-02b9-4c21-8147-b702445ffd7f,},Annotations:map[string]string{io.kubernetes.container.hash: 81f12439,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:433673b6385caffeb1e365357ea12204d2eb2ca31ebde5d1289784233cf87106,PodSandboxId:a77ce58dbca70b675964672f4dd573b6a84f7462c9712e154b31336a52417a98,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserS
pecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710769119761693369,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-xrl4t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5ec5a675-d4f6-4a56-826a-e5879af02113,},Annotations:map[string]string{io.kubernetes.container.hash: 343e9f92,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b7a94ac13e90a60fac6d854bcfab338c54fcf51c61cf9fb5c00c6c9a6b4667c,PodSandboxId:0168baf63c3763e79bd3eead515f044c1b4bcb65630c950b2
7692113ec870524,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_EXITED,CreatedAt:1710769119197934891,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-599578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 27e7afdc1c7ec0fdfdfdc46e0934551c,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e83860655248fab28ff88d92626bd49f9802b2970083f1116c50b945b4ef6d63,PodSandboxId:1a764b866f6df5c88a64b72ffa25c31c1593102f4228cde99eb8141
0883803b2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_EXITED,CreatedAt:1710769119027557514,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-599578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 596a38d0613e4857c8aefa6c904d542f,},Annotations:map[string]string{io.kubernetes.container.hash: 67b5ed48,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70b11ba22c45fb4b2c74b65de802b825d70384a1c92cac8adb36b424324cceff,PodSandboxId:6c2ac781cea46dd12d029473608801ea1f270d80573dab9b4800810ff286c
ecd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_EXITED,CreatedAt:1710769119031659870,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-599578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d70a0b1c95da183a10e23705d8b9ce31,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3451a223480bb2b0813293b9abaa913e8dc994bb2efebea0ae37b3496e98879d,PodSandboxId:84526cb440b70ce04b0ec2d1bf152e1173e68eb4
3daa898760b21c915ab9fd86,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_EXITED,CreatedAt:1710769118794528254,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-599578,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 291c3e1069338e2dc64e33198cc81e01,},Annotations:map[string]string{io.kubernetes.container.hash: bdd14276,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ca53dbc33da78c8a5a1ffb9dfb052ace358fae29acd0caa619072e1dd7d6c7b,PodSandboxId:a2b7eb248ac3618e6aac5dcb4c2f585c6ee8204bbf17db56641fbea92e59aae7,Metadata:&C
ontainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_EXITED,CreatedAt:1710769098639666638,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k8fcw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba0363dd-8f97-4dd6-a0fa-0cd0a825c6c9,},Annotations:map[string]string{io.kubernetes.container.hash: 6eec541,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ea2a52e2-d4aa-4b8a-8b67-bebc88a8fd98 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	b1acd8017fdae       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   6 seconds ago        Exited              storage-provisioner       2                   5c81e54c74b13       storage-provisioner
	5893652f43473       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   6 seconds ago        Running             coredns                   2                   3c118bb53b394       coredns-76f75df574-rhkvl
	94a8cd83222df       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   6 seconds ago        Running             coredns                   2                   a77ce58dbca70       coredns-76f75df574-xrl4t
	a8e1b3b0b3b47       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210   10 seconds ago       Running             kube-scheduler            2                   0168baf63c376       kube-scheduler-kubernetes-upgrade-599578
	7f62fccc11a11       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   10 seconds ago       Running             etcd                      2                   84526cb440b70       etcd-kubernetes-upgrade-599578
	a227e83abce9d       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f   10 seconds ago       Running             kube-apiserver            2                   1a764b866f6df       kube-apiserver-kubernetes-upgrade-599578
	6273e0a259f82       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d   10 seconds ago       Running             kube-controller-manager   2                   6c2ac781cea46       kube-controller-manager-kubernetes-upgrade-599578
	fad6e9d2f53ac       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   39 seconds ago       Exited              coredns                   1                   3c118bb53b394       coredns-76f75df574-rhkvl
	433673b6385ca       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   39 seconds ago       Exited              coredns                   1                   a77ce58dbca70       coredns-76f75df574-xrl4t
	7b7a94ac13e90       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210   40 seconds ago       Exited              kube-scheduler            1                   0168baf63c376       kube-scheduler-kubernetes-upgrade-599578
	50c78bf4575f4       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834   40 seconds ago       Running             kube-proxy                1                   044b8d9979f17       kube-proxy-k8fcw
	70b11ba22c45f       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d   40 seconds ago       Exited              kube-controller-manager   1                   6c2ac781cea46       kube-controller-manager-kubernetes-upgrade-599578
	e83860655248f       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f   40 seconds ago       Exited              kube-apiserver            1                   1a764b866f6df       kube-apiserver-kubernetes-upgrade-599578
	3451a223480bb       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   40 seconds ago       Exited              etcd                      1                   84526cb440b70       etcd-kubernetes-upgrade-599578
	9ca53dbc33da7       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834   About a minute ago   Exited              kube-proxy                0                   a2b7eb248ac36       kube-proxy-k8fcw
	
	
	==> coredns [433673b6385caffeb1e365357ea12204d2eb2ca31ebde5d1289784233cf87106] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [5893652f434736a8d4f973ae96269b992993855d6ade0e3f5a6ac485b0bc7d10] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [94a8cd83222df3fe280fcdd67626421057ba90ed96b414467588f53bf63dff28] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [fad6e9d2f53ac3020ce0a4397bc7d7cea6ff1d3290e36afeeeb0adcdfc8d1d7f] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-599578
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-599578
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 13:38:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-599578
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 13:39:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 13:39:12 +0000   Mon, 18 Mar 2024 13:37:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 13:39:12 +0000   Mon, 18 Mar 2024 13:37:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 13:39:12 +0000   Mon, 18 Mar 2024 13:37:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 13:39:12 +0000   Mon, 18 Mar 2024 13:38:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.167
	  Hostname:    kubernetes-upgrade-599578
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 f39bedabbd714cd39591321fcb161969
	  System UUID:                f39bedab-bd71-4cd3-9591-321fcb161969
	  Boot ID:                    a5187a8c-6605-4468-9f6e-abaa74980923
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-rhkvl                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     61s
	  kube-system                 coredns-76f75df574-xrl4t                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     61s
	  kube-system                 etcd-kubernetes-upgrade-599578                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         71s
	  kube-system                 kube-apiserver-kubernetes-upgrade-599578             250m (12%)    0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-599578    200m (10%)    0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 kube-proxy-k8fcw                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-scheduler-kubernetes-upgrade-599578             100m (5%)     0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 36s                kube-proxy       
	  Normal  Starting                 60s                kube-proxy       
	  Normal  NodeAllocatableEnforced  82s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 82s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  81s (x8 over 82s)  kubelet          Node kubernetes-upgrade-599578 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     81s (x7 over 82s)  kubelet          Node kubernetes-upgrade-599578 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    81s (x8 over 82s)  kubelet          Node kubernetes-upgrade-599578 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           62s                node-controller  Node kubernetes-upgrade-599578 event: Registered Node kubernetes-upgrade-599578 in Controller
	  Normal  RegisteredNode           24s                node-controller  Node kubernetes-upgrade-599578 event: Registered Node kubernetes-upgrade-599578 in Controller
	  Normal  Starting                 11s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11s (x8 over 11s)  kubelet          Node kubernetes-upgrade-599578 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11s (x8 over 11s)  kubelet          Node kubernetes-upgrade-599578 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11s (x7 over 11s)  kubelet          Node kubernetes-upgrade-599578 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  11s                kubelet          Updated Node Allocatable limit across pods
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.365548] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.069923] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.073043] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.207027] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.178546] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.284261] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +5.375875] systemd-fstab-generator[730]: Ignoring "noauto" option for root device
	[  +0.074591] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.287800] systemd-fstab-generator[856]: Ignoring "noauto" option for root device
	[Mar18 13:38] kauditd_printk_skb: 97 callbacks suppressed
	[  +0.584506] systemd-fstab-generator[1233]: Ignoring "noauto" option for root device
	[  +9.688294] kauditd_printk_skb: 15 callbacks suppressed
	[ +17.800106] systemd-fstab-generator[2008]: Ignoring "noauto" option for root device
	[  +0.085264] kauditd_printk_skb: 64 callbacks suppressed
	[  +0.064899] systemd-fstab-generator[2020]: Ignoring "noauto" option for root device
	[  +0.212500] systemd-fstab-generator[2034]: Ignoring "noauto" option for root device
	[  +0.153946] systemd-fstab-generator[2046]: Ignoring "noauto" option for root device
	[  +0.296185] systemd-fstab-generator[2070]: Ignoring "noauto" option for root device
	[  +2.047529] systemd-fstab-generator[2654]: Ignoring "noauto" option for root device
	[  +4.251134] kauditd_printk_skb: 234 callbacks suppressed
	[Mar18 13:39] systemd-fstab-generator[3418]: Ignoring "noauto" option for root device
	[  +0.109461] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.592729] kauditd_printk_skb: 38 callbacks suppressed
	[  +3.646346] systemd-fstab-generator[3851]: Ignoring "noauto" option for root device
	
	
	==> etcd [3451a223480bb2b0813293b9abaa913e8dc994bb2efebea0ae37b3496e98879d] <==
	{"level":"info","ts":"2024-03-18T13:38:39.52951Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.167:2380"}
	{"level":"info","ts":"2024-03-18T13:38:40.876939Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"20d5e93d92ee8fac is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-18T13:38:40.877055Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"20d5e93d92ee8fac became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-18T13:38:40.87713Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"20d5e93d92ee8fac received MsgPreVoteResp from 20d5e93d92ee8fac at term 2"}
	{"level":"info","ts":"2024-03-18T13:38:40.877169Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"20d5e93d92ee8fac became candidate at term 3"}
	{"level":"info","ts":"2024-03-18T13:38:40.877194Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"20d5e93d92ee8fac received MsgVoteResp from 20d5e93d92ee8fac at term 3"}
	{"level":"info","ts":"2024-03-18T13:38:40.877224Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"20d5e93d92ee8fac became leader at term 3"}
	{"level":"info","ts":"2024-03-18T13:38:40.877259Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 20d5e93d92ee8fac elected leader 20d5e93d92ee8fac at term 3"}
	{"level":"info","ts":"2024-03-18T13:38:40.885287Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"20d5e93d92ee8fac","local-member-attributes":"{Name:kubernetes-upgrade-599578 ClientURLs:[https://192.168.39.167:2379]}","request-path":"/0/members/20d5e93d92ee8fac/attributes","cluster-id":"31f708155da0e645","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-18T13:38:40.88684Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T13:38:40.926542Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-18T13:38:40.975998Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T13:38:40.980176Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-18T13:38:40.980643Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-18T13:38:40.983711Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.167:2379"}
	{"level":"info","ts":"2024-03-18T13:39:06.077618Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-03-18T13:39:06.077678Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"kubernetes-upgrade-599578","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.167:2380"],"advertise-client-urls":["https://192.168.39.167:2379"]}
	{"level":"warn","ts":"2024-03-18T13:39:06.077872Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-18T13:39:06.077901Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-18T13:39:06.079943Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.167:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-18T13:39:06.079997Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.167:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-18T13:39:06.080077Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"20d5e93d92ee8fac","current-leader-member-id":"20d5e93d92ee8fac"}
	{"level":"info","ts":"2024-03-18T13:39:06.083233Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.167:2380"}
	{"level":"info","ts":"2024-03-18T13:39:06.083359Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.167:2380"}
	{"level":"info","ts":"2024-03-18T13:39:06.08337Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"kubernetes-upgrade-599578","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.167:2380"],"advertise-client-urls":["https://192.168.39.167:2379"]}
	
	
	==> etcd [7f62fccc11a119101aa9007b17da387744e8d92bfc5b04585134ed40741ded7e] <==
	{"level":"warn","ts":"2024-03-18T13:39:15.56199Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T13:39:14.713519Z","time spent":"848.458428ms","remote":"127.0.0.1:47286","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":1,"response size":223,"request content":"key:\"/registry/serviceaccounts/kube-system/cronjob-controller\" "}
	{"level":"warn","ts":"2024-03-18T13:39:15.561493Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"342.897796ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/storage-provisioner\" ","response":"range_response_count:1 size:4281"}
	{"level":"info","ts":"2024-03-18T13:39:15.56366Z","caller":"traceutil/trace.go:171","msg":"trace[1644519660] range","detail":"{range_begin:/registry/pods/kube-system/storage-provisioner; range_end:; response_count:1; response_revision:512; }","duration":"345.060322ms","start":"2024-03-18T13:39:15.218585Z","end":"2024-03-18T13:39:15.563645Z","steps":["trace[1644519660] 'agreement among raft nodes before linearized reading'  (duration: 342.868528ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T13:39:15.56413Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T13:39:15.218568Z","time spent":"345.471599ms","remote":"127.0.0.1:47276","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":4303,"request content":"key:\"/registry/pods/kube-system/storage-provisioner\" "}
	{"level":"info","ts":"2024-03-18T13:39:16.053442Z","caller":"traceutil/trace.go:171","msg":"trace[1474449724] transaction","detail":"{read_only:false; response_revision:513; number_of_response:1; }","duration":"478.376437ms","start":"2024-03-18T13:39:15.57505Z","end":"2024-03-18T13:39:16.053426Z","steps":["trace[1474449724] 'process raft request'  (duration: 478.14292ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T13:39:16.053586Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T13:39:15.57503Z","time spent":"478.495857ms","remote":"127.0.0.1:47276","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4375,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/storage-provisioner\" mod_revision:506 > success:<request_put:<key:\"/registry/pods/kube-system/storage-provisioner\" value_size:4321 >> failure:<request_range:<key:\"/registry/pods/kube-system/storage-provisioner\" > >"}
	{"level":"warn","ts":"2024-03-18T13:39:16.657981Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"510.058839ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10352806125343405108 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterrolebindings/system:coredns\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterrolebindings/system:coredns\" value_size:348 >> failure:<>>","response":"size:5"}
	{"level":"info","ts":"2024-03-18T13:39:16.658384Z","caller":"traceutil/trace.go:171","msg":"trace[268418361] linearizableReadLoop","detail":"{readStateIndex:539; appliedIndex:538; }","duration":"1.076667513s","start":"2024-03-18T13:39:15.58154Z","end":"2024-03-18T13:39:16.658207Z","steps":["trace[268418361] 'read index received'  (duration: 472.533539ms)","trace[268418361] 'applied index is now lower than readState.Index'  (duration: 604.133078ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-18T13:39:16.658517Z","caller":"traceutil/trace.go:171","msg":"trace[428699557] transaction","detail":"{read_only:false; number_of_response:0; response_revision:513; }","duration":"1.07711985s","start":"2024-03-18T13:39:15.581388Z","end":"2024-03-18T13:39:16.658508Z","steps":["trace[428699557] 'process raft request'  (duration: 566.483126ms)","trace[428699557] 'compare'  (duration: 510.025944ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-18T13:39:16.658643Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T13:39:15.58137Z","time spent":"1.077237768s","remote":"127.0.0.1:47422","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":27,"request content":"compare:<target:MOD key:\"/registry/clusterrolebindings/system:coredns\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterrolebindings/system:coredns\" value_size:348 >> failure:<>"}
	{"level":"warn","ts":"2024-03-18T13:39:16.658724Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T13:39:16.305467Z","time spent":"353.255474ms","remote":"127.0.0.1:47174","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"warn","ts":"2024-03-18T13:39:16.659045Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.077511941s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/token-cleaner\" ","response":"range_response_count:1 size:191"}
	{"level":"info","ts":"2024-03-18T13:39:16.659109Z","caller":"traceutil/trace.go:171","msg":"trace[1981552499] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/token-cleaner; range_end:; response_count:1; response_revision:513; }","duration":"1.077582061s","start":"2024-03-18T13:39:15.581516Z","end":"2024-03-18T13:39:16.659098Z","steps":["trace[1981552499] 'agreement among raft nodes before linearized reading'  (duration: 1.077476922s)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T13:39:16.659165Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T13:39:15.581508Z","time spent":"1.077647052s","remote":"127.0.0.1:47286","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":1,"response size":213,"request content":"key:\"/registry/serviceaccounts/kube-system/token-cleaner\" "}
	{"level":"warn","ts":"2024-03-18T13:39:16.65921Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"601.774068ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-76f75df574-rhkvl\" ","response":"range_response_count:1 size:5023"}
	{"level":"info","ts":"2024-03-18T13:39:16.659276Z","caller":"traceutil/trace.go:171","msg":"trace[1353560610] range","detail":"{range_begin:/registry/pods/kube-system/coredns-76f75df574-rhkvl; range_end:; response_count:1; response_revision:513; }","duration":"601.835492ms","start":"2024-03-18T13:39:16.057431Z","end":"2024-03-18T13:39:16.659266Z","steps":["trace[1353560610] 'agreement among raft nodes before linearized reading'  (duration: 601.745437ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T13:39:16.659304Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T13:39:16.057419Z","time spent":"601.877654ms","remote":"127.0.0.1:47276","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":1,"response size":5045,"request content":"key:\"/registry/pods/kube-system/coredns-76f75df574-rhkvl\" "}
	{"level":"info","ts":"2024-03-18T13:39:16.900734Z","caller":"traceutil/trace.go:171","msg":"trace[1043477826] transaction","detail":"{read_only:false; response_revision:516; number_of_response:1; }","duration":"222.526933ms","start":"2024-03-18T13:39:16.67819Z","end":"2024-03-18T13:39:16.900717Z","steps":["trace[1043477826] 'process raft request'  (duration: 222.469632ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T13:39:16.901391Z","caller":"traceutil/trace.go:171","msg":"trace[544989687] transaction","detail":"{read_only:false; response_revision:515; number_of_response:1; }","duration":"231.76893ms","start":"2024-03-18T13:39:16.669596Z","end":"2024-03-18T13:39:16.901365Z","steps":["trace[544989687] 'process raft request'  (duration: 223.565247ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T13:39:16.901422Z","caller":"traceutil/trace.go:171","msg":"trace[1314001809] linearizableReadLoop","detail":"{readStateIndex:542; appliedIndex:541; }","duration":"226.970871ms","start":"2024-03-18T13:39:16.674444Z","end":"2024-03-18T13:39:16.901415Z","steps":["trace[1314001809] 'read index received'  (duration: 218.837568ms)","trace[1314001809] 'applied index is now lower than readState.Index'  (duration: 8.132681ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-18T13:39:16.901495Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"233.084388ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2024-03-18T13:39:16.906196Z","caller":"traceutil/trace.go:171","msg":"trace[803288611] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:516; }","duration":"237.796258ms","start":"2024-03-18T13:39:16.668386Z","end":"2024-03-18T13:39:16.906183Z","steps":["trace[803288611] 'agreement among raft nodes before linearized reading'  (duration: 233.053931ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T13:39:16.9065Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"228.240709ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/system:coredns\" ","response":"range_response_count:1 size:415"}
	{"level":"info","ts":"2024-03-18T13:39:16.906683Z","caller":"traceutil/trace.go:171","msg":"trace[959839995] range","detail":"{range_begin:/registry/clusterrolebindings/system:coredns; range_end:; response_count:1; response_revision:516; }","duration":"228.427826ms","start":"2024-03-18T13:39:16.678243Z","end":"2024-03-18T13:39:16.906671Z","steps":["trace[959839995] 'agreement among raft nodes before linearized reading'  (duration: 228.150186ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T13:39:19.047095Z","caller":"traceutil/trace.go:171","msg":"trace[707958731] transaction","detail":"{read_only:false; response_revision:566; number_of_response:1; }","duration":"173.039369ms","start":"2024-03-18T13:39:18.874037Z","end":"2024-03-18T13:39:19.047076Z","steps":["trace[707958731] 'process raft request'  (duration: 172.92907ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:39:20 up 1 min,  0 users,  load average: 1.47, 0.54, 0.20
	Linux kubernetes-upgrade-599578 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a227e83abce9d118ceb55dc29094f41f050fe271bf4aa41503efb3cac0f9e661] <==
	Trace[542912788]:  ---"Txn call completed" 689ms (13:39:15.215)]
	Trace[542912788]: ---"Object stored in database" 689ms (13:39:15.215)
	Trace[542912788]: [692.992871ms] [692.992871ms] END
	I0318 13:39:15.563072       1 trace.go:236] Trace[289495456]: "Get" accept:application/vnd.kubernetes.protobuf, */*,audit-id:4d829404-5be4-48f8-aace-ef53580b03b9,client:192.168.39.167,api-group:,api-version:v1,name:cronjob-controller,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:serviceaccounts,scope:resource,url:/api/v1/namespaces/kube-system/serviceaccounts/cronjob-controller,user-agent:kube-controller-manager/v1.29.0 (linux/amd64) kubernetes/e4636d0/kube-controller-manager,verb:GET (18-Mar-2024 13:39:14.712) (total time: 850ms):
	Trace[289495456]: ---"About to write a response" 850ms (13:39:15.562)
	Trace[289495456]: [850.852448ms] [850.852448ms] END
	I0318 13:39:15.563614       1 trace.go:236] Trace[531204105]: "Create" accept:application/json, */*,audit-id:a9236b83-70f0-462e-a351-5f9f3f4c8e16,client:192.168.39.167,api-group:rbac.authorization.k8s.io,api-version:v1,name:,subresource:,namespace:,protocol:HTTP/2.0,resource:clusterroles,scope:resource,url:/apis/rbac.authorization.k8s.io/v1/clusterroles,user-agent:kubeadm/v1.29.0 (linux/amd64) kubernetes/e4636d0,verb:POST (18-Mar-2024 13:39:14.531) (total time: 1031ms):
	Trace[531204105]: ["Create etcd3" audit-id:a9236b83-70f0-462e-a351-5f9f3f4c8e16,key:/clusterroles/system:coredns,type:*rbac.ClusterRole,resource:clusterroles.rbac.authorization.k8s.io 1031ms (13:39:14.532)
	Trace[531204105]:  ---"Txn call succeeded" 1029ms (13:39:15.561)]
	Trace[531204105]: [1.031757304s] [1.031757304s] END
	I0318 13:39:16.660162       1 trace.go:236] Trace[1072261813]: "Get" accept:application/vnd.kubernetes.protobuf, */*,audit-id:d446fc8f-8794-442c-b542-f306e8707497,client:192.168.39.167,api-group:,api-version:v1,name:token-cleaner,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:serviceaccounts,scope:resource,url:/api/v1/namespaces/kube-system/serviceaccounts/token-cleaner,user-agent:kube-controller-manager/v1.29.0 (linux/amd64) kubernetes/e4636d0/kube-controller-manager,verb:GET (18-Mar-2024 13:39:15.573) (total time: 1086ms):
	Trace[1072261813]: ---"About to write a response" 1086ms (13:39:16.660)
	Trace[1072261813]: [1.086898678s] [1.086898678s] END
	I0318 13:39:16.661243       1 trace.go:236] Trace[1530461522]: "Get" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:1d20556d-09d6-4785-899f-2ebfd3e7c48a,client:192.168.39.167,api-group:,api-version:v1,name:coredns-76f75df574-rhkvl,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/coredns-76f75df574-rhkvl,user-agent:kubelet/v1.29.0 (linux/amd64) kubernetes/e4636d0,verb:GET (18-Mar-2024 13:39:16.056) (total time: 604ms):
	Trace[1530461522]: ---"About to write a response" 603ms (13:39:16.660)
	Trace[1530461522]: [604.21125ms] [604.21125ms] END
	I0318 13:39:16.675065       1 trace.go:236] Trace[271784428]: "Create" accept:application/json, */*,audit-id:79ac4bac-6bf7-4c89-b282-6b9a462b284c,client:192.168.39.167,api-group:rbac.authorization.k8s.io,api-version:v1,name:,subresource:,namespace:,protocol:HTTP/2.0,resource:clusterrolebindings,scope:resource,url:/apis/rbac.authorization.k8s.io/v1/clusterrolebindings,user-agent:kubeadm/v1.29.0 (linux/amd64) kubernetes/e4636d0,verb:POST (18-Mar-2024 13:39:15.578) (total time: 1096ms):
	Trace[271784428]: ["Create etcd3" audit-id:79ac4bac-6bf7-4c89-b282-6b9a462b284c,key:/clusterrolebindings/system:coredns,type:*rbac.ClusterRoleBinding,resource:clusterrolebindings.rbac.authorization.k8s.io 1095ms (13:39:15.579)
	Trace[271784428]:  ---"Txn call succeeded" 1081ms (13:39:16.660)]
	Trace[271784428]: [1.096331352s] [1.096331352s] END
	I0318 13:39:16.916327       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0318 13:39:16.936523       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0318 13:39:17.003083       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0318 13:39:17.047631       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0318 13:39:17.063023       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [e83860655248fab28ff88d92626bd49f9802b2970083f1116c50b945b4ef6d63] <==
	I0318 13:38:55.982047       1 customresource_discovery_controller.go:325] Shutting down DiscoveryController
	I0318 13:38:55.982083       1 crd_finalizer.go:278] Shutting down CRDFinalizer
	I0318 13:38:55.982094       1 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
	I0318 13:38:55.982103       1 nonstructuralschema_controller.go:204] Shutting down NonStructuralSchemaConditionController
	I0318 13:38:55.982111       1 establishing_controller.go:87] Shutting down EstablishingController
	I0318 13:38:55.982120       1 naming_controller.go:302] Shutting down NamingConditionController
	I0318 13:38:55.982129       1 controller.go:115] Shutting down OpenAPI V3 controller
	I0318 13:38:55.982165       1 controller.go:161] Shutting down OpenAPI controller
	I0318 13:38:55.982175       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
	I0318 13:38:55.982226       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0318 13:38:55.982264       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0318 13:38:55.982676       1 system_namespaces_controller.go:77] Shutting down system namespaces controller
	I0318 13:38:55.982737       1 controller.go:159] Shutting down quota evaluator
	I0318 13:38:55.982817       1 controller.go:178] quota evaluator worker shutdown
	I0318 13:38:55.981999       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0318 13:38:55.982054       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0318 13:38:55.982135       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0318 13:38:55.983513       1 controller.go:178] quota evaluator worker shutdown
	I0318 13:38:55.983560       1 controller.go:178] quota evaluator worker shutdown
	I0318 13:38:55.983570       1 controller.go:178] quota evaluator worker shutdown
	I0318 13:38:55.983577       1 controller.go:178] quota evaluator worker shutdown
	I0318 13:38:55.981924       1 available_controller.go:439] Shutting down AvailableConditionController
	I0318 13:38:55.984734       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0318 13:38:55.985885       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0318 13:38:55.981918       1 storage_flowcontrol.go:187] APF bootstrap ensurer is exiting
	
	
	==> kube-controller-manager [6273e0a259f8224c8ae08bae845f326b6af55bf338ecd89398d56955ca0b7e0b] <==
	I0318 13:39:16.930161       1 controllermanager.go:735] "Started controller" controller="serviceaccount-controller"
	I0318 13:39:16.930446       1 serviceaccounts_controller.go:111] "Starting service account controller"
	I0318 13:39:16.930488       1 shared_informer.go:311] Waiting for caches to sync for service account
	I0318 13:39:16.935062       1 controllermanager.go:735] "Started controller" controller="ttl-controller"
	I0318 13:39:16.935299       1 ttl_controller.go:124] "Starting TTL controller"
	I0318 13:39:16.935341       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0318 13:39:16.965006       1 controllermanager.go:735] "Started controller" controller="namespace-controller"
	I0318 13:39:16.965164       1 namespace_controller.go:197] "Starting namespace controller"
	I0318 13:39:16.965196       1 shared_informer.go:311] Waiting for caches to sync for namespace
	I0318 13:39:16.975229       1 controllermanager.go:735] "Started controller" controller="statefulset-controller"
	I0318 13:39:16.975436       1 stateful_set.go:161] "Starting stateful set controller"
	I0318 13:39:16.976281       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0318 13:39:16.980012       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-serving"
	I0318 13:39:16.980052       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0318 13:39:16.980074       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 13:39:16.982607       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0318 13:39:16.982859       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kubelet-client"
	I0318 13:39:16.982937       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0318 13:39:16.983039       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-kube-apiserver-client"
	I0318 13:39:16.983095       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0318 13:39:16.983215       1 certificate_controller.go:115] "Starting certificate controller" name="csrsigning-legacy-unknown"
	I0318 13:39:16.983305       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0318 13:39:16.983444       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 13:39:16.983668       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0318 13:39:16.983740       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	
	
	==> kube-controller-manager [70b11ba22c45fb4b2c74b65de802b825d70384a1c92cac8adb36b424324cceff] <==
	I0318 13:38:55.358213       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0318 13:38:55.359205       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0318 13:38:55.359322       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0318 13:38:55.361940       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0318 13:38:55.364279       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0318 13:38:55.366388       1 shared_informer.go:318] Caches are synced for ephemeral
	I0318 13:38:55.369985       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I0318 13:38:55.372256       1 shared_informer.go:318] Caches are synced for endpoint
	I0318 13:38:55.375853       1 shared_informer.go:318] Caches are synced for taint-eviction-controller
	I0318 13:38:55.382409       1 shared_informer.go:318] Caches are synced for taint
	I0318 13:38:55.382494       1 node_lifecycle_controller.go:1222] "Initializing eviction metric for zone" zone=""
	I0318 13:38:55.382574       1 node_lifecycle_controller.go:874] "Missing timestamp for Node. Assuming now as a timestamp" node="kubernetes-upgrade-599578"
	I0318 13:38:55.382630       1 node_lifecycle_controller.go:1068] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0318 13:38:55.383099       1 event.go:376] "Event occurred" object="kubernetes-upgrade-599578" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node kubernetes-upgrade-599578 event: Registered Node kubernetes-upgrade-599578 in Controller"
	I0318 13:38:55.384379       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0318 13:38:55.411581       1 shared_informer.go:318] Caches are synced for attach detach
	I0318 13:38:55.455261       1 shared_informer.go:318] Caches are synced for job
	I0318 13:38:55.458184       1 shared_informer.go:318] Caches are synced for TTL after finished
	I0318 13:38:55.466050       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 13:38:55.468579       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 13:38:55.562082       1 shared_informer.go:318] Caches are synced for cronjob
	I0318 13:38:55.624056       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="264.297424ms"
	I0318 13:38:55.630481       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="6.128699ms"
	I0318 13:38:55.736696       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="20.074541ms"
	I0318 13:38:55.736892       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-76f75df574" duration="51.062µs"
	
	
	==> kube-proxy [50c78bf4575f4668417cbee1a5a02fafe056e10d8ad8a33b4cb5dabb4b60c7d6] <==
	I0318 13:38:41.640373       1 server_others.go:72] "Using iptables proxy"
	I0318 13:38:43.062344       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.167"]
	I0318 13:38:43.177119       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0318 13:38:43.177233       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 13:38:43.177325       1 server_others.go:168] "Using iptables Proxier"
	I0318 13:38:43.189649       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 13:38:43.190068       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0318 13:38:43.190131       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:38:43.192353       1 config.go:188] "Starting service config controller"
	I0318 13:38:43.192431       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 13:38:43.192632       1 config.go:97] "Starting endpoint slice config controller"
	I0318 13:38:43.192673       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 13:38:43.193241       1 config.go:315] "Starting node config controller"
	I0318 13:38:43.196219       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 13:38:43.292993       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 13:38:43.293073       1 shared_informer.go:318] Caches are synced for service config
	I0318 13:38:43.296711       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [9ca53dbc33da78c8a5a1ffb9dfb052ace358fae29acd0caa619072e1dd7d6c7b] <==
	I0318 13:38:18.959135       1 server_others.go:72] "Using iptables proxy"
	I0318 13:38:19.012397       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.167"]
	I0318 13:38:19.089457       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0318 13:38:19.089533       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 13:38:19.089554       1 server_others.go:168] "Using iptables Proxier"
	I0318 13:38:19.094152       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 13:38:19.094491       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0318 13:38:19.094543       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:38:19.132877       1 config.go:188] "Starting service config controller"
	I0318 13:38:19.133891       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 13:38:19.134082       1 config.go:97] "Starting endpoint slice config controller"
	I0318 13:38:19.134194       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 13:38:19.146607       1 config.go:315] "Starting node config controller"
	I0318 13:38:19.148906       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 13:38:19.234721       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 13:38:19.234872       1 shared_informer.go:318] Caches are synced for service config
	I0318 13:38:19.263825       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [7b7a94ac13e90a60fac6d854bcfab338c54fcf51c61cf9fb5c00c6c9a6b4667c] <==
	I0318 13:38:41.785340       1 serving.go:380] Generated self-signed cert in-memory
	W0318 13:38:42.947198       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0318 13:38:42.947372       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0318 13:38:42.947405       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0318 13:38:42.947509       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0318 13:38:43.026306       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0318 13:38:43.026929       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:38:43.050305       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0318 13:38:43.050556       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 13:38:43.055996       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0318 13:38:43.059963       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 13:38:43.162019       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 13:38:55.656119       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0318 13:38:55.661388       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0318 13:38:55.665940       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0318 13:38:55.668076       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [a8e1b3b0b3b47c104def2ad7a4aeab2333a8eb720010b275c1543d8da575c3fc] <==
	I0318 13:39:09.764452       1 serving.go:380] Generated self-signed cert in-memory
	W0318 13:39:12.706825       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0318 13:39:12.706977       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0318 13:39:12.707023       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0318 13:39:12.707056       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0318 13:39:12.822230       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0318 13:39:12.822280       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:39:12.826092       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0318 13:39:12.826221       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0318 13:39:12.826231       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 13:39:12.826244       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 13:39:12.928870       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 18 13:39:08 kubernetes-upgrade-599578 kubelet[3425]: E0318 13:39:08.889145    3425 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.167:8443: connect: connection refused" node="kubernetes-upgrade-599578"
	Mar 18 13:39:08 kubernetes-upgrade-599578 kubelet[3425]: W0318 13:39:08.977737    3425 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-599578&limit=500&resourceVersion=0": dial tcp 192.168.39.167:8443: connect: connection refused
	Mar 18 13:39:08 kubernetes-upgrade-599578 kubelet[3425]: E0318 13:39:08.977951    3425 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)kubernetes-upgrade-599578&limit=500&resourceVersion=0": dial tcp 192.168.39.167:8443: connect: connection refused
	Mar 18 13:39:09 kubernetes-upgrade-599578 kubelet[3425]: I0318 13:39:09.691703    3425 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-599578"
	Mar 18 13:39:12 kubernetes-upgrade-599578 kubelet[3425]: I0318 13:39:12.856678    3425 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-599578"
	Mar 18 13:39:12 kubernetes-upgrade-599578 kubelet[3425]: I0318 13:39:12.856871    3425 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-599578"
	Mar 18 13:39:12 kubernetes-upgrade-599578 kubelet[3425]: I0318 13:39:12.863689    3425 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 18 13:39:12 kubernetes-upgrade-599578 kubelet[3425]: I0318 13:39:12.867108    3425 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 18 13:39:13 kubernetes-upgrade-599578 kubelet[3425]: I0318 13:39:13.157889    3425 apiserver.go:52] "Watching apiserver"
	Mar 18 13:39:13 kubernetes-upgrade-599578 kubelet[3425]: I0318 13:39:13.162202    3425 topology_manager.go:215] "Topology Admit Handler" podUID="e36c9d35-026f-4a88-9287-13d6e73dd79a" podNamespace="kube-system" podName="storage-provisioner"
	Mar 18 13:39:13 kubernetes-upgrade-599578 kubelet[3425]: I0318 13:39:13.162321    3425 topology_manager.go:215] "Topology Admit Handler" podUID="f42ec031-02b9-4c21-8147-b702445ffd7f" podNamespace="kube-system" podName="coredns-76f75df574-rhkvl"
	Mar 18 13:39:13 kubernetes-upgrade-599578 kubelet[3425]: I0318 13:39:13.162366    3425 topology_manager.go:215] "Topology Admit Handler" podUID="5ec5a675-d4f6-4a56-826a-e5879af02113" podNamespace="kube-system" podName="coredns-76f75df574-xrl4t"
	Mar 18 13:39:13 kubernetes-upgrade-599578 kubelet[3425]: I0318 13:39:13.162419    3425 topology_manager.go:215] "Topology Admit Handler" podUID="ba0363dd-8f97-4dd6-a0fa-0cd0a825c6c9" podNamespace="kube-system" podName="kube-proxy-k8fcw"
	Mar 18 13:39:13 kubernetes-upgrade-599578 kubelet[3425]: I0318 13:39:13.177448    3425 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Mar 18 13:39:13 kubernetes-upgrade-599578 kubelet[3425]: I0318 13:39:13.238589    3425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ba0363dd-8f97-4dd6-a0fa-0cd0a825c6c9-xtables-lock\") pod \"kube-proxy-k8fcw\" (UID: \"ba0363dd-8f97-4dd6-a0fa-0cd0a825c6c9\") " pod="kube-system/kube-proxy-k8fcw"
	Mar 18 13:39:13 kubernetes-upgrade-599578 kubelet[3425]: I0318 13:39:13.239047    3425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e36c9d35-026f-4a88-9287-13d6e73dd79a-tmp\") pod \"storage-provisioner\" (UID: \"e36c9d35-026f-4a88-9287-13d6e73dd79a\") " pod="kube-system/storage-provisioner"
	Mar 18 13:39:13 kubernetes-upgrade-599578 kubelet[3425]: I0318 13:39:13.239182    3425 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ba0363dd-8f97-4dd6-a0fa-0cd0a825c6c9-lib-modules\") pod \"kube-proxy-k8fcw\" (UID: \"ba0363dd-8f97-4dd6-a0fa-0cd0a825c6c9\") " pod="kube-system/kube-proxy-k8fcw"
	Mar 18 13:39:13 kubernetes-upgrade-599578 kubelet[3425]: I0318 13:39:13.464180    3425 scope.go:117] "RemoveContainer" containerID="fad6e9d2f53ac3020ce0a4397bc7d7cea6ff1d3290e36afeeeb0adcdfc8d1d7f"
	Mar 18 13:39:13 kubernetes-upgrade-599578 kubelet[3425]: I0318 13:39:13.464522    3425 scope.go:117] "RemoveContainer" containerID="433673b6385caffeb1e365357ea12204d2eb2ca31ebde5d1289784233cf87106"
	Mar 18 13:39:13 kubernetes-upgrade-599578 kubelet[3425]: I0318 13:39:13.464997    3425 scope.go:117] "RemoveContainer" containerID="96da50e7cc7db82a5110c14eccc2d482cee3219f75d3ccc87c95df83b5ee06cc"
	Mar 18 13:39:14 kubernetes-upgrade-599578 kubelet[3425]: I0318 13:39:14.467655    3425 scope.go:117] "RemoveContainer" containerID="96da50e7cc7db82a5110c14eccc2d482cee3219f75d3ccc87c95df83b5ee06cc"
	Mar 18 13:39:14 kubernetes-upgrade-599578 kubelet[3425]: I0318 13:39:14.468045    3425 scope.go:117] "RemoveContainer" containerID="b1acd8017fdaec301bf8f69e0fb68bb34115ea51c0bae7b51abb0f36ab372207"
	Mar 18 13:39:14 kubernetes-upgrade-599578 kubelet[3425]: E0318 13:39:14.468205    3425 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(e36c9d35-026f-4a88-9287-13d6e73dd79a)\"" pod="kube-system/storage-provisioner" podUID="e36c9d35-026f-4a88-9287-13d6e73dd79a"
	Mar 18 13:39:18 kubernetes-upgrade-599578 kubelet[3425]: I0318 13:39:18.863036    3425 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Mar 18 13:39:19 kubernetes-upgrade-599578 kubelet[3425]: I0318 13:39:19.843595    3425 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [b1acd8017fdaec301bf8f69e0fb68bb34115ea51c0bae7b51abb0f36ab372207] <==
	I0318 13:39:13.743351       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0318 13:39:13.746551       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
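That fatal line is the provisioner's start-up probe of the in-cluster apiserver address (the 10.96.0.1:443 service VIP), which is refused while the control plane is still restarting during the upgrade. A minimal sketch of such a probe, assuming a client-go discovery client (the provisioner's actual source is not part of this report):

package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/rest"
)

func main() {
	// In-cluster config points at the kubernetes service VIP, 10.96.0.1:443 here.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// GET /version; while the upgraded apiserver is not yet serving, this is the
	// call that comes back with "connection refused".
	v, err := dc.ServerVersion()
	if err != nil {
		panic(fmt.Errorf("error getting server version: %w", err))
	}
	fmt.Println("server version:", v.GitVersion)
}

The kubelet log above shows the same pod being backed off with CrashLoopBackOff until the apiserver is reachable again, which matches this failure mode.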
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 13:39:18.682857 1154594 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18429-1106816/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-599578 -n kubernetes-upgrade-599578
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-599578 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-599578" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-599578
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-599578: (1.176604511s)
--- FAIL: TestKubernetesUpgrade (432.09s)
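One secondary issue in the stderr block above deserves a note: "failed to read file .../lastStart.txt: bufio.Scanner: token too long" is what bufio.Scanner reports when a single line exceeds its default 64 KiB token limit, so the harness could not echo the previous start log. A minimal sketch, not the minikube source, of reading such a file with a larger limit:

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// Path shortened for the sketch; the report references the full lastStart.txt
	// path under the minikube integration home.
	f, err := os.Open("lastStart.txt")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// The default cap is bufio.MaxScanTokenSize (64 KiB); raise it so very long
	// log lines do not trip bufio.ErrTooLong.
	sc.Buffer(make([]byte, 0, 64*1024), 16*1024*1024)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	if err := sc.Err(); err != nil {
		panic(err) // with the default buffer this is where "token too long" surfaces
	}
}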

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (168.65s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-760389 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-760389 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (2m40.183194301s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-760389] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18429
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18429-1106816/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18429-1106816/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-760389" primary control-plane node in "pause-760389" cluster
	* Updating the running kvm2 "pause-760389" VM ...
	* Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-760389" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:37:49.622738 1153618 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:37:49.622886 1153618 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:37:49.622901 1153618 out.go:304] Setting ErrFile to fd 2...
	I0318 13:37:49.622927 1153618 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:37:49.623594 1153618 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
	I0318 13:37:49.624261 1153618 out.go:298] Setting JSON to false
	I0318 13:37:49.625389 1153618 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":19217,"bootTime":1710749853,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 13:37:49.625468 1153618 start.go:139] virtualization: kvm guest
	I0318 13:37:49.628074 1153618 out.go:177] * [pause-760389] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 13:37:49.629753 1153618 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 13:37:49.629779 1153618 notify.go:220] Checking for updates...
	I0318 13:37:49.631335 1153618 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:37:49.632979 1153618 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 13:37:49.634462 1153618 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18429-1106816/.minikube
	I0318 13:37:49.636002 1153618 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 13:37:49.637521 1153618 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:37:49.639636 1153618 config.go:182] Loaded profile config "pause-760389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:37:49.640238 1153618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:37:49.640342 1153618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:37:49.656954 1153618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46607
	I0318 13:37:49.657390 1153618 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:37:49.658064 1153618 main.go:141] libmachine: Using API Version  1
	I0318 13:37:49.658089 1153618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:37:49.658546 1153618 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:37:49.658776 1153618 main.go:141] libmachine: (pause-760389) Calling .DriverName
	I0318 13:37:49.659077 1153618 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:37:49.659405 1153618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:37:49.659442 1153618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:37:49.675033 1153618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35891
	I0318 13:37:49.675490 1153618 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:37:49.675989 1153618 main.go:141] libmachine: Using API Version  1
	I0318 13:37:49.676011 1153618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:37:49.676424 1153618 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:37:49.676625 1153618 main.go:141] libmachine: (pause-760389) Calling .DriverName
	I0318 13:37:50.331196 1153618 out.go:177] * Using the kvm2 driver based on existing profile
	I0318 13:37:50.332462 1153618 start.go:297] selected driver: kvm2
	I0318 13:37:50.332480 1153618 start.go:901] validating driver "kvm2" against &{Name:pause-760389 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.28.4 ClusterName:pause-760389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.203 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-dev
ice-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:37:50.332650 1153618 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:37:50.333146 1153618 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:37:50.333237 1153618 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18429-1106816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 13:37:50.354590 1153618 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 13:37:50.355850 1153618 cni.go:84] Creating CNI manager for ""
	I0318 13:37:50.355886 1153618 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:37:50.355976 1153618 start.go:340] cluster config:
	{Name:pause-760389 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:pause-760389 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.203 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:
false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:37:50.356190 1153618 iso.go:125] acquiring lock: {Name:mke5f9989ad60de6f54f25c411af7da9f3932a4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:37:50.358355 1153618 out.go:177] * Starting "pause-760389" primary control-plane node in "pause-760389" cluster
	I0318 13:37:50.359561 1153618 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 13:37:50.359607 1153618 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0318 13:37:50.359616 1153618 cache.go:56] Caching tarball of preloaded images
	I0318 13:37:50.359711 1153618 preload.go:173] Found /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 13:37:50.359726 1153618 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0318 13:37:50.359885 1153618 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/pause-760389/config.json ...
	I0318 13:37:50.360126 1153618 start.go:360] acquireMachinesLock for pause-760389: {Name:mk0b1a2e71faf079d0c16c4e1393bdff17be3dfd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:38:19.442191 1153618 start.go:364] duration metric: took 29.082018495s to acquireMachinesLock for "pause-760389"
	I0318 13:38:19.442248 1153618 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:38:19.442257 1153618 fix.go:54] fixHost starting: 
	I0318 13:38:19.442724 1153618 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:38:19.442784 1153618 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:38:19.464456 1153618 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35175
	I0318 13:38:19.465012 1153618 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:38:19.465534 1153618 main.go:141] libmachine: Using API Version  1
	I0318 13:38:19.465565 1153618 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:38:19.465881 1153618 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:38:19.466077 1153618 main.go:141] libmachine: (pause-760389) Calling .DriverName
	I0318 13:38:19.466221 1153618 main.go:141] libmachine: (pause-760389) Calling .GetState
	I0318 13:38:19.467837 1153618 fix.go:112] recreateIfNeeded on pause-760389: state=Running err=<nil>
	W0318 13:38:19.467855 1153618 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:38:19.470262 1153618 out.go:177] * Updating the running kvm2 "pause-760389" VM ...
	I0318 13:38:19.472082 1153618 machine.go:94] provisionDockerMachine start ...
	I0318 13:38:19.472111 1153618 main.go:141] libmachine: (pause-760389) Calling .DriverName
	I0318 13:38:19.472341 1153618 main.go:141] libmachine: (pause-760389) Calling .GetSSHHostname
	I0318 13:38:19.475587 1153618 main.go:141] libmachine: (pause-760389) DBG | domain pause-760389 has defined MAC address 52:54:00:1a:7b:34 in network mk-pause-760389
	I0318 13:38:19.476008 1153618 main.go:141] libmachine: (pause-760389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:7b:34", ip: ""} in network mk-pause-760389: {Iface:virbr2 ExpiryTime:2024-03-18 14:36:23 +0000 UTC Type:0 Mac:52:54:00:1a:7b:34 Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:pause-760389 Clientid:01:52:54:00:1a:7b:34}
	I0318 13:38:19.476042 1153618 main.go:141] libmachine: (pause-760389) DBG | domain pause-760389 has defined IP address 192.168.50.203 and MAC address 52:54:00:1a:7b:34 in network mk-pause-760389
	I0318 13:38:19.476232 1153618 main.go:141] libmachine: (pause-760389) Calling .GetSSHPort
	I0318 13:38:19.476447 1153618 main.go:141] libmachine: (pause-760389) Calling .GetSSHKeyPath
	I0318 13:38:19.476621 1153618 main.go:141] libmachine: (pause-760389) Calling .GetSSHKeyPath
	I0318 13:38:19.476814 1153618 main.go:141] libmachine: (pause-760389) Calling .GetSSHUsername
	I0318 13:38:19.477019 1153618 main.go:141] libmachine: Using SSH client type: native
	I0318 13:38:19.477210 1153618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.203 22 <nil> <nil>}
	I0318 13:38:19.477221 1153618 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 13:38:19.599323 1153618 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-760389
	
	I0318 13:38:19.599360 1153618 main.go:141] libmachine: (pause-760389) Calling .GetMachineName
	I0318 13:38:19.599674 1153618 buildroot.go:166] provisioning hostname "pause-760389"
	I0318 13:38:19.599737 1153618 main.go:141] libmachine: (pause-760389) Calling .GetMachineName
	I0318 13:38:19.600001 1153618 main.go:141] libmachine: (pause-760389) Calling .GetSSHHostname
	I0318 13:38:19.603675 1153618 main.go:141] libmachine: (pause-760389) DBG | domain pause-760389 has defined MAC address 52:54:00:1a:7b:34 in network mk-pause-760389
	I0318 13:38:19.604168 1153618 main.go:141] libmachine: (pause-760389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:7b:34", ip: ""} in network mk-pause-760389: {Iface:virbr2 ExpiryTime:2024-03-18 14:36:23 +0000 UTC Type:0 Mac:52:54:00:1a:7b:34 Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:pause-760389 Clientid:01:52:54:00:1a:7b:34}
	I0318 13:38:19.604205 1153618 main.go:141] libmachine: (pause-760389) DBG | domain pause-760389 has defined IP address 192.168.50.203 and MAC address 52:54:00:1a:7b:34 in network mk-pause-760389
	I0318 13:38:19.604485 1153618 main.go:141] libmachine: (pause-760389) Calling .GetSSHPort
	I0318 13:38:19.604711 1153618 main.go:141] libmachine: (pause-760389) Calling .GetSSHKeyPath
	I0318 13:38:19.604928 1153618 main.go:141] libmachine: (pause-760389) Calling .GetSSHKeyPath
	I0318 13:38:19.605195 1153618 main.go:141] libmachine: (pause-760389) Calling .GetSSHUsername
	I0318 13:38:19.605420 1153618 main.go:141] libmachine: Using SSH client type: native
	I0318 13:38:19.605622 1153618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.203 22 <nil> <nil>}
	I0318 13:38:19.605639 1153618 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-760389 && echo "pause-760389" | sudo tee /etc/hostname
	I0318 13:38:19.748635 1153618 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-760389
	
	I0318 13:38:19.748687 1153618 main.go:141] libmachine: (pause-760389) Calling .GetSSHHostname
	I0318 13:38:19.752595 1153618 main.go:141] libmachine: (pause-760389) DBG | domain pause-760389 has defined MAC address 52:54:00:1a:7b:34 in network mk-pause-760389
	I0318 13:38:19.753130 1153618 main.go:141] libmachine: (pause-760389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:7b:34", ip: ""} in network mk-pause-760389: {Iface:virbr2 ExpiryTime:2024-03-18 14:36:23 +0000 UTC Type:0 Mac:52:54:00:1a:7b:34 Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:pause-760389 Clientid:01:52:54:00:1a:7b:34}
	I0318 13:38:19.753159 1153618 main.go:141] libmachine: (pause-760389) DBG | domain pause-760389 has defined IP address 192.168.50.203 and MAC address 52:54:00:1a:7b:34 in network mk-pause-760389
	I0318 13:38:19.753395 1153618 main.go:141] libmachine: (pause-760389) Calling .GetSSHPort
	I0318 13:38:19.753644 1153618 main.go:141] libmachine: (pause-760389) Calling .GetSSHKeyPath
	I0318 13:38:19.753863 1153618 main.go:141] libmachine: (pause-760389) Calling .GetSSHKeyPath
	I0318 13:38:19.754053 1153618 main.go:141] libmachine: (pause-760389) Calling .GetSSHUsername
	I0318 13:38:19.754226 1153618 main.go:141] libmachine: Using SSH client type: native
	I0318 13:38:19.754520 1153618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.203 22 <nil> <nil>}
	I0318 13:38:19.754547 1153618 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-760389' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-760389/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-760389' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 13:38:19.870651 1153618 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:38:19.870693 1153618 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18429-1106816/.minikube CaCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18429-1106816/.minikube}
	I0318 13:38:19.870748 1153618 buildroot.go:174] setting up certificates
	I0318 13:38:19.870765 1153618 provision.go:84] configureAuth start
	I0318 13:38:19.870782 1153618 main.go:141] libmachine: (pause-760389) Calling .GetMachineName
	I0318 13:38:19.871122 1153618 main.go:141] libmachine: (pause-760389) Calling .GetIP
	I0318 13:38:19.874328 1153618 main.go:141] libmachine: (pause-760389) DBG | domain pause-760389 has defined MAC address 52:54:00:1a:7b:34 in network mk-pause-760389
	I0318 13:38:19.874790 1153618 main.go:141] libmachine: (pause-760389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:7b:34", ip: ""} in network mk-pause-760389: {Iface:virbr2 ExpiryTime:2024-03-18 14:36:23 +0000 UTC Type:0 Mac:52:54:00:1a:7b:34 Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:pause-760389 Clientid:01:52:54:00:1a:7b:34}
	I0318 13:38:19.874822 1153618 main.go:141] libmachine: (pause-760389) DBG | domain pause-760389 has defined IP address 192.168.50.203 and MAC address 52:54:00:1a:7b:34 in network mk-pause-760389
	I0318 13:38:19.875019 1153618 main.go:141] libmachine: (pause-760389) Calling .GetSSHHostname
	I0318 13:38:19.877988 1153618 main.go:141] libmachine: (pause-760389) DBG | domain pause-760389 has defined MAC address 52:54:00:1a:7b:34 in network mk-pause-760389
	I0318 13:38:19.878481 1153618 main.go:141] libmachine: (pause-760389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:7b:34", ip: ""} in network mk-pause-760389: {Iface:virbr2 ExpiryTime:2024-03-18 14:36:23 +0000 UTC Type:0 Mac:52:54:00:1a:7b:34 Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:pause-760389 Clientid:01:52:54:00:1a:7b:34}
	I0318 13:38:19.878516 1153618 main.go:141] libmachine: (pause-760389) DBG | domain pause-760389 has defined IP address 192.168.50.203 and MAC address 52:54:00:1a:7b:34 in network mk-pause-760389
	I0318 13:38:19.878660 1153618 provision.go:143] copyHostCerts
	I0318 13:38:19.878738 1153618 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem, removing ...
	I0318 13:38:19.878761 1153618 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 13:38:19.878825 1153618 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem (1679 bytes)
	I0318 13:38:19.878962 1153618 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem, removing ...
	I0318 13:38:19.878974 1153618 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 13:38:19.878998 1153618 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem (1078 bytes)
	I0318 13:38:19.879077 1153618 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem, removing ...
	I0318 13:38:19.879086 1153618 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 13:38:19.879103 1153618 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem (1123 bytes)
	I0318 13:38:19.879209 1153618 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem org=jenkins.pause-760389 san=[127.0.0.1 192.168.50.203 localhost minikube pause-760389]
	I0318 13:38:20.148972 1153618 provision.go:177] copyRemoteCerts
	I0318 13:38:20.149030 1153618 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 13:38:20.149056 1153618 main.go:141] libmachine: (pause-760389) Calling .GetSSHHostname
	I0318 13:38:20.152435 1153618 main.go:141] libmachine: (pause-760389) DBG | domain pause-760389 has defined MAC address 52:54:00:1a:7b:34 in network mk-pause-760389
	I0318 13:38:20.152956 1153618 main.go:141] libmachine: (pause-760389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:7b:34", ip: ""} in network mk-pause-760389: {Iface:virbr2 ExpiryTime:2024-03-18 14:36:23 +0000 UTC Type:0 Mac:52:54:00:1a:7b:34 Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:pause-760389 Clientid:01:52:54:00:1a:7b:34}
	I0318 13:38:20.152981 1153618 main.go:141] libmachine: (pause-760389) DBG | domain pause-760389 has defined IP address 192.168.50.203 and MAC address 52:54:00:1a:7b:34 in network mk-pause-760389
	I0318 13:38:20.153214 1153618 main.go:141] libmachine: (pause-760389) Calling .GetSSHPort
	I0318 13:38:20.153442 1153618 main.go:141] libmachine: (pause-760389) Calling .GetSSHKeyPath
	I0318 13:38:20.153669 1153618 main.go:141] libmachine: (pause-760389) Calling .GetSSHUsername
	I0318 13:38:20.153853 1153618 sshutil.go:53] new ssh client: &{IP:192.168.50.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/pause-760389/id_rsa Username:docker}
	I0318 13:38:20.240986 1153618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 13:38:20.274301 1153618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0318 13:38:20.311588 1153618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 13:38:20.359526 1153618 provision.go:87] duration metric: took 488.741451ms to configureAuth
	I0318 13:38:20.359565 1153618 buildroot.go:189] setting minikube options for container-runtime
	I0318 13:38:20.359852 1153618 config.go:182] Loaded profile config "pause-760389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:38:20.359932 1153618 main.go:141] libmachine: (pause-760389) Calling .GetSSHHostname
	I0318 13:38:20.362814 1153618 main.go:141] libmachine: (pause-760389) DBG | domain pause-760389 has defined MAC address 52:54:00:1a:7b:34 in network mk-pause-760389
	I0318 13:38:20.363267 1153618 main.go:141] libmachine: (pause-760389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:7b:34", ip: ""} in network mk-pause-760389: {Iface:virbr2 ExpiryTime:2024-03-18 14:36:23 +0000 UTC Type:0 Mac:52:54:00:1a:7b:34 Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:pause-760389 Clientid:01:52:54:00:1a:7b:34}
	I0318 13:38:20.363295 1153618 main.go:141] libmachine: (pause-760389) DBG | domain pause-760389 has defined IP address 192.168.50.203 and MAC address 52:54:00:1a:7b:34 in network mk-pause-760389
	I0318 13:38:20.363509 1153618 main.go:141] libmachine: (pause-760389) Calling .GetSSHPort
	I0318 13:38:20.363757 1153618 main.go:141] libmachine: (pause-760389) Calling .GetSSHKeyPath
	I0318 13:38:20.363940 1153618 main.go:141] libmachine: (pause-760389) Calling .GetSSHKeyPath
	I0318 13:38:20.364146 1153618 main.go:141] libmachine: (pause-760389) Calling .GetSSHUsername
	I0318 13:38:20.364371 1153618 main.go:141] libmachine: Using SSH client type: native
	I0318 13:38:20.364592 1153618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.203 22 <nil> <nil>}
	I0318 13:38:20.364618 1153618 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 13:38:28.142745 1153618 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 13:38:28.142783 1153618 machine.go:97] duration metric: took 8.670683125s to provisionDockerMachine
	I0318 13:38:28.142798 1153618 start.go:293] postStartSetup for "pause-760389" (driver="kvm2")
	I0318 13:38:28.142812 1153618 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 13:38:28.142835 1153618 main.go:141] libmachine: (pause-760389) Calling .DriverName
	I0318 13:38:28.143194 1153618 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 13:38:28.143228 1153618 main.go:141] libmachine: (pause-760389) Calling .GetSSHHostname
	I0318 13:38:28.146497 1153618 main.go:141] libmachine: (pause-760389) DBG | domain pause-760389 has defined MAC address 52:54:00:1a:7b:34 in network mk-pause-760389
	I0318 13:38:28.147060 1153618 main.go:141] libmachine: (pause-760389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:7b:34", ip: ""} in network mk-pause-760389: {Iface:virbr2 ExpiryTime:2024-03-18 14:36:23 +0000 UTC Type:0 Mac:52:54:00:1a:7b:34 Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:pause-760389 Clientid:01:52:54:00:1a:7b:34}
	I0318 13:38:28.147092 1153618 main.go:141] libmachine: (pause-760389) DBG | domain pause-760389 has defined IP address 192.168.50.203 and MAC address 52:54:00:1a:7b:34 in network mk-pause-760389
	I0318 13:38:28.147328 1153618 main.go:141] libmachine: (pause-760389) Calling .GetSSHPort
	I0318 13:38:28.147556 1153618 main.go:141] libmachine: (pause-760389) Calling .GetSSHKeyPath
	I0318 13:38:28.147761 1153618 main.go:141] libmachine: (pause-760389) Calling .GetSSHUsername
	I0318 13:38:28.147973 1153618 sshutil.go:53] new ssh client: &{IP:192.168.50.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/pause-760389/id_rsa Username:docker}
	I0318 13:38:28.240403 1153618 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 13:38:28.246034 1153618 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 13:38:28.246071 1153618 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/addons for local assets ...
	I0318 13:38:28.246148 1153618 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/files for local assets ...
	I0318 13:38:28.246265 1153618 filesync.go:149] local asset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> 11141362.pem in /etc/ssl/certs
	I0318 13:38:28.246415 1153618 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 13:38:28.259765 1153618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:38:28.299169 1153618 start.go:296] duration metric: took 156.350665ms for postStartSetup
	I0318 13:38:28.299247 1153618 fix.go:56] duration metric: took 8.856988388s for fixHost
	I0318 13:38:28.299279 1153618 main.go:141] libmachine: (pause-760389) Calling .GetSSHHostname
	I0318 13:38:28.302271 1153618 main.go:141] libmachine: (pause-760389) DBG | domain pause-760389 has defined MAC address 52:54:00:1a:7b:34 in network mk-pause-760389
	I0318 13:38:28.302588 1153618 main.go:141] libmachine: (pause-760389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:7b:34", ip: ""} in network mk-pause-760389: {Iface:virbr2 ExpiryTime:2024-03-18 14:36:23 +0000 UTC Type:0 Mac:52:54:00:1a:7b:34 Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:pause-760389 Clientid:01:52:54:00:1a:7b:34}
	I0318 13:38:28.302622 1153618 main.go:141] libmachine: (pause-760389) DBG | domain pause-760389 has defined IP address 192.168.50.203 and MAC address 52:54:00:1a:7b:34 in network mk-pause-760389
	I0318 13:38:28.302811 1153618 main.go:141] libmachine: (pause-760389) Calling .GetSSHPort
	I0318 13:38:28.303078 1153618 main.go:141] libmachine: (pause-760389) Calling .GetSSHKeyPath
	I0318 13:38:28.303287 1153618 main.go:141] libmachine: (pause-760389) Calling .GetSSHKeyPath
	I0318 13:38:28.303462 1153618 main.go:141] libmachine: (pause-760389) Calling .GetSSHUsername
	I0318 13:38:28.303657 1153618 main.go:141] libmachine: Using SSH client type: native
	I0318 13:38:28.303889 1153618 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.203 22 <nil> <nil>}
	I0318 13:38:28.303905 1153618 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 13:38:28.430205 1153618 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710769108.423095605
	
	I0318 13:38:28.430232 1153618 fix.go:216] guest clock: 1710769108.423095605
	I0318 13:38:28.430243 1153618 fix.go:229] Guest: 2024-03-18 13:38:28.423095605 +0000 UTC Remote: 2024-03-18 13:38:28.299255045 +0000 UTC m=+38.730656633 (delta=123.84056ms)
	I0318 13:38:28.430270 1153618 fix.go:200] guest clock delta is within tolerance: 123.84056ms
	I0318 13:38:28.430278 1153618 start.go:83] releasing machines lock for "pause-760389", held for 8.98805588s
	I0318 13:38:28.430310 1153618 main.go:141] libmachine: (pause-760389) Calling .DriverName
	I0318 13:38:28.430630 1153618 main.go:141] libmachine: (pause-760389) Calling .GetIP
	I0318 13:38:28.433879 1153618 main.go:141] libmachine: (pause-760389) DBG | domain pause-760389 has defined MAC address 52:54:00:1a:7b:34 in network mk-pause-760389
	I0318 13:38:28.434279 1153618 main.go:141] libmachine: (pause-760389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:7b:34", ip: ""} in network mk-pause-760389: {Iface:virbr2 ExpiryTime:2024-03-18 14:36:23 +0000 UTC Type:0 Mac:52:54:00:1a:7b:34 Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:pause-760389 Clientid:01:52:54:00:1a:7b:34}
	I0318 13:38:28.434323 1153618 main.go:141] libmachine: (pause-760389) DBG | domain pause-760389 has defined IP address 192.168.50.203 and MAC address 52:54:00:1a:7b:34 in network mk-pause-760389
	I0318 13:38:28.434625 1153618 main.go:141] libmachine: (pause-760389) Calling .DriverName
	I0318 13:38:28.435430 1153618 main.go:141] libmachine: (pause-760389) Calling .DriverName
	I0318 13:38:28.435646 1153618 main.go:141] libmachine: (pause-760389) Calling .DriverName
	I0318 13:38:28.435757 1153618 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 13:38:28.435820 1153618 main.go:141] libmachine: (pause-760389) Calling .GetSSHHostname
	I0318 13:38:28.435917 1153618 ssh_runner.go:195] Run: cat /version.json
	I0318 13:38:28.435947 1153618 main.go:141] libmachine: (pause-760389) Calling .GetSSHHostname
	I0318 13:38:28.439012 1153618 main.go:141] libmachine: (pause-760389) DBG | domain pause-760389 has defined MAC address 52:54:00:1a:7b:34 in network mk-pause-760389
	I0318 13:38:28.439187 1153618 main.go:141] libmachine: (pause-760389) DBG | domain pause-760389 has defined MAC address 52:54:00:1a:7b:34 in network mk-pause-760389
	I0318 13:38:28.439454 1153618 main.go:141] libmachine: (pause-760389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:7b:34", ip: ""} in network mk-pause-760389: {Iface:virbr2 ExpiryTime:2024-03-18 14:36:23 +0000 UTC Type:0 Mac:52:54:00:1a:7b:34 Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:pause-760389 Clientid:01:52:54:00:1a:7b:34}
	I0318 13:38:28.439484 1153618 main.go:141] libmachine: (pause-760389) DBG | domain pause-760389 has defined IP address 192.168.50.203 and MAC address 52:54:00:1a:7b:34 in network mk-pause-760389
	I0318 13:38:28.439542 1153618 main.go:141] libmachine: (pause-760389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:7b:34", ip: ""} in network mk-pause-760389: {Iface:virbr2 ExpiryTime:2024-03-18 14:36:23 +0000 UTC Type:0 Mac:52:54:00:1a:7b:34 Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:pause-760389 Clientid:01:52:54:00:1a:7b:34}
	I0318 13:38:28.439561 1153618 main.go:141] libmachine: (pause-760389) DBG | domain pause-760389 has defined IP address 192.168.50.203 and MAC address 52:54:00:1a:7b:34 in network mk-pause-760389
	I0318 13:38:28.439639 1153618 main.go:141] libmachine: (pause-760389) Calling .GetSSHPort
	I0318 13:38:28.439800 1153618 main.go:141] libmachine: (pause-760389) Calling .GetSSHPort
	I0318 13:38:28.439886 1153618 main.go:141] libmachine: (pause-760389) Calling .GetSSHKeyPath
	I0318 13:38:28.440097 1153618 main.go:141] libmachine: (pause-760389) Calling .GetSSHUsername
	I0318 13:38:28.440103 1153618 main.go:141] libmachine: (pause-760389) Calling .GetSSHKeyPath
	I0318 13:38:28.440278 1153618 sshutil.go:53] new ssh client: &{IP:192.168.50.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/pause-760389/id_rsa Username:docker}
	I0318 13:38:28.440383 1153618 main.go:141] libmachine: (pause-760389) Calling .GetSSHUsername
	I0318 13:38:28.440594 1153618 sshutil.go:53] new ssh client: &{IP:192.168.50.203 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/pause-760389/id_rsa Username:docker}
	I0318 13:38:28.526235 1153618 ssh_runner.go:195] Run: systemctl --version
	I0318 13:38:28.550797 1153618 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 13:38:28.718463 1153618 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 13:38:28.727767 1153618 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 13:38:28.727846 1153618 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 13:38:28.739498 1153618 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0318 13:38:28.739526 1153618 start.go:494] detecting cgroup driver to use...
	I0318 13:38:28.739596 1153618 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 13:38:28.815640 1153618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:38:28.861492 1153618 docker.go:217] disabling cri-docker service (if available) ...
	I0318 13:38:28.861581 1153618 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 13:38:29.012119 1153618 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 13:38:29.194727 1153618 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 13:38:29.653997 1153618 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 13:38:29.964214 1153618 docker.go:233] disabling docker service ...
	I0318 13:38:29.964290 1153618 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 13:38:30.036766 1153618 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 13:38:30.060197 1153618 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 13:38:30.264483 1153618 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 13:38:30.526748 1153618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 13:38:30.578693 1153618 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:38:30.620225 1153618 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 13:38:30.620303 1153618 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:38:30.635746 1153618 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 13:38:30.635833 1153618 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:38:30.654189 1153618 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:38:30.674178 1153618 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:38:30.702863 1153618 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 13:38:30.728045 1153618 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 13:38:30.742933 1153618 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 13:38:30.756201 1153618 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:38:31.007847 1153618 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 13:40:01.828035 1153618 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.82013329s)
	I0318 13:40:01.828086 1153618 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 13:40:01.828161 1153618 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 13:40:01.835278 1153618 start.go:562] Will wait 60s for crictl version
	I0318 13:40:01.835374 1153618 ssh_runner.go:195] Run: which crictl
	I0318 13:40:01.840360 1153618 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 13:40:01.897190 1153618 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 13:40:01.897287 1153618 ssh_runner.go:195] Run: crio --version
	I0318 13:40:01.937958 1153618 ssh_runner.go:195] Run: crio --version
	I0318 13:40:01.985785 1153618 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 13:40:01.987792 1153618 main.go:141] libmachine: (pause-760389) Calling .GetIP
	I0318 13:40:01.990719 1153618 main.go:141] libmachine: (pause-760389) DBG | domain pause-760389 has defined MAC address 52:54:00:1a:7b:34 in network mk-pause-760389
	I0318 13:40:01.991218 1153618 main.go:141] libmachine: (pause-760389) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1a:7b:34", ip: ""} in network mk-pause-760389: {Iface:virbr2 ExpiryTime:2024-03-18 14:36:23 +0000 UTC Type:0 Mac:52:54:00:1a:7b:34 Iaid: IPaddr:192.168.50.203 Prefix:24 Hostname:pause-760389 Clientid:01:52:54:00:1a:7b:34}
	I0318 13:40:01.991255 1153618 main.go:141] libmachine: (pause-760389) DBG | domain pause-760389 has defined IP address 192.168.50.203 and MAC address 52:54:00:1a:7b:34 in network mk-pause-760389
	I0318 13:40:01.991428 1153618 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0318 13:40:01.998024 1153618 kubeadm.go:877] updating cluster {Name:pause-760389 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4
ClusterName:pause-760389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.203 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:fals
e olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 13:40:01.998214 1153618 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 13:40:01.998282 1153618 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:40:02.057915 1153618 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 13:40:02.057938 1153618 crio.go:415] Images already preloaded, skipping extraction
	I0318 13:40:02.057988 1153618 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:40:02.099035 1153618 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 13:40:02.099060 1153618 cache_images.go:84] Images are preloaded, skipping loading
	I0318 13:40:02.099068 1153618 kubeadm.go:928] updating node { 192.168.50.203 8443 v1.28.4 crio true true} ...
	I0318 13:40:02.099183 1153618 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-760389 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.203
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:pause-760389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 13:40:02.099275 1153618 ssh_runner.go:195] Run: crio config
	I0318 13:40:02.153755 1153618 cni.go:84] Creating CNI manager for ""
	I0318 13:40:02.153776 1153618 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:40:02.153787 1153618 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 13:40:02.153811 1153618 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.203 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-760389 NodeName:pause-760389 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.203"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.203 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 13:40:02.153959 1153618 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.203
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-760389"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.203
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.203"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 13:40:02.154037 1153618 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 13:40:02.169482 1153618 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 13:40:02.169551 1153618 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 13:40:02.181738 1153618 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0318 13:40:02.203104 1153618 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 13:40:02.224606 1153618 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2156 bytes)
	I0318 13:40:02.245369 1153618 ssh_runner.go:195] Run: grep 192.168.50.203	control-plane.minikube.internal$ /etc/hosts
	I0318 13:40:02.250721 1153618 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:40:02.407841 1153618 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:40:02.425955 1153618 certs.go:68] Setting up /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/pause-760389 for IP: 192.168.50.203
	I0318 13:40:02.425984 1153618 certs.go:194] generating shared ca certs ...
	I0318 13:40:02.426007 1153618 certs.go:226] acquiring lock for ca certs: {Name:mka24dbce78b8549c3bcfe7842863cce2dcda207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:40:02.426194 1153618 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key
	I0318 13:40:02.426252 1153618 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key
	I0318 13:40:02.426267 1153618 certs.go:256] generating profile certs ...
	I0318 13:40:02.426363 1153618 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/pause-760389/client.key
	I0318 13:40:02.426463 1153618 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/pause-760389/apiserver.key.454b6d57
	I0318 13:40:02.426525 1153618 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/pause-760389/proxy-client.key
	I0318 13:40:02.426660 1153618 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem (1338 bytes)
	W0318 13:40:02.426725 1153618 certs.go:480] ignoring /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136_empty.pem, impossibly tiny 0 bytes
	I0318 13:40:02.426740 1153618 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 13:40:02.426775 1153618 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem (1078 bytes)
	I0318 13:40:02.426808 1153618 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem (1123 bytes)
	I0318 13:40:02.426844 1153618 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem (1679 bytes)
	I0318 13:40:02.426905 1153618 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:40:02.427589 1153618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 13:40:02.459083 1153618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 13:40:02.491843 1153618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 13:40:02.529232 1153618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 13:40:02.558435 1153618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/pause-760389/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0318 13:40:02.591004 1153618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/pause-760389/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 13:40:02.625541 1153618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/pause-760389/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 13:40:02.653910 1153618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/pause-760389/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 13:40:02.695118 1153618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /usr/share/ca-certificates/11141362.pem (1708 bytes)
	I0318 13:40:02.728085 1153618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 13:40:02.757101 1153618 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem --> /usr/share/ca-certificates/1114136.pem (1338 bytes)
	I0318 13:40:02.790124 1153618 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 13:40:02.809947 1153618 ssh_runner.go:195] Run: openssl version
	I0318 13:40:02.817209 1153618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11141362.pem && ln -fs /usr/share/ca-certificates/11141362.pem /etc/ssl/certs/11141362.pem"
	I0318 13:40:02.831537 1153618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11141362.pem
	I0318 13:40:02.837077 1153618 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:26 /usr/share/ca-certificates/11141362.pem
	I0318 13:40:02.837153 1153618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11141362.pem
	I0318 13:40:02.843922 1153618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11141362.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 13:40:02.856115 1153618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 13:40:02.870302 1153618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:40:02.875778 1153618 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:40:02.875850 1153618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:40:02.882874 1153618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 13:40:02.899075 1153618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1114136.pem && ln -fs /usr/share/ca-certificates/1114136.pem /etc/ssl/certs/1114136.pem"
	I0318 13:40:02.914795 1153618 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1114136.pem
	I0318 13:40:02.920263 1153618 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:26 /usr/share/ca-certificates/1114136.pem
	I0318 13:40:02.920339 1153618 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1114136.pem
	I0318 13:40:02.927549 1153618 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1114136.pem /etc/ssl/certs/51391683.0"
	I0318 13:40:02.943253 1153618 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:40:02.948939 1153618 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 13:40:02.957904 1153618 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 13:40:02.966379 1153618 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 13:40:02.973648 1153618 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 13:40:02.980345 1153618 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 13:40:02.986732 1153618 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 13:40:02.993071 1153618 kubeadm.go:391] StartCluster: {Name:pause-760389 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:pause-760389 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.203 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:40:02.993181 1153618 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 13:40:02.993231 1153618 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:40:03.036424 1153618 cri.go:89] found id: "65b9dc6324d7159773436a0616d877e6adac59f647e108f2ba4248c7807af7ef"
	I0318 13:40:03.036454 1153618 cri.go:89] found id: "039afcd9c06d2426fc0a6a5aa0f478ea38b73cee18f2938ed963a9156c3f071a"
	I0318 13:40:03.036460 1153618 cri.go:89] found id: "6ceb76089cf49a3751e5ef3fa1430c9c0a22d1e5f51f63b2ff706c8ef7963629"
	I0318 13:40:03.036466 1153618 cri.go:89] found id: "c2e15f7167231f05b0facf70970fc16630b20767e97cd2c4e95ce3855eec2904"
	I0318 13:40:03.036470 1153618 cri.go:89] found id: "02095178fc24b970bb1be66ba74a0f5bb027d2f88fcd2d091dd6628a075850ab"
	I0318 13:40:03.036475 1153618 cri.go:89] found id: "0c4d5fb2691d7b313d5ded78fed5ae997dae1078685f3104974ab9b91a27c351"
	I0318 13:40:03.036479 1153618 cri.go:89] found id: "6c0843c62c3cd2e888f86e68904a413cd07c505dedf595a41f10ff68fec89cb5"
	I0318 13:40:03.036482 1153618 cri.go:89] found id: "f36e8110823805ddb71952396ef56fb131eec48da696e248a3a8899dbc28f18b"
	I0318 13:40:03.036488 1153618 cri.go:89] found id: "f176058e4f7e369a3a80318e562ab51fe28be28271843711c0b58a1418413fdc"
	I0318 13:40:03.036498 1153618 cri.go:89] found id: "02034eb7de2e4e70a79286d3c5ecb72e9846a9c0a8c88cc24e50797de7e18e76"
	I0318 13:40:03.036505 1153618 cri.go:89] found id: "2da96950a987f034f8bee5398861473b470130d82b237f71cb3fc3e7dbfbf1db"
	I0318 13:40:03.036509 1153618 cri.go:89] found id: "4b274991ccd1eb9db53ecb64dafb38bfa52e2759700a10a4bc7a82f1738745e5"
	I0318 13:40:03.036514 1153618 cri.go:89] found id: ""
	I0318 13:40:03.036570 1153618 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-760389 -n pause-760389
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-760389 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-760389 logs -n 25: (3.566302131s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-990886 sudo cat             | cilium-990886             | jenkins | v1.32.0 | 18 Mar 24 13:35 UTC |                     |
	|         | /etc/containerd/config.toml           |                           |         |         |                     |                     |
	| ssh     | -p cilium-990886 sudo                 | cilium-990886             | jenkins | v1.32.0 | 18 Mar 24 13:35 UTC |                     |
	|         | containerd config dump                |                           |         |         |                     |                     |
	| ssh     | -p cilium-990886 sudo                 | cilium-990886             | jenkins | v1.32.0 | 18 Mar 24 13:35 UTC |                     |
	|         | systemctl status crio --all           |                           |         |         |                     |                     |
	|         | --full --no-pager                     |                           |         |         |                     |                     |
	| ssh     | -p cilium-990886 sudo                 | cilium-990886             | jenkins | v1.32.0 | 18 Mar 24 13:35 UTC |                     |
	|         | systemctl cat crio --no-pager         |                           |         |         |                     |                     |
	| ssh     | -p cilium-990886 sudo find            | cilium-990886             | jenkins | v1.32.0 | 18 Mar 24 13:35 UTC |                     |
	|         | /etc/crio -type f -exec sh -c         |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                  |                           |         |         |                     |                     |
	| ssh     | -p cilium-990886 sudo crio            | cilium-990886             | jenkins | v1.32.0 | 18 Mar 24 13:35 UTC |                     |
	|         | config                                |                           |         |         |                     |                     |
	| delete  | -p cilium-990886                      | cilium-990886             | jenkins | v1.32.0 | 18 Mar 24 13:35 UTC | 18 Mar 24 13:35 UTC |
	| start   | -p cert-expiration-537883             | cert-expiration-537883    | jenkins | v1.32.0 | 18 Mar 24 13:35 UTC | 18 Mar 24 13:37 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-375732           | force-systemd-env-375732  | jenkins | v1.32.0 | 18 Mar 24 13:36 UTC | 18 Mar 24 13:36 UTC |
	| start   | -p force-systemd-flag-042940          | force-systemd-flag-042940 | jenkins | v1.32.0 | 18 Mar 24 13:36 UTC | 18 Mar 24 13:37 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-599578          | kubernetes-upgrade-599578 | jenkins | v1.32.0 | 18 Mar 24 13:36 UTC | 18 Mar 24 13:36 UTC |
	| start   | -p kubernetes-upgrade-599578          | kubernetes-upgrade-599578 | jenkins | v1.32.0 | 18 Mar 24 13:36 UTC | 18 Mar 24 13:38 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2     |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-042940 ssh cat     | force-systemd-flag-042940 | jenkins | v1.32.0 | 18 Mar 24 13:37 UTC | 18 Mar 24 13:37 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-042940          | force-systemd-flag-042940 | jenkins | v1.32.0 | 18 Mar 24 13:37 UTC | 18 Mar 24 13:37 UTC |
	| start   | -p cert-options-959907                | cert-options-959907       | jenkins | v1.32.0 | 18 Mar 24 13:37 UTC | 18 Mar 24 13:38 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-760389                       | pause-760389              | jenkins | v1.32.0 | 18 Mar 24 13:37 UTC | 18 Mar 24 13:40 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-599578          | kubernetes-upgrade-599578 | jenkins | v1.32.0 | 18 Mar 24 13:38 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-599578          | kubernetes-upgrade-599578 | jenkins | v1.32.0 | 18 Mar 24 13:38 UTC | 18 Mar 24 13:39 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2     |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-959907 ssh               | cert-options-959907       | jenkins | v1.32.0 | 18 Mar 24 13:38 UTC | 18 Mar 24 13:38 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-959907 -- sudo        | cert-options-959907       | jenkins | v1.32.0 | 18 Mar 24 13:38 UTC | 18 Mar 24 13:38 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-959907                | cert-options-959907       | jenkins | v1.32.0 | 18 Mar 24 13:38 UTC | 18 Mar 24 13:38 UTC |
	| start   | -p old-k8s-version-909137             | old-k8s-version-909137    | jenkins | v1.32.0 | 18 Mar 24 13:38 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --kvm-network=default                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts               |                           |         |         |                     |                     |
	|         | --keep-context=false                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-599578          | kubernetes-upgrade-599578 | jenkins | v1.32.0 | 18 Mar 24 13:39 UTC | 18 Mar 24 13:39 UTC |
	| start   | -p no-preload-537236                  | no-preload-537236         | jenkins | v1.32.0 | 18 Mar 24 13:39 UTC |                     |
	|         | --memory=2200 --alsologtostderr       |                           |         |         |                     |                     |
	|         | --wait=true --preload=false           |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2     |                           |         |         |                     |                     |
	| start   | -p cert-expiration-537883             | cert-expiration-537883    | jenkins | v1.32.0 | 18 Mar 24 13:40 UTC |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 13:40:20
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 13:40:20.541843 1155127 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:40:20.542126 1155127 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:40:20.542132 1155127 out.go:304] Setting ErrFile to fd 2...
	I0318 13:40:20.542135 1155127 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:40:20.542296 1155127 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
	I0318 13:40:20.542836 1155127 out.go:298] Setting JSON to false
	I0318 13:40:20.543999 1155127 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":19367,"bootTime":1710749853,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 13:40:20.544057 1155127 start.go:139] virtualization: kvm guest
	I0318 13:40:20.547211 1155127 out.go:177] * [cert-expiration-537883] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 13:40:20.548931 1155127 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 13:40:20.548947 1155127 notify.go:220] Checking for updates...
	I0318 13:40:20.550497 1155127 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:40:20.551726 1155127 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 13:40:20.553010 1155127 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18429-1106816/.minikube
	I0318 13:40:20.554355 1155127 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 13:40:20.555504 1155127 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:40:20.557159 1155127 config.go:182] Loaded profile config "cert-expiration-537883": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:40:20.557515 1155127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:40:20.557565 1155127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:40:20.572602 1155127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41087
	I0318 13:40:20.573017 1155127 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:40:20.573514 1155127 main.go:141] libmachine: Using API Version  1
	I0318 13:40:20.573531 1155127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:40:20.573900 1155127 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:40:20.574136 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .DriverName
	I0318 13:40:20.574385 1155127 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:40:20.574661 1155127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:40:20.574692 1155127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:40:20.590110 1155127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44151
	I0318 13:40:20.590552 1155127 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:40:20.591043 1155127 main.go:141] libmachine: Using API Version  1
	I0318 13:40:20.591064 1155127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:40:20.591372 1155127 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:40:20.591560 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .DriverName
	I0318 13:40:20.626799 1155127 out.go:177] * Using the kvm2 driver based on existing profile
	I0318 13:40:20.628211 1155127 start.go:297] selected driver: kvm2
	I0318 13:40:20.628230 1155127 start.go:901] validating driver "kvm2" against &{Name:cert-expiration-537883 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:cert-expiration-537883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.50 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:40:20.628409 1155127 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:40:20.629133 1155127 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:40:20.629199 1155127 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18429-1106816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 13:40:20.644035 1155127 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 13:40:20.644434 1155127 cni.go:84] Creating CNI manager for ""
	I0318 13:40:20.644449 1155127 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:40:20.644523 1155127 start.go:340] cluster config:
	{Name:cert-expiration-537883 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:cert-expiration-537883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.50 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:40:20.644621 1155127 iso.go:125] acquiring lock: {Name:mke5f9989ad60de6f54f25c411af7da9f3932a4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:40:20.646451 1155127 out.go:177] * Starting "cert-expiration-537883" primary control-plane node in "cert-expiration-537883" cluster
	I0318 13:40:20.647867 1155127 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 13:40:20.647890 1155127 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0318 13:40:20.647895 1155127 cache.go:56] Caching tarball of preloaded images
	I0318 13:40:20.647981 1155127 preload.go:173] Found /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 13:40:20.647987 1155127 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0318 13:40:20.648075 1155127 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/cert-expiration-537883/config.json ...
	I0318 13:40:20.648247 1155127 start.go:360] acquireMachinesLock for cert-expiration-537883: {Name:mk0b1a2e71faf079d0c16c4e1393bdff17be3dfd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:40:20.648289 1155127 start.go:364] duration metric: took 30.913µs to acquireMachinesLock for "cert-expiration-537883"
	I0318 13:40:20.648304 1155127 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:40:20.648310 1155127 fix.go:54] fixHost starting: 
	I0318 13:40:20.648747 1155127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:40:20.648778 1155127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:40:20.662691 1155127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35443
	I0318 13:40:20.663095 1155127 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:40:20.663582 1155127 main.go:141] libmachine: Using API Version  1
	I0318 13:40:20.663591 1155127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:40:20.663869 1155127 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:40:20.664055 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .DriverName
	I0318 13:40:20.664184 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetState
	I0318 13:40:20.665820 1155127 fix.go:112] recreateIfNeeded on cert-expiration-537883: state=Running err=<nil>
	W0318 13:40:20.665831 1155127 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:40:20.667598 1155127 out.go:177] * Updating the running kvm2 "cert-expiration-537883" VM ...
	I0318 13:40:21.460120 1153618 pod_ready.go:102] pod "kube-apiserver-pause-760389" in "kube-system" namespace has status "Ready":"False"
	I0318 13:40:23.460867 1153618 pod_ready.go:102] pod "kube-apiserver-pause-760389" in "kube-system" namespace has status "Ready":"False"
	I0318 13:40:20.668937 1155127 machine.go:94] provisionDockerMachine start ...
	I0318 13:40:20.668951 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .DriverName
	I0318 13:40:20.669161 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHHostname
	I0318 13:40:20.671836 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | domain cert-expiration-537883 has defined MAC address 52:54:00:f1:10:e1 in network mk-cert-expiration-537883
	I0318 13:40:20.672209 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:10:e1", ip: ""} in network mk-cert-expiration-537883: {Iface:virbr3 ExpiryTime:2024-03-18 14:36:49 +0000 UTC Type:0 Mac:52:54:00:f1:10:e1 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:cert-expiration-537883 Clientid:01:52:54:00:f1:10:e1}
	I0318 13:40:20.672229 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | domain cert-expiration-537883 has defined IP address 192.168.61.50 and MAC address 52:54:00:f1:10:e1 in network mk-cert-expiration-537883
	I0318 13:40:20.672406 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHPort
	I0318 13:40:20.672576 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHKeyPath
	I0318 13:40:20.672720 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHKeyPath
	I0318 13:40:20.672827 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHUsername
	I0318 13:40:20.672975 1155127 main.go:141] libmachine: Using SSH client type: native
	I0318 13:40:20.673202 1155127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0318 13:40:20.673209 1155127 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 13:40:20.790830 1155127 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-537883
	
	I0318 13:40:20.790851 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetMachineName
	I0318 13:40:20.791120 1155127 buildroot.go:166] provisioning hostname "cert-expiration-537883"
	I0318 13:40:20.791144 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetMachineName
	I0318 13:40:20.791324 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHHostname
	I0318 13:40:20.794221 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | domain cert-expiration-537883 has defined MAC address 52:54:00:f1:10:e1 in network mk-cert-expiration-537883
	I0318 13:40:20.794587 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:10:e1", ip: ""} in network mk-cert-expiration-537883: {Iface:virbr3 ExpiryTime:2024-03-18 14:36:49 +0000 UTC Type:0 Mac:52:54:00:f1:10:e1 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:cert-expiration-537883 Clientid:01:52:54:00:f1:10:e1}
	I0318 13:40:20.794618 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | domain cert-expiration-537883 has defined IP address 192.168.61.50 and MAC address 52:54:00:f1:10:e1 in network mk-cert-expiration-537883
	I0318 13:40:20.794740 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHPort
	I0318 13:40:20.794938 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHKeyPath
	I0318 13:40:20.795095 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHKeyPath
	I0318 13:40:20.795200 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHUsername
	I0318 13:40:20.795317 1155127 main.go:141] libmachine: Using SSH client type: native
	I0318 13:40:20.795531 1155127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0318 13:40:20.795541 1155127 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-537883 && echo "cert-expiration-537883" | sudo tee /etc/hostname
	I0318 13:40:20.924016 1155127 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-537883
	
	I0318 13:40:20.924047 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHHostname
	I0318 13:40:20.926804 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | domain cert-expiration-537883 has defined MAC address 52:54:00:f1:10:e1 in network mk-cert-expiration-537883
	I0318 13:40:20.927165 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:10:e1", ip: ""} in network mk-cert-expiration-537883: {Iface:virbr3 ExpiryTime:2024-03-18 14:36:49 +0000 UTC Type:0 Mac:52:54:00:f1:10:e1 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:cert-expiration-537883 Clientid:01:52:54:00:f1:10:e1}
	I0318 13:40:20.927190 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | domain cert-expiration-537883 has defined IP address 192.168.61.50 and MAC address 52:54:00:f1:10:e1 in network mk-cert-expiration-537883
	I0318 13:40:20.927350 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHPort
	I0318 13:40:20.927538 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHKeyPath
	I0318 13:40:20.927712 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHKeyPath
	I0318 13:40:20.927843 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHUsername
	I0318 13:40:20.927991 1155127 main.go:141] libmachine: Using SSH client type: native
	I0318 13:40:20.928164 1155127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0318 13:40:20.928208 1155127 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-537883' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-537883/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-537883' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 13:40:21.041802 1155127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:40:21.041822 1155127 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18429-1106816/.minikube CaCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18429-1106816/.minikube}
	I0318 13:40:21.041874 1155127 buildroot.go:174] setting up certificates
	I0318 13:40:21.041886 1155127 provision.go:84] configureAuth start
	I0318 13:40:21.041895 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetMachineName
	I0318 13:40:21.042206 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetIP
	I0318 13:40:21.045049 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | domain cert-expiration-537883 has defined MAC address 52:54:00:f1:10:e1 in network mk-cert-expiration-537883
	I0318 13:40:21.045332 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:10:e1", ip: ""} in network mk-cert-expiration-537883: {Iface:virbr3 ExpiryTime:2024-03-18 14:36:49 +0000 UTC Type:0 Mac:52:54:00:f1:10:e1 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:cert-expiration-537883 Clientid:01:52:54:00:f1:10:e1}
	I0318 13:40:21.045351 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | domain cert-expiration-537883 has defined IP address 192.168.61.50 and MAC address 52:54:00:f1:10:e1 in network mk-cert-expiration-537883
	I0318 13:40:21.045478 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHHostname
	I0318 13:40:21.047770 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | domain cert-expiration-537883 has defined MAC address 52:54:00:f1:10:e1 in network mk-cert-expiration-537883
	I0318 13:40:21.048141 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:10:e1", ip: ""} in network mk-cert-expiration-537883: {Iface:virbr3 ExpiryTime:2024-03-18 14:36:49 +0000 UTC Type:0 Mac:52:54:00:f1:10:e1 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:cert-expiration-537883 Clientid:01:52:54:00:f1:10:e1}
	I0318 13:40:21.048158 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | domain cert-expiration-537883 has defined IP address 192.168.61.50 and MAC address 52:54:00:f1:10:e1 in network mk-cert-expiration-537883
	I0318 13:40:21.048252 1155127 provision.go:143] copyHostCerts
	I0318 13:40:21.048313 1155127 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem, removing ...
	I0318 13:40:21.048319 1155127 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 13:40:21.048417 1155127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem (1679 bytes)
	I0318 13:40:21.048501 1155127 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem, removing ...
	I0318 13:40:21.048505 1155127 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 13:40:21.048528 1155127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem (1078 bytes)
	I0318 13:40:21.048584 1155127 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem, removing ...
	I0318 13:40:21.048587 1155127 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 13:40:21.048605 1155127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem (1123 bytes)
	I0318 13:40:21.048644 1155127 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-537883 san=[127.0.0.1 192.168.61.50 cert-expiration-537883 localhost minikube]
	I0318 13:40:21.259201 1155127 provision.go:177] copyRemoteCerts
	I0318 13:40:21.259253 1155127 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 13:40:21.259275 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHHostname
	I0318 13:40:21.262181 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | domain cert-expiration-537883 has defined MAC address 52:54:00:f1:10:e1 in network mk-cert-expiration-537883
	I0318 13:40:21.262510 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:10:e1", ip: ""} in network mk-cert-expiration-537883: {Iface:virbr3 ExpiryTime:2024-03-18 14:36:49 +0000 UTC Type:0 Mac:52:54:00:f1:10:e1 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:cert-expiration-537883 Clientid:01:52:54:00:f1:10:e1}
	I0318 13:40:21.262530 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | domain cert-expiration-537883 has defined IP address 192.168.61.50 and MAC address 52:54:00:f1:10:e1 in network mk-cert-expiration-537883
	I0318 13:40:21.262689 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHPort
	I0318 13:40:21.262913 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHKeyPath
	I0318 13:40:21.263087 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHUsername
	I0318 13:40:21.263189 1155127 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/cert-expiration-537883/id_rsa Username:docker}
	I0318 13:40:21.353309 1155127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 13:40:21.382293 1155127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0318 13:40:21.410426 1155127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 13:40:21.438863 1155127 provision.go:87] duration metric: took 396.965346ms to configureAuth
	I0318 13:40:21.438888 1155127 buildroot.go:189] setting minikube options for container-runtime
	I0318 13:40:21.439081 1155127 config.go:182] Loaded profile config "cert-expiration-537883": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:40:21.439155 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHHostname
	I0318 13:40:21.442179 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | domain cert-expiration-537883 has defined MAC address 52:54:00:f1:10:e1 in network mk-cert-expiration-537883
	I0318 13:40:21.442555 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:10:e1", ip: ""} in network mk-cert-expiration-537883: {Iface:virbr3 ExpiryTime:2024-03-18 14:36:49 +0000 UTC Type:0 Mac:52:54:00:f1:10:e1 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:cert-expiration-537883 Clientid:01:52:54:00:f1:10:e1}
	I0318 13:40:21.442572 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | domain cert-expiration-537883 has defined IP address 192.168.61.50 and MAC address 52:54:00:f1:10:e1 in network mk-cert-expiration-537883
	I0318 13:40:21.442720 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHPort
	I0318 13:40:21.442950 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHKeyPath
	I0318 13:40:21.443119 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHKeyPath
	I0318 13:40:21.443306 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHUsername
	I0318 13:40:21.443505 1155127 main.go:141] libmachine: Using SSH client type: native
	I0318 13:40:21.443708 1155127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0318 13:40:21.443718 1155127 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 13:40:25.962983 1153618 pod_ready.go:102] pod "kube-apiserver-pause-760389" in "kube-system" namespace has status "Ready":"False"
	I0318 13:40:26.470723 1153618 pod_ready.go:92] pod "kube-apiserver-pause-760389" in "kube-system" namespace has status "Ready":"True"
	I0318 13:40:26.470754 1153618 pod_ready.go:81] duration metric: took 7.018025307s for pod "kube-apiserver-pause-760389" in "kube-system" namespace to be "Ready" ...
	I0318 13:40:26.470768 1153618 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-760389" in "kube-system" namespace to be "Ready" ...
	I0318 13:40:26.478990 1153618 pod_ready.go:92] pod "kube-controller-manager-pause-760389" in "kube-system" namespace has status "Ready":"True"
	I0318 13:40:26.479016 1153618 pod_ready.go:81] duration metric: took 8.238913ms for pod "kube-controller-manager-pause-760389" in "kube-system" namespace to be "Ready" ...
	I0318 13:40:26.479027 1153618 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mmxmq" in "kube-system" namespace to be "Ready" ...
	I0318 13:40:26.489669 1153618 pod_ready.go:92] pod "kube-proxy-mmxmq" in "kube-system" namespace has status "Ready":"True"
	I0318 13:40:26.489694 1153618 pod_ready.go:81] duration metric: took 10.659006ms for pod "kube-proxy-mmxmq" in "kube-system" namespace to be "Ready" ...
	I0318 13:40:26.489702 1153618 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-760389" in "kube-system" namespace to be "Ready" ...
	I0318 13:40:26.509269 1153618 pod_ready.go:92] pod "kube-scheduler-pause-760389" in "kube-system" namespace has status "Ready":"True"
	I0318 13:40:26.509298 1153618 pod_ready.go:81] duration metric: took 19.588627ms for pod "kube-scheduler-pause-760389" in "kube-system" namespace to be "Ready" ...
	I0318 13:40:26.509307 1153618 pod_ready.go:38] duration metric: took 11.075178423s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:40:26.509330 1153618 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 13:40:26.525179 1153618 ops.go:34] apiserver oom_adj: -16
	I0318 13:40:26.525237 1153618 kubeadm.go:591] duration metric: took 23.416603733s to restartPrimaryControlPlane
	I0318 13:40:26.525251 1153618 kubeadm.go:393] duration metric: took 23.53218508s to StartCluster
	I0318 13:40:26.525272 1153618 settings.go:142] acquiring lock: {Name:mk2d6b94ee5fa5f1dbbb15ba1d5560c3c0f78110 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:40:26.525362 1153618 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 13:40:26.526662 1153618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/kubeconfig: {Name:mk9c139f2702214315ee08dd7c5d02f739047458 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:40:26.526916 1153618 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.203 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 13:40:26.528899 1153618 out.go:177] * Verifying Kubernetes components...
	I0318 13:40:26.526981 1153618 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 13:40:26.527202 1153618 config.go:182] Loaded profile config "pause-760389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:40:26.530420 1153618 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:40:26.531726 1153618 out.go:177] * Enabled addons: 
	I0318 13:40:27.105498 1155127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 13:40:27.105513 1155127 machine.go:97] duration metric: took 6.436568523s to provisionDockerMachine
	I0318 13:40:27.105524 1155127 start.go:293] postStartSetup for "cert-expiration-537883" (driver="kvm2")
	I0318 13:40:27.105534 1155127 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 13:40:27.105550 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .DriverName
	I0318 13:40:27.106012 1155127 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 13:40:27.106045 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHHostname
	I0318 13:40:27.108744 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | domain cert-expiration-537883 has defined MAC address 52:54:00:f1:10:e1 in network mk-cert-expiration-537883
	I0318 13:40:27.109068 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:10:e1", ip: ""} in network mk-cert-expiration-537883: {Iface:virbr3 ExpiryTime:2024-03-18 14:36:49 +0000 UTC Type:0 Mac:52:54:00:f1:10:e1 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:cert-expiration-537883 Clientid:01:52:54:00:f1:10:e1}
	I0318 13:40:27.109086 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | domain cert-expiration-537883 has defined IP address 192.168.61.50 and MAC address 52:54:00:f1:10:e1 in network mk-cert-expiration-537883
	I0318 13:40:27.109303 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHPort
	I0318 13:40:27.109493 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHKeyPath
	I0318 13:40:27.109672 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHUsername
	I0318 13:40:27.109790 1155127 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/cert-expiration-537883/id_rsa Username:docker}
	I0318 13:40:27.196215 1155127 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 13:40:27.201494 1155127 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 13:40:27.201509 1155127 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/addons for local assets ...
	I0318 13:40:27.201569 1155127 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/files for local assets ...
	I0318 13:40:27.201635 1155127 filesync.go:149] local asset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> 11141362.pem in /etc/ssl/certs
	I0318 13:40:27.201715 1155127 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 13:40:27.213124 1155127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:40:27.240439 1155127 start.go:296] duration metric: took 134.896944ms for postStartSetup
	I0318 13:40:27.240474 1155127 fix.go:56] duration metric: took 6.592164667s for fixHost
	I0318 13:40:27.240528 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHHostname
	I0318 13:40:27.243152 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | domain cert-expiration-537883 has defined MAC address 52:54:00:f1:10:e1 in network mk-cert-expiration-537883
	I0318 13:40:27.243453 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:10:e1", ip: ""} in network mk-cert-expiration-537883: {Iface:virbr3 ExpiryTime:2024-03-18 14:36:49 +0000 UTC Type:0 Mac:52:54:00:f1:10:e1 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:cert-expiration-537883 Clientid:01:52:54:00:f1:10:e1}
	I0318 13:40:27.243480 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | domain cert-expiration-537883 has defined IP address 192.168.61.50 and MAC address 52:54:00:f1:10:e1 in network mk-cert-expiration-537883
	I0318 13:40:27.243648 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHPort
	I0318 13:40:27.243805 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHKeyPath
	I0318 13:40:27.243915 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHKeyPath
	I0318 13:40:27.244007 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHUsername
	I0318 13:40:27.244197 1155127 main.go:141] libmachine: Using SSH client type: native
	I0318 13:40:27.244428 1155127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0318 13:40:27.244433 1155127 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 13:40:27.353442 1155127 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710769227.342092028
	
	I0318 13:40:27.353458 1155127 fix.go:216] guest clock: 1710769227.342092028
	I0318 13:40:27.353468 1155127 fix.go:229] Guest: 2024-03-18 13:40:27.342092028 +0000 UTC Remote: 2024-03-18 13:40:27.240477656 +0000 UTC m=+6.749879830 (delta=101.614372ms)
	I0318 13:40:27.353506 1155127 fix.go:200] guest clock delta is within tolerance: 101.614372ms
	I0318 13:40:27.353510 1155127 start.go:83] releasing machines lock for "cert-expiration-537883", held for 6.70521629s
	I0318 13:40:27.353529 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .DriverName
	I0318 13:40:27.353765 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetIP
	I0318 13:40:27.356359 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | domain cert-expiration-537883 has defined MAC address 52:54:00:f1:10:e1 in network mk-cert-expiration-537883
	I0318 13:40:27.356696 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:10:e1", ip: ""} in network mk-cert-expiration-537883: {Iface:virbr3 ExpiryTime:2024-03-18 14:36:49 +0000 UTC Type:0 Mac:52:54:00:f1:10:e1 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:cert-expiration-537883 Clientid:01:52:54:00:f1:10:e1}
	I0318 13:40:27.356719 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | domain cert-expiration-537883 has defined IP address 192.168.61.50 and MAC address 52:54:00:f1:10:e1 in network mk-cert-expiration-537883
	I0318 13:40:27.356859 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .DriverName
	I0318 13:40:27.357341 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .DriverName
	I0318 13:40:27.357519 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .DriverName
	I0318 13:40:27.357594 1155127 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 13:40:27.357632 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHHostname
	I0318 13:40:27.357685 1155127 ssh_runner.go:195] Run: cat /version.json
	I0318 13:40:27.357699 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHHostname
	I0318 13:40:27.360259 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | domain cert-expiration-537883 has defined MAC address 52:54:00:f1:10:e1 in network mk-cert-expiration-537883
	I0318 13:40:27.360552 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | domain cert-expiration-537883 has defined MAC address 52:54:00:f1:10:e1 in network mk-cert-expiration-537883
	I0318 13:40:27.360639 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:10:e1", ip: ""} in network mk-cert-expiration-537883: {Iface:virbr3 ExpiryTime:2024-03-18 14:36:49 +0000 UTC Type:0 Mac:52:54:00:f1:10:e1 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:cert-expiration-537883 Clientid:01:52:54:00:f1:10:e1}
	I0318 13:40:27.360675 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | domain cert-expiration-537883 has defined IP address 192.168.61.50 and MAC address 52:54:00:f1:10:e1 in network mk-cert-expiration-537883
	I0318 13:40:27.360820 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHPort
	I0318 13:40:27.360861 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:10:e1", ip: ""} in network mk-cert-expiration-537883: {Iface:virbr3 ExpiryTime:2024-03-18 14:36:49 +0000 UTC Type:0 Mac:52:54:00:f1:10:e1 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:cert-expiration-537883 Clientid:01:52:54:00:f1:10:e1}
	I0318 13:40:27.360886 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | domain cert-expiration-537883 has defined IP address 192.168.61.50 and MAC address 52:54:00:f1:10:e1 in network mk-cert-expiration-537883
	I0318 13:40:27.360964 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHKeyPath
	I0318 13:40:27.361055 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHPort
	I0318 13:40:27.361132 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHUsername
	I0318 13:40:27.361189 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHKeyPath
	I0318 13:40:27.361245 1155127 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/cert-expiration-537883/id_rsa Username:docker}
	I0318 13:40:27.361303 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHUsername
	I0318 13:40:27.361435 1155127 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/cert-expiration-537883/id_rsa Username:docker}
	I0318 13:40:27.454065 1155127 ssh_runner.go:195] Run: systemctl --version
	I0318 13:40:27.474678 1155127 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 13:40:27.637199 1155127 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 13:40:27.644318 1155127 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 13:40:27.644403 1155127 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 13:40:27.654662 1155127 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0318 13:40:27.654686 1155127 start.go:494] detecting cgroup driver to use...
	I0318 13:40:27.654756 1155127 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 13:40:27.672161 1155127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:40:27.687710 1155127 docker.go:217] disabling cri-docker service (if available) ...
	I0318 13:40:27.687750 1155127 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 13:40:27.703324 1155127 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 13:40:27.717925 1155127 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 13:40:27.857767 1155127 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 13:40:27.998126 1155127 docker.go:233] disabling docker service ...
	I0318 13:40:27.998191 1155127 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 13:40:28.015879 1155127 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 13:40:28.030268 1155127 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 13:40:28.173734 1155127 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 13:40:28.309650 1155127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 13:40:28.325076 1155127 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:40:28.346848 1155127 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 13:40:28.346903 1155127 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:40:28.359477 1155127 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 13:40:28.359563 1155127 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:40:28.370999 1155127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:40:28.382400 1155127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:40:28.393822 1155127 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 13:40:28.406145 1155127 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 13:40:28.417510 1155127 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 13:40:28.427395 1155127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:40:28.577169 1155127 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 13:40:28.837839 1155127 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 13:40:28.837912 1155127 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 13:40:28.844043 1155127 start.go:562] Will wait 60s for crictl version
	I0318 13:40:28.844099 1155127 ssh_runner.go:195] Run: which crictl
	I0318 13:40:28.848688 1155127 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 13:40:28.897551 1155127 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 13:40:28.897624 1155127 ssh_runner.go:195] Run: crio --version
	I0318 13:40:28.928034 1155127 ssh_runner.go:195] Run: crio --version
	I0318 13:40:28.965524 1155127 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 13:40:26.532932 1153618 addons.go:505] duration metric: took 5.963699ms for enable addons: enabled=[]
	I0318 13:40:26.714142 1153618 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:40:26.732839 1153618 node_ready.go:35] waiting up to 6m0s for node "pause-760389" to be "Ready" ...
	I0318 13:40:26.736263 1153618 node_ready.go:49] node "pause-760389" has status "Ready":"True"
	I0318 13:40:26.736294 1153618 node_ready.go:38] duration metric: took 3.420285ms for node "pause-760389" to be "Ready" ...
	I0318 13:40:26.736307 1153618 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:40:26.742177 1153618 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-tbmwc" in "kube-system" namespace to be "Ready" ...
	I0318 13:40:26.858111 1153618 pod_ready.go:92] pod "coredns-5dd5756b68-tbmwc" in "kube-system" namespace has status "Ready":"True"
	I0318 13:40:26.858135 1153618 pod_ready.go:81] duration metric: took 115.926782ms for pod "coredns-5dd5756b68-tbmwc" in "kube-system" namespace to be "Ready" ...
	I0318 13:40:26.858146 1153618 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-760389" in "kube-system" namespace to be "Ready" ...
	I0318 13:40:27.261785 1153618 pod_ready.go:92] pod "etcd-pause-760389" in "kube-system" namespace has status "Ready":"True"
	I0318 13:40:27.261807 1153618 pod_ready.go:81] duration metric: took 403.655683ms for pod "etcd-pause-760389" in "kube-system" namespace to be "Ready" ...
	I0318 13:40:27.261818 1153618 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-760389" in "kube-system" namespace to be "Ready" ...
	I0318 13:40:27.657515 1153618 pod_ready.go:92] pod "kube-apiserver-pause-760389" in "kube-system" namespace has status "Ready":"True"
	I0318 13:40:27.657542 1153618 pod_ready.go:81] duration metric: took 395.717538ms for pod "kube-apiserver-pause-760389" in "kube-system" namespace to be "Ready" ...
	I0318 13:40:27.657552 1153618 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-760389" in "kube-system" namespace to be "Ready" ...
	I0318 13:40:28.058402 1153618 pod_ready.go:92] pod "kube-controller-manager-pause-760389" in "kube-system" namespace has status "Ready":"True"
	I0318 13:40:28.058429 1153618 pod_ready.go:81] duration metric: took 400.871292ms for pod "kube-controller-manager-pause-760389" in "kube-system" namespace to be "Ready" ...
	I0318 13:40:28.058439 1153618 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mmxmq" in "kube-system" namespace to be "Ready" ...
	I0318 13:40:28.457093 1153618 pod_ready.go:92] pod "kube-proxy-mmxmq" in "kube-system" namespace has status "Ready":"True"
	I0318 13:40:28.457123 1153618 pod_ready.go:81] duration metric: took 398.672285ms for pod "kube-proxy-mmxmq" in "kube-system" namespace to be "Ready" ...
	I0318 13:40:28.457132 1153618 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-760389" in "kube-system" namespace to be "Ready" ...
	I0318 13:40:28.858291 1153618 pod_ready.go:92] pod "kube-scheduler-pause-760389" in "kube-system" namespace has status "Ready":"True"
	I0318 13:40:28.858327 1153618 pod_ready.go:81] duration metric: took 401.187387ms for pod "kube-scheduler-pause-760389" in "kube-system" namespace to be "Ready" ...
	I0318 13:40:28.858341 1153618 pod_ready.go:38] duration metric: took 2.122020305s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:40:28.858361 1153618 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:40:28.858433 1153618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:40:28.876126 1153618 api_server.go:72] duration metric: took 2.349177499s to wait for apiserver process to appear ...
	I0318 13:40:28.876150 1153618 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:40:28.876168 1153618 api_server.go:253] Checking apiserver healthz at https://192.168.50.203:8443/healthz ...
	I0318 13:40:28.881101 1153618 api_server.go:279] https://192.168.50.203:8443/healthz returned 200:
	ok
	I0318 13:40:28.882317 1153618 api_server.go:141] control plane version: v1.28.4
	I0318 13:40:28.882347 1153618 api_server.go:131] duration metric: took 6.189206ms to wait for apiserver health ...
	I0318 13:40:28.882357 1153618 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 13:40:29.062791 1153618 system_pods.go:59] 6 kube-system pods found
	I0318 13:40:29.062830 1153618 system_pods.go:61] "coredns-5dd5756b68-tbmwc" [9f39aebe-7698-4aeb-9f8e-773dfe8d01ae] Running
	I0318 13:40:29.062836 1153618 system_pods.go:61] "etcd-pause-760389" [fb1c2278-e9f1-44c3-85e4-3e8cf62b63f0] Running
	I0318 13:40:29.062840 1153618 system_pods.go:61] "kube-apiserver-pause-760389" [cc7cfded-8931-4dab-a5e3-844cf05c4fb5] Running
	I0318 13:40:29.062845 1153618 system_pods.go:61] "kube-controller-manager-pause-760389" [30773a27-56bf-4d4a-829f-474f0f992d8c] Running
	I0318 13:40:29.062850 1153618 system_pods.go:61] "kube-proxy-mmxmq" [ab219bf0-9e1d-4170-ae1d-0c19aee8d50a] Running
	I0318 13:40:29.062854 1153618 system_pods.go:61] "kube-scheduler-pause-760389" [9fa80081-5981-47b8-9d70-8363fdb2e37c] Running
	I0318 13:40:29.062863 1153618 system_pods.go:74] duration metric: took 180.497081ms to wait for pod list to return data ...
	I0318 13:40:29.062872 1153618 default_sa.go:34] waiting for default service account to be created ...
	I0318 13:40:29.257864 1153618 default_sa.go:45] found service account: "default"
	I0318 13:40:29.257892 1153618 default_sa.go:55] duration metric: took 195.007886ms for default service account to be created ...
	I0318 13:40:29.257902 1153618 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 13:40:29.461491 1153618 system_pods.go:86] 6 kube-system pods found
	I0318 13:40:29.461532 1153618 system_pods.go:89] "coredns-5dd5756b68-tbmwc" [9f39aebe-7698-4aeb-9f8e-773dfe8d01ae] Running
	I0318 13:40:29.461539 1153618 system_pods.go:89] "etcd-pause-760389" [fb1c2278-e9f1-44c3-85e4-3e8cf62b63f0] Running
	I0318 13:40:29.461545 1153618 system_pods.go:89] "kube-apiserver-pause-760389" [cc7cfded-8931-4dab-a5e3-844cf05c4fb5] Running
	I0318 13:40:29.461552 1153618 system_pods.go:89] "kube-controller-manager-pause-760389" [30773a27-56bf-4d4a-829f-474f0f992d8c] Running
	I0318 13:40:29.461558 1153618 system_pods.go:89] "kube-proxy-mmxmq" [ab219bf0-9e1d-4170-ae1d-0c19aee8d50a] Running
	I0318 13:40:29.461564 1153618 system_pods.go:89] "kube-scheduler-pause-760389" [9fa80081-5981-47b8-9d70-8363fdb2e37c] Running
	I0318 13:40:29.461574 1153618 system_pods.go:126] duration metric: took 203.665403ms to wait for k8s-apps to be running ...
	I0318 13:40:29.461583 1153618 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 13:40:29.461637 1153618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:40:29.484045 1153618 system_svc.go:56] duration metric: took 22.449563ms WaitForService to wait for kubelet
	I0318 13:40:29.484080 1153618 kubeadm.go:576] duration metric: took 2.957135182s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:40:29.484100 1153618 node_conditions.go:102] verifying NodePressure condition ...
	I0318 13:40:29.660902 1153618 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:40:29.660931 1153618 node_conditions.go:123] node cpu capacity is 2
	I0318 13:40:29.660976 1153618 node_conditions.go:105] duration metric: took 176.86973ms to run NodePressure ...
	I0318 13:40:29.660992 1153618 start.go:240] waiting for startup goroutines ...
	I0318 13:40:29.661004 1153618 start.go:245] waiting for cluster config update ...
	I0318 13:40:29.661015 1153618 start.go:254] writing updated cluster config ...
	I0318 13:40:29.661408 1153618 ssh_runner.go:195] Run: rm -f paused
	I0318 13:40:29.726956 1153618 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 13:40:29.728705 1153618 out.go:177] * Done! kubectl is now configured to use "pause-760389" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Mar 18 13:40:30 pause-760389 crio[2648]: time="2024-03-18 13:40:30.514475242Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4288b49d-8581-47d9-96c4-d05f5b5fcb23 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:40:30 pause-760389 crio[2648]: time="2024-03-18 13:40:30.515062358Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:82023a44680803f047157ce3e4f1b957c1ca0751b9255fda287895acca79da8c,PodSandboxId:d04640f12d10f0544ac4b0942f57fae05c233ec7aa7e3b19ec149a56d670f63b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710769213891228191,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mmxmq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab219bf0-9e1d-4170-ae1d-0c19aee8d50a,},Annotations:map[string]string{io.kubernetes.container.hash: bfebe8dd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dca755dd1fa877e884a91d6c55b87dea3604a7d99e75514e0d321787f8747b91,PodSandboxId:8e0c7401f6c4a06b88b7e26f1af6badc3d286618645ab003170fd985e391b4a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710769205754298318,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-tbmwc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f39aebe-7698-4aeb-9f8e-773dfe8d01ae,},Annotations:map[string]string{io.kubernetes.container.hash: 5ef385b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b538eddcae3c73f8df74cb5b6d71fa942284bf7a1b06e6e1caf227320a5294f,PodSandboxId:c3e19712de0a2ed942b35ca7f9005ae2a0a3a17540f4724975c1e2cff7ae4497,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710769205643898462,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-760389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12c5881c90dc32f884818aa1844fc13f,},Annota
tions:map[string]string{io.kubernetes.container.hash: 13a06cf6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c20a4b37b8ddd5bd0e3233066f107191a05e768d2013944e90941adc3a2fc9b,PodSandboxId:78dd7aeeebbe11f851887d0944d751f89a4fe3b813fa89173cd3b5b5dbda028b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710769205616858531,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-760389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50b52de4312105ea86e125bf42bf7a05
,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c40e4b0fde809b998f429bee8eb273aacae763c87a305f8e8e54d77d45611a31,PodSandboxId:f58821f212ea0c5fff18f1100e49b02f1b4c4e099009c709dd52749dc109c76c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710769205554273432,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-760389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ca459955938361bb7b7557c4ac7dc7a,},Annotations:map
[string]string{io.kubernetes.container.hash: 6cb36b7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82eec7c717ff71ffc3b406f1dd6a3068f8a6e4bc1455efd575cb56b627569aed,PodSandboxId:f0dd82a259e80e8cc4a225c8209fe398ab019909ebbe3c9fe10274abf34d944d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710769205476051183,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-760389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30766ec601e69c24ee68a884fdb41d11,},Annotations:map[string]string{io.
kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:039afcd9c06d2426fc0a6a5aa0f478ea38b73cee18f2938ed963a9156c3f071a,PodSandboxId:955db720cf0573717aa9b2b9ae7d26a5bea75fcee0593653b481ecb867699469,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710769109642475749,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mmxmq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab219bf0-9e1d-4170-ae1d-0c19aee8d50a,},Annotations:map[string]string{io.kubernetes.container.hash: bfebe8d
d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65b9dc6324d7159773436a0616d877e6adac59f647e108f2ba4248c7807af7ef,PodSandboxId:b9a115975fa4679bb1d6d63a2a4df225756ede3aaecb2077c14422c71baa84ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710769110404658084,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-tbmwc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f39aebe-7698-4aeb-9f8e-773dfe8d01ae,},Annotations:map[string]string{io.kubernetes.container.hash: 5ef385b,io.kubernetes.container.ports:
[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02095178fc24b970bb1be66ba74a0f5bb027d2f88fcd2d091dd6628a075850ab,PodSandboxId:3c52e0be9c18f5b0ddca5baa4dc04772a74e999e6737bdf9d452320e4e6e1904,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710769109317594171,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-760389,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 12c5881c90dc32f884818aa1844fc13f,},Annotations:map[string]string{io.kubernetes.container.hash: 13a06cf6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2e15f7167231f05b0facf70970fc16630b20767e97cd2c4e95ce3855eec2904,PodSandboxId:ba951eb50329fb6026c3fbe266aa6eea81e8f8f4a43d2131fdbe539e4a36f832,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710769109331297537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-760389,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 30766ec601e69c24ee68a884fdb41d11,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ceb76089cf49a3751e5ef3fa1430c9c0a22d1e5f51f63b2ff706c8ef7963629,PodSandboxId:5d97cac33c75cb8d164fea91aeb7447c001a7809d0a913c5c65eef555c75ec42,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710769109479194163,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-760389,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 50b52de4312105ea86e125bf42bf7a05,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c4d5fb2691d7b313d5ded78fed5ae997dae1078685f3104974ab9b91a27c351,PodSandboxId:4fe6bc9a0a41a75866c3535a3ab803495eb96474d8060ff8deff2b23f6858294,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710769109174910127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-760389,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 9ca459955938361bb7b7557c4ac7dc7a,},Annotations:map[string]string{io.kubernetes.container.hash: 6cb36b7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4288b49d-8581-47d9-96c4-d05f5b5fcb23 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:40:30 pause-760389 crio[2648]: time="2024-03-18 13:40:30.582689700Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3e90a190-918b-4807-bce7-30f025393806 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:40:30 pause-760389 crio[2648]: time="2024-03-18 13:40:30.582792707Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3e90a190-918b-4807-bce7-30f025393806 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:40:30 pause-760389 crio[2648]: time="2024-03-18 13:40:30.587098134Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=54f5307e-87ca-422d-b989-d62cd8afb664 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:40:30 pause-760389 crio[2648]: time="2024-03-18 13:40:30.587817302Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710769230587782887,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=54f5307e-87ca-422d-b989-d62cd8afb664 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:40:30 pause-760389 crio[2648]: time="2024-03-18 13:40:30.588892779Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=05a144d4-e751-49a1-95d7-cef269cbe062 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:40:30 pause-760389 crio[2648]: time="2024-03-18 13:40:30.588999913Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=05a144d4-e751-49a1-95d7-cef269cbe062 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:40:30 pause-760389 crio[2648]: time="2024-03-18 13:40:30.589342319Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:82023a44680803f047157ce3e4f1b957c1ca0751b9255fda287895acca79da8c,PodSandboxId:d04640f12d10f0544ac4b0942f57fae05c233ec7aa7e3b19ec149a56d670f63b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710769213891228191,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mmxmq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab219bf0-9e1d-4170-ae1d-0c19aee8d50a,},Annotations:map[string]string{io.kubernetes.container.hash: bfebe8dd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dca755dd1fa877e884a91d6c55b87dea3604a7d99e75514e0d321787f8747b91,PodSandboxId:8e0c7401f6c4a06b88b7e26f1af6badc3d286618645ab003170fd985e391b4a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710769205754298318,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-tbmwc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f39aebe-7698-4aeb-9f8e-773dfe8d01ae,},Annotations:map[string]string{io.kubernetes.container.hash: 5ef385b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b538eddcae3c73f8df74cb5b6d71fa942284bf7a1b06e6e1caf227320a5294f,PodSandboxId:c3e19712de0a2ed942b35ca7f9005ae2a0a3a17540f4724975c1e2cff7ae4497,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710769205643898462,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-760389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12c5881c90dc32f884818aa1844fc13f,},Annota
tions:map[string]string{io.kubernetes.container.hash: 13a06cf6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c20a4b37b8ddd5bd0e3233066f107191a05e768d2013944e90941adc3a2fc9b,PodSandboxId:78dd7aeeebbe11f851887d0944d751f89a4fe3b813fa89173cd3b5b5dbda028b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710769205616858531,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-760389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50b52de4312105ea86e125bf42bf7a05
,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c40e4b0fde809b998f429bee8eb273aacae763c87a305f8e8e54d77d45611a31,PodSandboxId:f58821f212ea0c5fff18f1100e49b02f1b4c4e099009c709dd52749dc109c76c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710769205554273432,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-760389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ca459955938361bb7b7557c4ac7dc7a,},Annotations:map
[string]string{io.kubernetes.container.hash: 6cb36b7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82eec7c717ff71ffc3b406f1dd6a3068f8a6e4bc1455efd575cb56b627569aed,PodSandboxId:f0dd82a259e80e8cc4a225c8209fe398ab019909ebbe3c9fe10274abf34d944d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710769205476051183,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-760389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30766ec601e69c24ee68a884fdb41d11,},Annotations:map[string]string{io.
kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:039afcd9c06d2426fc0a6a5aa0f478ea38b73cee18f2938ed963a9156c3f071a,PodSandboxId:955db720cf0573717aa9b2b9ae7d26a5bea75fcee0593653b481ecb867699469,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710769109642475749,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mmxmq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab219bf0-9e1d-4170-ae1d-0c19aee8d50a,},Annotations:map[string]string{io.kubernetes.container.hash: bfebe8d
d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65b9dc6324d7159773436a0616d877e6adac59f647e108f2ba4248c7807af7ef,PodSandboxId:b9a115975fa4679bb1d6d63a2a4df225756ede3aaecb2077c14422c71baa84ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710769110404658084,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-tbmwc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f39aebe-7698-4aeb-9f8e-773dfe8d01ae,},Annotations:map[string]string{io.kubernetes.container.hash: 5ef385b,io.kubernetes.container.ports:
[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02095178fc24b970bb1be66ba74a0f5bb027d2f88fcd2d091dd6628a075850ab,PodSandboxId:3c52e0be9c18f5b0ddca5baa4dc04772a74e999e6737bdf9d452320e4e6e1904,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710769109317594171,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-760389,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 12c5881c90dc32f884818aa1844fc13f,},Annotations:map[string]string{io.kubernetes.container.hash: 13a06cf6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2e15f7167231f05b0facf70970fc16630b20767e97cd2c4e95ce3855eec2904,PodSandboxId:ba951eb50329fb6026c3fbe266aa6eea81e8f8f4a43d2131fdbe539e4a36f832,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710769109331297537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-760389,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 30766ec601e69c24ee68a884fdb41d11,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ceb76089cf49a3751e5ef3fa1430c9c0a22d1e5f51f63b2ff706c8ef7963629,PodSandboxId:5d97cac33c75cb8d164fea91aeb7447c001a7809d0a913c5c65eef555c75ec42,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710769109479194163,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-760389,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 50b52de4312105ea86e125bf42bf7a05,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c4d5fb2691d7b313d5ded78fed5ae997dae1078685f3104974ab9b91a27c351,PodSandboxId:4fe6bc9a0a41a75866c3535a3ab803495eb96474d8060ff8deff2b23f6858294,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710769109174910127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-760389,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 9ca459955938361bb7b7557c4ac7dc7a,},Annotations:map[string]string{io.kubernetes.container.hash: 6cb36b7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=05a144d4-e751-49a1-95d7-cef269cbe062 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:40:30 pause-760389 crio[2648]: time="2024-03-18 13:40:30.602567985Z" level=debug msg="Request: &VersionRequest{Version:0.1.0,}" file="otel-collector/interceptors.go:62" id=789688ba-929a-4a0c-8fac-aa01b382ba41 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:40:30 pause-760389 crio[2648]: time="2024-03-18 13:40:30.602693578Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=789688ba-929a-4a0c-8fac-aa01b382ba41 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:40:30 pause-760389 crio[2648]: time="2024-03-18 13:40:30.663444564Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1819f14f-d67c-4b84-8d03-76c0d4bff682 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:40:30 pause-760389 crio[2648]: time="2024-03-18 13:40:30.663604230Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1819f14f-d67c-4b84-8d03-76c0d4bff682 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:40:30 pause-760389 crio[2648]: time="2024-03-18 13:40:30.665378589Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b1e54f55-98e8-4ab5-9f65-113c14384964 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:40:30 pause-760389 crio[2648]: time="2024-03-18 13:40:30.665978280Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710769230665951724,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b1e54f55-98e8-4ab5-9f65-113c14384964 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:40:30 pause-760389 crio[2648]: time="2024-03-18 13:40:30.666590983Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6580e10b-9c56-4c11-b368-2462ae25b40c name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:40:30 pause-760389 crio[2648]: time="2024-03-18 13:40:30.666685040Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6580e10b-9c56-4c11-b368-2462ae25b40c name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:40:30 pause-760389 crio[2648]: time="2024-03-18 13:40:30.666930990Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:82023a44680803f047157ce3e4f1b957c1ca0751b9255fda287895acca79da8c,PodSandboxId:d04640f12d10f0544ac4b0942f57fae05c233ec7aa7e3b19ec149a56d670f63b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710769213891228191,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mmxmq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab219bf0-9e1d-4170-ae1d-0c19aee8d50a,},Annotations:map[string]string{io.kubernetes.container.hash: bfebe8dd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dca755dd1fa877e884a91d6c55b87dea3604a7d99e75514e0d321787f8747b91,PodSandboxId:8e0c7401f6c4a06b88b7e26f1af6badc3d286618645ab003170fd985e391b4a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710769205754298318,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-tbmwc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f39aebe-7698-4aeb-9f8e-773dfe8d01ae,},Annotations:map[string]string{io.kubernetes.container.hash: 5ef385b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b538eddcae3c73f8df74cb5b6d71fa942284bf7a1b06e6e1caf227320a5294f,PodSandboxId:c3e19712de0a2ed942b35ca7f9005ae2a0a3a17540f4724975c1e2cff7ae4497,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710769205643898462,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-760389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12c5881c90dc32f884818aa1844fc13f,},Annota
tions:map[string]string{io.kubernetes.container.hash: 13a06cf6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c20a4b37b8ddd5bd0e3233066f107191a05e768d2013944e90941adc3a2fc9b,PodSandboxId:78dd7aeeebbe11f851887d0944d751f89a4fe3b813fa89173cd3b5b5dbda028b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710769205616858531,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-760389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50b52de4312105ea86e125bf42bf7a05
,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c40e4b0fde809b998f429bee8eb273aacae763c87a305f8e8e54d77d45611a31,PodSandboxId:f58821f212ea0c5fff18f1100e49b02f1b4c4e099009c709dd52749dc109c76c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710769205554273432,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-760389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ca459955938361bb7b7557c4ac7dc7a,},Annotations:map
[string]string{io.kubernetes.container.hash: 6cb36b7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82eec7c717ff71ffc3b406f1dd6a3068f8a6e4bc1455efd575cb56b627569aed,PodSandboxId:f0dd82a259e80e8cc4a225c8209fe398ab019909ebbe3c9fe10274abf34d944d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710769205476051183,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-760389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30766ec601e69c24ee68a884fdb41d11,},Annotations:map[string]string{io.
kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:039afcd9c06d2426fc0a6a5aa0f478ea38b73cee18f2938ed963a9156c3f071a,PodSandboxId:955db720cf0573717aa9b2b9ae7d26a5bea75fcee0593653b481ecb867699469,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710769109642475749,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mmxmq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab219bf0-9e1d-4170-ae1d-0c19aee8d50a,},Annotations:map[string]string{io.kubernetes.container.hash: bfebe8d
d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65b9dc6324d7159773436a0616d877e6adac59f647e108f2ba4248c7807af7ef,PodSandboxId:b9a115975fa4679bb1d6d63a2a4df225756ede3aaecb2077c14422c71baa84ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710769110404658084,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-tbmwc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f39aebe-7698-4aeb-9f8e-773dfe8d01ae,},Annotations:map[string]string{io.kubernetes.container.hash: 5ef385b,io.kubernetes.container.ports:
[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02095178fc24b970bb1be66ba74a0f5bb027d2f88fcd2d091dd6628a075850ab,PodSandboxId:3c52e0be9c18f5b0ddca5baa4dc04772a74e999e6737bdf9d452320e4e6e1904,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710769109317594171,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-760389,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 12c5881c90dc32f884818aa1844fc13f,},Annotations:map[string]string{io.kubernetes.container.hash: 13a06cf6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2e15f7167231f05b0facf70970fc16630b20767e97cd2c4e95ce3855eec2904,PodSandboxId:ba951eb50329fb6026c3fbe266aa6eea81e8f8f4a43d2131fdbe539e4a36f832,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710769109331297537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-760389,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 30766ec601e69c24ee68a884fdb41d11,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ceb76089cf49a3751e5ef3fa1430c9c0a22d1e5f51f63b2ff706c8ef7963629,PodSandboxId:5d97cac33c75cb8d164fea91aeb7447c001a7809d0a913c5c65eef555c75ec42,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710769109479194163,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-760389,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 50b52de4312105ea86e125bf42bf7a05,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c4d5fb2691d7b313d5ded78fed5ae997dae1078685f3104974ab9b91a27c351,PodSandboxId:4fe6bc9a0a41a75866c3535a3ab803495eb96474d8060ff8deff2b23f6858294,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710769109174910127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-760389,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 9ca459955938361bb7b7557c4ac7dc7a,},Annotations:map[string]string{io.kubernetes.container.hash: 6cb36b7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6580e10b-9c56-4c11-b368-2462ae25b40c name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:40:30 pause-760389 crio[2648]: time="2024-03-18 13:40:30.715483067Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=60f31f9f-04a6-4d26-bdb8-1c3afd2da244 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:40:30 pause-760389 crio[2648]: time="2024-03-18 13:40:30.715646391Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=60f31f9f-04a6-4d26-bdb8-1c3afd2da244 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:40:30 pause-760389 crio[2648]: time="2024-03-18 13:40:30.716867959Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e3d619b3-9bf2-4277-982d-69b5bf239256 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:40:30 pause-760389 crio[2648]: time="2024-03-18 13:40:30.717689553Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710769230717661044,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e3d619b3-9bf2-4277-982d-69b5bf239256 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:40:30 pause-760389 crio[2648]: time="2024-03-18 13:40:30.718269981Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9a70b4e9-ca1a-49de-8adc-76040af1f66e name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:40:30 pause-760389 crio[2648]: time="2024-03-18 13:40:30.718354687Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9a70b4e9-ca1a-49de-8adc-76040af1f66e name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:40:30 pause-760389 crio[2648]: time="2024-03-18 13:40:30.718685684Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:82023a44680803f047157ce3e4f1b957c1ca0751b9255fda287895acca79da8c,PodSandboxId:d04640f12d10f0544ac4b0942f57fae05c233ec7aa7e3b19ec149a56d670f63b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710769213891228191,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mmxmq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab219bf0-9e1d-4170-ae1d-0c19aee8d50a,},Annotations:map[string]string{io.kubernetes.container.hash: bfebe8dd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dca755dd1fa877e884a91d6c55b87dea3604a7d99e75514e0d321787f8747b91,PodSandboxId:8e0c7401f6c4a06b88b7e26f1af6badc3d286618645ab003170fd985e391b4a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710769205754298318,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-tbmwc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f39aebe-7698-4aeb-9f8e-773dfe8d01ae,},Annotations:map[string]string{io.kubernetes.container.hash: 5ef385b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b538eddcae3c73f8df74cb5b6d71fa942284bf7a1b06e6e1caf227320a5294f,PodSandboxId:c3e19712de0a2ed942b35ca7f9005ae2a0a3a17540f4724975c1e2cff7ae4497,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710769205643898462,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-760389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12c5881c90dc32f884818aa1844fc13f,},Annota
tions:map[string]string{io.kubernetes.container.hash: 13a06cf6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c20a4b37b8ddd5bd0e3233066f107191a05e768d2013944e90941adc3a2fc9b,PodSandboxId:78dd7aeeebbe11f851887d0944d751f89a4fe3b813fa89173cd3b5b5dbda028b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710769205616858531,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-760389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50b52de4312105ea86e125bf42bf7a05
,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c40e4b0fde809b998f429bee8eb273aacae763c87a305f8e8e54d77d45611a31,PodSandboxId:f58821f212ea0c5fff18f1100e49b02f1b4c4e099009c709dd52749dc109c76c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710769205554273432,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-760389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ca459955938361bb7b7557c4ac7dc7a,},Annotations:map
[string]string{io.kubernetes.container.hash: 6cb36b7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82eec7c717ff71ffc3b406f1dd6a3068f8a6e4bc1455efd575cb56b627569aed,PodSandboxId:f0dd82a259e80e8cc4a225c8209fe398ab019909ebbe3c9fe10274abf34d944d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710769205476051183,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-760389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30766ec601e69c24ee68a884fdb41d11,},Annotations:map[string]string{io.
kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:039afcd9c06d2426fc0a6a5aa0f478ea38b73cee18f2938ed963a9156c3f071a,PodSandboxId:955db720cf0573717aa9b2b9ae7d26a5bea75fcee0593653b481ecb867699469,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710769109642475749,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mmxmq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab219bf0-9e1d-4170-ae1d-0c19aee8d50a,},Annotations:map[string]string{io.kubernetes.container.hash: bfebe8d
d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65b9dc6324d7159773436a0616d877e6adac59f647e108f2ba4248c7807af7ef,PodSandboxId:b9a115975fa4679bb1d6d63a2a4df225756ede3aaecb2077c14422c71baa84ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710769110404658084,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-tbmwc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f39aebe-7698-4aeb-9f8e-773dfe8d01ae,},Annotations:map[string]string{io.kubernetes.container.hash: 5ef385b,io.kubernetes.container.ports:
[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02095178fc24b970bb1be66ba74a0f5bb027d2f88fcd2d091dd6628a075850ab,PodSandboxId:3c52e0be9c18f5b0ddca5baa4dc04772a74e999e6737bdf9d452320e4e6e1904,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710769109317594171,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-760389,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 12c5881c90dc32f884818aa1844fc13f,},Annotations:map[string]string{io.kubernetes.container.hash: 13a06cf6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2e15f7167231f05b0facf70970fc16630b20767e97cd2c4e95ce3855eec2904,PodSandboxId:ba951eb50329fb6026c3fbe266aa6eea81e8f8f4a43d2131fdbe539e4a36f832,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710769109331297537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-760389,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 30766ec601e69c24ee68a884fdb41d11,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ceb76089cf49a3751e5ef3fa1430c9c0a22d1e5f51f63b2ff706c8ef7963629,PodSandboxId:5d97cac33c75cb8d164fea91aeb7447c001a7809d0a913c5c65eef555c75ec42,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710769109479194163,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-760389,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 50b52de4312105ea86e125bf42bf7a05,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c4d5fb2691d7b313d5ded78fed5ae997dae1078685f3104974ab9b91a27c351,PodSandboxId:4fe6bc9a0a41a75866c3535a3ab803495eb96474d8060ff8deff2b23f6858294,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710769109174910127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-760389,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 9ca459955938361bb7b7557c4ac7dc7a,},Annotations:map[string]string{io.kubernetes.container.hash: 6cb36b7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9a70b4e9-ca1a-49de-8adc-76040af1f66e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	82023a4468080       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   16 seconds ago      Running             kube-proxy                2                   d04640f12d10f       kube-proxy-mmxmq
	dca755dd1fa87       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   25 seconds ago      Running             coredns                   2                   8e0c7401f6c4a       coredns-5dd5756b68-tbmwc
	9b538eddcae3c       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   25 seconds ago      Running             etcd                      2                   c3e19712de0a2       etcd-pause-760389
	5c20a4b37b8dd       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   25 seconds ago      Running             kube-controller-manager   2                   78dd7aeeebbe1       kube-controller-manager-pause-760389
	c40e4b0fde809       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   25 seconds ago      Running             kube-apiserver            2                   f58821f212ea0       kube-apiserver-pause-760389
	82eec7c717ff7       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   25 seconds ago      Running             kube-scheduler            2                   f0dd82a259e80       kube-scheduler-pause-760389
	65b9dc6324d71       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   2 minutes ago       Exited              coredns                   1                   b9a115975fa46       coredns-5dd5756b68-tbmwc
	039afcd9c06d2       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   2 minutes ago       Exited              kube-proxy                1                   955db720cf057       kube-proxy-mmxmq
	6ceb76089cf49       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   2 minutes ago       Exited              kube-controller-manager   1                   5d97cac33c75c       kube-controller-manager-pause-760389
	c2e15f7167231       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   2 minutes ago       Exited              kube-scheduler            1                   ba951eb50329f       kube-scheduler-pause-760389
	02095178fc24b       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   2 minutes ago       Exited              etcd                      1                   3c52e0be9c18f       etcd-pause-760389
	0c4d5fb2691d7       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   2 minutes ago       Exited              kube-apiserver            1                   4fe6bc9a0a41a       kube-apiserver-pause-760389
	
	
	==> coredns [65b9dc6324d7159773436a0616d877e6adac59f647e108f2ba4248c7807af7ef] <==
	
	
	==> coredns [dca755dd1fa877e884a91d6c55b87dea3604a7d99e75514e0d321787f8747b91] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:33356 - 54028 "HINFO IN 730236770335652900.5080579832071075224. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.015469045s
	
	
	==> describe nodes <==
	Name:               pause-760389
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-760389
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	                    minikube.k8s.io/name=pause-760389
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T13_36_52_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 13:36:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-760389
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 13:40:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 13:40:10 +0000   Mon, 18 Mar 2024 13:36:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 13:40:10 +0000   Mon, 18 Mar 2024 13:36:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 13:40:10 +0000   Mon, 18 Mar 2024 13:36:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 13:40:10 +0000   Mon, 18 Mar 2024 13:36:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.203
	  Hostname:    pause-760389
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015708Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015708Ki
	  pods:               110
	System Info:
	  Machine ID:                 55edccbfa758471ba01c4f0747714d5c
	  System UUID:                55edccbf-a758-471b-a01c-4f0747714d5c
	  Boot ID:                    ec7704d9-cea1-486b-a101-21f7a1c34545
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-tbmwc                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     3m25s
	  kube-system                 etcd-pause-760389                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         3m39s
	  kube-system                 kube-apiserver-pause-760389             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m39s
	  kube-system                 kube-controller-manager-pause-760389    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 kube-proxy-mmxmq                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m25s
	  kube-system                 kube-scheduler-pause-760389             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 3m22s              kube-proxy       
	  Normal  Starting                 16s                kube-proxy       
	  Normal  NodeAllocatableEnforced  3m39s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m39s              kubelet          Node pause-760389 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m39s              kubelet          Node pause-760389 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m39s              kubelet          Node pause-760389 status is now: NodeHasSufficientPID
	  Normal  NodeReady                3m39s              kubelet          Node pause-760389 status is now: NodeReady
	  Normal  Starting                 3m39s              kubelet          Starting kubelet.
	  Normal  RegisteredNode           3m27s              node-controller  Node pause-760389 event: Registered Node pause-760389 in Controller
	  Normal  Starting                 22s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node pause-760389 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node pause-760389 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22s (x7 over 22s)  kubelet          Node pause-760389 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9s                 node-controller  Node pause-760389 event: Registered Node pause-760389 in Controller
	
	
	==> dmesg <==
	[  +0.070215] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.188819] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.140356] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.250056] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +5.529106] systemd-fstab-generator[760]: Ignoring "noauto" option for root device
	[  +0.068786] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.295231] systemd-fstab-generator[941]: Ignoring "noauto" option for root device
	[  +0.076491] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.223765] systemd-fstab-generator[1274]: Ignoring "noauto" option for root device
	[  +0.083489] kauditd_printk_skb: 69 callbacks suppressed
	[Mar18 13:37] systemd-fstab-generator[1488]: Ignoring "noauto" option for root device
	[  +0.158355] kauditd_printk_skb: 21 callbacks suppressed
	[ +41.292400] kauditd_printk_skb: 61 callbacks suppressed
	[Mar18 13:38] systemd-fstab-generator[2280]: Ignoring "noauto" option for root device
	[  +0.327516] systemd-fstab-generator[2374]: Ignoring "noauto" option for root device
	[  +0.365809] systemd-fstab-generator[2456]: Ignoring "noauto" option for root device
	[  +0.224152] systemd-fstab-generator[2472]: Ignoring "noauto" option for root device
	[  +0.493346] systemd-fstab-generator[2564]: Ignoring "noauto" option for root device
	[Mar18 13:40] systemd-fstab-generator[2862]: Ignoring "noauto" option for root device
	[  +0.094242] kauditd_printk_skb: 169 callbacks suppressed
	[  +6.301065] systemd-fstab-generator[3408]: Ignoring "noauto" option for root device
	[  +0.086335] kauditd_printk_skb: 71 callbacks suppressed
	[  +5.059438] kauditd_printk_skb: 24 callbacks suppressed
	[  +8.454169] kauditd_printk_skb: 2 callbacks suppressed
	[  +4.291176] systemd-fstab-generator[3717]: Ignoring "noauto" option for root device
	
	
	==> etcd [02095178fc24b970bb1be66ba74a0f5bb027d2f88fcd2d091dd6628a075850ab] <==
	{"level":"warn","ts":"2024-03-18T13:38:30.833557Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-03-18T13:38:30.833642Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.50.203:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.50.203:2380","--initial-cluster=pause-760389=https://192.168.50.203:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.50.203:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.50.203:2380","--name=pause-760389","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trus
ted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2024-03-18T13:38:30.838224Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2024-03-18T13:38:30.838301Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-03-18T13:38:30.838316Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.50.203:2380"]}
	{"level":"info","ts":"2024-03-18T13:38:30.838466Z","caller":"embed/etcd.go:495","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-18T13:38:30.870647Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.203:2379"]}
	{"level":"info","ts":"2024-03-18T13:38:30.872698Z","caller":"embed/etcd.go:309","msg":"starting an etcd server","etcd-version":"3.5.9","git-sha":"bdbbde998","go-version":"go1.19.9","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"pause-760389","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.50.203:2380"],"listen-peer-urls":["https://192.168.50.203:2380"],"advertise-client-urls":["https://192.168.50.203:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.203:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-clus
ter-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2024-03-18T13:38:31.018392Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"93.408141ms"}
	
	
	==> etcd [9b538eddcae3c73f8df74cb5b6d71fa942284bf7a1b06e6e1caf227320a5294f] <==
	{"level":"info","ts":"2024-03-18T13:40:06.38846Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9b890156e43d782c","initial-advertise-peer-urls":["https://192.168.50.203:2380"],"listen-peer-urls":["https://192.168.50.203:2380"],"advertise-client-urls":["https://192.168.50.203:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.203:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-18T13:40:06.390693Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-18T13:40:06.390909Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.203:2380"}
	{"level":"info","ts":"2024-03-18T13:40:06.390942Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.203:2380"}
	{"level":"info","ts":"2024-03-18T13:40:06.391379Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-18T13:40:06.391441Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-18T13:40:06.391452Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-18T13:40:06.392436Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b890156e43d782c switched to configuration voters=(11207490620396238892)"}
	{"level":"info","ts":"2024-03-18T13:40:06.392692Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2f9dfc9eaa0376c8","local-member-id":"9b890156e43d782c","added-peer-id":"9b890156e43d782c","added-peer-peer-urls":["https://192.168.50.203:2380"]}
	{"level":"info","ts":"2024-03-18T13:40:06.393079Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2f9dfc9eaa0376c8","local-member-id":"9b890156e43d782c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T13:40:06.393203Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T13:40:07.472318Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b890156e43d782c is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-18T13:40:07.472362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b890156e43d782c became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-18T13:40:07.472377Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b890156e43d782c received MsgPreVoteResp from 9b890156e43d782c at term 2"}
	{"level":"info","ts":"2024-03-18T13:40:07.472388Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b890156e43d782c became candidate at term 3"}
	{"level":"info","ts":"2024-03-18T13:40:07.472394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b890156e43d782c received MsgVoteResp from 9b890156e43d782c at term 3"}
	{"level":"info","ts":"2024-03-18T13:40:07.472403Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b890156e43d782c became leader at term 3"}
	{"level":"info","ts":"2024-03-18T13:40:07.47241Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9b890156e43d782c elected leader 9b890156e43d782c at term 3"}
	{"level":"info","ts":"2024-03-18T13:40:07.479948Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T13:40:07.481033Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.203:2379"}
	{"level":"info","ts":"2024-03-18T13:40:07.481317Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T13:40:07.484365Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-18T13:40:07.479884Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9b890156e43d782c","local-member-attributes":"{Name:pause-760389 ClientURLs:[https://192.168.50.203:2379]}","request-path":"/0/members/9b890156e43d782c/attributes","cluster-id":"2f9dfc9eaa0376c8","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-18T13:40:07.500601Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-18T13:40:07.500659Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 13:40:31 up 4 min,  0 users,  load average: 0.43, 0.31, 0.14
	Linux pause-760389 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0c4d5fb2691d7b313d5ded78fed5ae997dae1078685f3104974ab9b91a27c351] <==
	I0318 13:38:30.178132       1 options.go:220] external host was not specified, using 192.168.50.203
	I0318 13:38:30.179772       1 server.go:148] Version: v1.28.4
	I0318 13:38:30.179822       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-apiserver [c40e4b0fde809b998f429bee8eb273aacae763c87a305f8e8e54d77d45611a31] <==
	I0318 13:40:09.833884       1 establishing_controller.go:76] Starting EstablishingController
	I0318 13:40:09.833918       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0318 13:40:09.833961       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0318 13:40:09.833978       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0318 13:40:09.984835       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0318 13:40:10.031704       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0318 13:40:10.031763       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0318 13:40:10.038159       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0318 13:40:10.038459       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0318 13:40:10.038580       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0318 13:40:10.038587       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0318 13:40:10.038712       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0318 13:40:10.039315       1 aggregator.go:166] initial CRD sync complete...
	I0318 13:40:10.039366       1 autoregister_controller.go:141] Starting autoregister controller
	I0318 13:40:10.039372       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0318 13:40:10.039378       1 cache.go:39] Caches are synced for autoregister controller
	I0318 13:40:10.039638       1 shared_informer.go:318] Caches are synced for configmaps
	I0318 13:40:10.838615       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0318 13:40:11.553940       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0318 13:40:11.568369       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0318 13:40:11.610218       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0318 13:40:11.643649       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0318 13:40:11.653085       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0318 13:40:22.263845       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0318 13:40:22.265072       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [5c20a4b37b8ddd5bd0e3233066f107191a05e768d2013944e90941adc3a2fc9b] <==
	I0318 13:40:22.246554       1 shared_informer.go:318] Caches are synced for endpoint
	I0318 13:40:22.249782       1 shared_informer.go:318] Caches are synced for job
	I0318 13:40:22.249900       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0318 13:40:22.251914       1 shared_informer.go:318] Caches are synced for GC
	I0318 13:40:22.251967       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0318 13:40:22.254650       1 shared_informer.go:318] Caches are synced for taint
	I0318 13:40:22.254777       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0318 13:40:22.254891       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-760389"
	I0318 13:40:22.254978       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0318 13:40:22.255023       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0318 13:40:22.255064       1 taint_manager.go:210] "Sending events to api server"
	I0318 13:40:22.255650       1 shared_informer.go:318] Caches are synced for deployment
	I0318 13:40:22.258570       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0318 13:40:22.258739       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="83.713µs"
	I0318 13:40:22.258826       1 event.go:307] "Event occurred" object="pause-760389" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-760389 event: Registered Node pause-760389 in Controller"
	I0318 13:40:22.277294       1 shared_informer.go:318] Caches are synced for PVC protection
	I0318 13:40:22.341890       1 shared_informer.go:318] Caches are synced for attach detach
	I0318 13:40:22.364785       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 13:40:22.399950       1 shared_informer.go:318] Caches are synced for disruption
	I0318 13:40:22.405016       1 shared_informer.go:318] Caches are synced for stateful set
	I0318 13:40:22.410417       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0318 13:40:22.426035       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 13:40:22.797420       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 13:40:22.801888       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 13:40:22.801936       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	
	==> kube-controller-manager [6ceb76089cf49a3751e5ef3fa1430c9c0a22d1e5f51f63b2ff706c8ef7963629] <==
	
	
	==> kube-proxy [039afcd9c06d2426fc0a6a5aa0f478ea38b73cee18f2938ed963a9156c3f071a] <==
	command /bin/bash -c "sudo /usr/bin/crictl logs --tail 25 039afcd9c06d2426fc0a6a5aa0f478ea38b73cee18f2938ed963a9156c3f071a" failed with error: /bin/bash -c "sudo /usr/bin/crictl logs --tail 25 039afcd9c06d2426fc0a6a5aa0f478ea38b73cee18f2938ed963a9156c3f071a": Process exited with status 1
	stdout:
	
	stderr:
	E0318 13:40:33.449815    3854 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="039afcd9c06d2426fc0a6a5aa0f478ea38b73cee18f2938ed963a9156c3f071a"
	time="2024-03-18T13:40:33Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	
	
	==> kube-proxy [82023a44680803f047157ce3e4f1b957c1ca0751b9255fda287895acca79da8c] <==
	I0318 13:40:14.036438       1 server_others.go:69] "Using iptables proxy"
	I0318 13:40:14.048480       1 node.go:141] Successfully retrieved node IP: 192.168.50.203
	I0318 13:40:14.088254       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 13:40:14.088272       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 13:40:14.091060       1 server_others.go:152] "Using iptables Proxier"
	I0318 13:40:14.091148       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 13:40:14.091377       1 server.go:846] "Version info" version="v1.28.4"
	I0318 13:40:14.091416       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:40:14.092404       1 config.go:188] "Starting service config controller"
	I0318 13:40:14.092464       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 13:40:14.092570       1 config.go:97] "Starting endpoint slice config controller"
	I0318 13:40:14.092626       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 13:40:14.093120       1 config.go:315] "Starting node config controller"
	I0318 13:40:14.093156       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 13:40:14.193743       1 shared_informer.go:318] Caches are synced for node config
	I0318 13:40:14.193770       1 shared_informer.go:318] Caches are synced for service config
	I0318 13:40:14.193791       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [82eec7c717ff71ffc3b406f1dd6a3068f8a6e4bc1455efd575cb56b627569aed] <==
	I0318 13:40:06.937758       1 serving.go:348] Generated self-signed cert in-memory
	W0318 13:40:09.943793       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0318 13:40:09.943851       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0318 13:40:09.943862       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0318 13:40:09.943868       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0318 13:40:09.990677       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0318 13:40:09.990726       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:40:09.994110       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0318 13:40:09.994322       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0318 13:40:09.994384       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 13:40:09.994411       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 13:40:10.095071       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [c2e15f7167231f05b0facf70970fc16630b20767e97cd2c4e95ce3855eec2904] <==
	
	
	==> kubelet <==
	Mar 18 13:40:10 pause-760389 kubelet[3415]: I0318 13:40:10.058155    3415 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 18 13:40:10 pause-760389 kubelet[3415]: I0318 13:40:10.065808    3415 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Mar 18 13:40:10 pause-760389 kubelet[3415]: I0318 13:40:10.076428    3415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50b52de4312105ea86e125bf42bf7a05-kubeconfig\") pod \"kube-controller-manager-pause-760389\" (UID: \"50b52de4312105ea86e125bf42bf7a05\") " pod="kube-system/kube-controller-manager-pause-760389"
	Mar 18 13:40:10 pause-760389 kubelet[3415]: I0318 13:40:10.076540    3415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/30766ec601e69c24ee68a884fdb41d11-kubeconfig\") pod \"kube-scheduler-pause-760389\" (UID: \"30766ec601e69c24ee68a884fdb41d11\") " pod="kube-system/kube-scheduler-pause-760389"
	Mar 18 13:40:10 pause-760389 kubelet[3415]: I0318 13:40:10.076633    3415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9ca459955938361bb7b7557c4ac7dc7a-ca-certs\") pod \"kube-apiserver-pause-760389\" (UID: \"9ca459955938361bb7b7557c4ac7dc7a\") " pod="kube-system/kube-apiserver-pause-760389"
	Mar 18 13:40:10 pause-760389 kubelet[3415]: I0318 13:40:10.076683    3415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50b52de4312105ea86e125bf42bf7a05-ca-certs\") pod \"kube-controller-manager-pause-760389\" (UID: \"50b52de4312105ea86e125bf42bf7a05\") " pod="kube-system/kube-controller-manager-pause-760389"
	Mar 18 13:40:10 pause-760389 kubelet[3415]: I0318 13:40:10.076708    3415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/50b52de4312105ea86e125bf42bf7a05-flexvolume-dir\") pod \"kube-controller-manager-pause-760389\" (UID: \"50b52de4312105ea86e125bf42bf7a05\") " pod="kube-system/kube-controller-manager-pause-760389"
	Mar 18 13:40:10 pause-760389 kubelet[3415]: I0318 13:40:10.076735    3415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/12c5881c90dc32f884818aa1844fc13f-etcd-data\") pod \"etcd-pause-760389\" (UID: \"12c5881c90dc32f884818aa1844fc13f\") " pod="kube-system/etcd-pause-760389"
	Mar 18 13:40:10 pause-760389 kubelet[3415]: I0318 13:40:10.076751    3415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9ca459955938361bb7b7557c4ac7dc7a-k8s-certs\") pod \"kube-apiserver-pause-760389\" (UID: \"9ca459955938361bb7b7557c4ac7dc7a\") " pod="kube-system/kube-apiserver-pause-760389"
	Mar 18 13:40:10 pause-760389 kubelet[3415]: I0318 13:40:10.076768    3415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50b52de4312105ea86e125bf42bf7a05-k8s-certs\") pod \"kube-controller-manager-pause-760389\" (UID: \"50b52de4312105ea86e125bf42bf7a05\") " pod="kube-system/kube-controller-manager-pause-760389"
	Mar 18 13:40:10 pause-760389 kubelet[3415]: I0318 13:40:10.076821    3415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50b52de4312105ea86e125bf42bf7a05-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-760389\" (UID: \"50b52de4312105ea86e125bf42bf7a05\") " pod="kube-system/kube-controller-manager-pause-760389"
	Mar 18 13:40:10 pause-760389 kubelet[3415]: I0318 13:40:10.076840    3415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab219bf0-9e1d-4170-ae1d-0c19aee8d50a-lib-modules\") pod \"kube-proxy-mmxmq\" (UID: \"ab219bf0-9e1d-4170-ae1d-0c19aee8d50a\") " pod="kube-system/kube-proxy-mmxmq"
	Mar 18 13:40:10 pause-760389 kubelet[3415]: I0318 13:40:10.076859    3415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9ca459955938361bb7b7557c4ac7dc7a-usr-share-ca-certificates\") pod \"kube-apiserver-pause-760389\" (UID: \"9ca459955938361bb7b7557c4ac7dc7a\") " pod="kube-system/kube-apiserver-pause-760389"
	Mar 18 13:40:10 pause-760389 kubelet[3415]: I0318 13:40:10.076882    3415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/12c5881c90dc32f884818aa1844fc13f-etcd-certs\") pod \"etcd-pause-760389\" (UID: \"12c5881c90dc32f884818aa1844fc13f\") " pod="kube-system/etcd-pause-760389"
	Mar 18 13:40:10 pause-760389 kubelet[3415]: I0318 13:40:10.076901    3415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ab219bf0-9e1d-4170-ae1d-0c19aee8d50a-xtables-lock\") pod \"kube-proxy-mmxmq\" (UID: \"ab219bf0-9e1d-4170-ae1d-0c19aee8d50a\") " pod="kube-system/kube-proxy-mmxmq"
	Mar 18 13:40:10 pause-760389 kubelet[3415]: E0318 13:40:10.077027    3415 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: object "kube-system"/"kube-proxy" not registered
	Mar 18 13:40:10 pause-760389 kubelet[3415]: E0318 13:40:10.077224    3415 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab219bf0-9e1d-4170-ae1d-0c19aee8d50a-kube-proxy podName:ab219bf0-9e1d-4170-ae1d-0c19aee8d50a nodeName:}" failed. No retries permitted until 2024-03-18 13:40:10.577094547 +0000 UTC m=+1.762981734 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/ab219bf0-9e1d-4170-ae1d-0c19aee8d50a-kube-proxy") pod "kube-proxy-mmxmq" (UID: "ab219bf0-9e1d-4170-ae1d-0c19aee8d50a") : object "kube-system"/"kube-proxy" not registered
	Mar 18 13:40:10 pause-760389 kubelet[3415]: E0318 13:40:10.580768    3415 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: object "kube-system"/"kube-proxy" not registered
	Mar 18 13:40:10 pause-760389 kubelet[3415]: E0318 13:40:10.581017    3415 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab219bf0-9e1d-4170-ae1d-0c19aee8d50a-kube-proxy podName:ab219bf0-9e1d-4170-ae1d-0c19aee8d50a nodeName:}" failed. No retries permitted until 2024-03-18 13:40:11.580988707 +0000 UTC m=+2.766875881 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/ab219bf0-9e1d-4170-ae1d-0c19aee8d50a-kube-proxy") pod "kube-proxy-mmxmq" (UID: "ab219bf0-9e1d-4170-ae1d-0c19aee8d50a") : object "kube-system"/"kube-proxy" not registered
	Mar 18 13:40:11 pause-760389 kubelet[3415]: E0318 13:40:11.589070    3415 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: object "kube-system"/"kube-proxy" not registered
	Mar 18 13:40:11 pause-760389 kubelet[3415]: E0318 13:40:11.589160    3415 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab219bf0-9e1d-4170-ae1d-0c19aee8d50a-kube-proxy podName:ab219bf0-9e1d-4170-ae1d-0c19aee8d50a nodeName:}" failed. No retries permitted until 2024-03-18 13:40:13.589138014 +0000 UTC m=+4.775025210 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/ab219bf0-9e1d-4170-ae1d-0c19aee8d50a-kube-proxy") pod "kube-proxy-mmxmq" (UID: "ab219bf0-9e1d-4170-ae1d-0c19aee8d50a") : object "kube-system"/"kube-proxy" not registered
	Mar 18 13:40:13 pause-760389 kubelet[3415]: E0318 13:40:13.159003    3415 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f36e8110823805ddb71952396ef56fb131eec48da696e248a3a8899dbc28f18b\": container with ID starting with f36e8110823805ddb71952396ef56fb131eec48da696e248a3a8899dbc28f18b not found: ID does not exist" containerID="f36e8110823805ddb71952396ef56fb131eec48da696e248a3a8899dbc28f18b"
	Mar 18 13:40:13 pause-760389 kubelet[3415]: I0318 13:40:13.159251    3415 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f36e8110823805ddb71952396ef56fb131eec48da696e248a3a8899dbc28f18b"
	Mar 18 13:40:13 pause-760389 kubelet[3415]: I0318 13:40:13.159277    3415 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="955db720cf0573717aa9b2b9ae7d26a5bea75fcee0593653b481ecb867699469"
	Mar 18 13:40:13 pause-760389 kubelet[3415]: I0318 13:40:13.159286    3415 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08d944f9cafd0ab742174c4bad1a2a4dac900f7d7e24e407184aac822e4ecb08"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-760389 -n pause-760389
helpers_test.go:261: (dbg) Run:  kubectl --context pause-760389 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-760389 -n pause-760389
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-760389 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-760389 logs -n 25: (3.505087554s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-990886 sudo cat             | cilium-990886             | jenkins | v1.32.0 | 18 Mar 24 13:35 UTC |                     |
	|         | /etc/containerd/config.toml           |                           |         |         |                     |                     |
	| ssh     | -p cilium-990886 sudo                 | cilium-990886             | jenkins | v1.32.0 | 18 Mar 24 13:35 UTC |                     |
	|         | containerd config dump                |                           |         |         |                     |                     |
	| ssh     | -p cilium-990886 sudo                 | cilium-990886             | jenkins | v1.32.0 | 18 Mar 24 13:35 UTC |                     |
	|         | systemctl status crio --all           |                           |         |         |                     |                     |
	|         | --full --no-pager                     |                           |         |         |                     |                     |
	| ssh     | -p cilium-990886 sudo                 | cilium-990886             | jenkins | v1.32.0 | 18 Mar 24 13:35 UTC |                     |
	|         | systemctl cat crio --no-pager         |                           |         |         |                     |                     |
	| ssh     | -p cilium-990886 sudo find            | cilium-990886             | jenkins | v1.32.0 | 18 Mar 24 13:35 UTC |                     |
	|         | /etc/crio -type f -exec sh -c         |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                  |                           |         |         |                     |                     |
	| ssh     | -p cilium-990886 sudo crio            | cilium-990886             | jenkins | v1.32.0 | 18 Mar 24 13:35 UTC |                     |
	|         | config                                |                           |         |         |                     |                     |
	| delete  | -p cilium-990886                      | cilium-990886             | jenkins | v1.32.0 | 18 Mar 24 13:35 UTC | 18 Mar 24 13:35 UTC |
	| start   | -p cert-expiration-537883             | cert-expiration-537883    | jenkins | v1.32.0 | 18 Mar 24 13:35 UTC | 18 Mar 24 13:37 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-375732           | force-systemd-env-375732  | jenkins | v1.32.0 | 18 Mar 24 13:36 UTC | 18 Mar 24 13:36 UTC |
	| start   | -p force-systemd-flag-042940          | force-systemd-flag-042940 | jenkins | v1.32.0 | 18 Mar 24 13:36 UTC | 18 Mar 24 13:37 UTC |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-599578          | kubernetes-upgrade-599578 | jenkins | v1.32.0 | 18 Mar 24 13:36 UTC | 18 Mar 24 13:36 UTC |
	| start   | -p kubernetes-upgrade-599578          | kubernetes-upgrade-599578 | jenkins | v1.32.0 | 18 Mar 24 13:36 UTC | 18 Mar 24 13:38 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2     |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-042940 ssh cat     | force-systemd-flag-042940 | jenkins | v1.32.0 | 18 Mar 24 13:37 UTC | 18 Mar 24 13:37 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-042940          | force-systemd-flag-042940 | jenkins | v1.32.0 | 18 Mar 24 13:37 UTC | 18 Mar 24 13:37 UTC |
	| start   | -p cert-options-959907                | cert-options-959907       | jenkins | v1.32.0 | 18 Mar 24 13:37 UTC | 18 Mar 24 13:38 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p pause-760389                       | pause-760389              | jenkins | v1.32.0 | 18 Mar 24 13:37 UTC | 18 Mar 24 13:40 UTC |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-599578          | kubernetes-upgrade-599578 | jenkins | v1.32.0 | 18 Mar 24 13:38 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-599578          | kubernetes-upgrade-599578 | jenkins | v1.32.0 | 18 Mar 24 13:38 UTC | 18 Mar 24 13:39 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2     |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-959907 ssh               | cert-options-959907       | jenkins | v1.32.0 | 18 Mar 24 13:38 UTC | 18 Mar 24 13:38 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-959907 -- sudo        | cert-options-959907       | jenkins | v1.32.0 | 18 Mar 24 13:38 UTC | 18 Mar 24 13:38 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-959907                | cert-options-959907       | jenkins | v1.32.0 | 18 Mar 24 13:38 UTC | 18 Mar 24 13:38 UTC |
	| start   | -p old-k8s-version-909137             | old-k8s-version-909137    | jenkins | v1.32.0 | 18 Mar 24 13:38 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --kvm-network=default                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts               |                           |         |         |                     |                     |
	|         | --keep-context=false                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-599578          | kubernetes-upgrade-599578 | jenkins | v1.32.0 | 18 Mar 24 13:39 UTC | 18 Mar 24 13:39 UTC |
	| start   | -p no-preload-537236                  | no-preload-537236         | jenkins | v1.32.0 | 18 Mar 24 13:39 UTC |                     |
	|         | --memory=2200 --alsologtostderr       |                           |         |         |                     |                     |
	|         | --wait=true --preload=false           |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2     |                           |         |         |                     |                     |
	| start   | -p cert-expiration-537883             | cert-expiration-537883    | jenkins | v1.32.0 | 18 Mar 24 13:40 UTC |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 13:40:20
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 13:40:20.541843 1155127 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:40:20.542126 1155127 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:40:20.542132 1155127 out.go:304] Setting ErrFile to fd 2...
	I0318 13:40:20.542135 1155127 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:40:20.542296 1155127 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
	I0318 13:40:20.542836 1155127 out.go:298] Setting JSON to false
	I0318 13:40:20.543999 1155127 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":19367,"bootTime":1710749853,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 13:40:20.544057 1155127 start.go:139] virtualization: kvm guest
	I0318 13:40:20.547211 1155127 out.go:177] * [cert-expiration-537883] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 13:40:20.548931 1155127 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 13:40:20.548947 1155127 notify.go:220] Checking for updates...
	I0318 13:40:20.550497 1155127 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:40:20.551726 1155127 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 13:40:20.553010 1155127 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18429-1106816/.minikube
	I0318 13:40:20.554355 1155127 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 13:40:20.555504 1155127 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:40:20.557159 1155127 config.go:182] Loaded profile config "cert-expiration-537883": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:40:20.557515 1155127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:40:20.557565 1155127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:40:20.572602 1155127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41087
	I0318 13:40:20.573017 1155127 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:40:20.573514 1155127 main.go:141] libmachine: Using API Version  1
	I0318 13:40:20.573531 1155127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:40:20.573900 1155127 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:40:20.574136 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .DriverName
	I0318 13:40:20.574385 1155127 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:40:20.574661 1155127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:40:20.574692 1155127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:40:20.590110 1155127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44151
	I0318 13:40:20.590552 1155127 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:40:20.591043 1155127 main.go:141] libmachine: Using API Version  1
	I0318 13:40:20.591064 1155127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:40:20.591372 1155127 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:40:20.591560 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .DriverName
	I0318 13:40:20.626799 1155127 out.go:177] * Using the kvm2 driver based on existing profile
	I0318 13:40:20.628211 1155127 start.go:297] selected driver: kvm2
	I0318 13:40:20.628230 1155127 start.go:901] validating driver "kvm2" against &{Name:cert-expiration-537883 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:cert-expiration-537883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.50 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:40:20.628409 1155127 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:40:20.629133 1155127 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:40:20.629199 1155127 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18429-1106816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 13:40:20.644035 1155127 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 13:40:20.644434 1155127 cni.go:84] Creating CNI manager for ""
	I0318 13:40:20.644449 1155127 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:40:20.644523 1155127 start.go:340] cluster config:
	{Name:cert-expiration-537883 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:cert-expiration-537883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.50 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:40:20.644621 1155127 iso.go:125] acquiring lock: {Name:mke5f9989ad60de6f54f25c411af7da9f3932a4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:40:20.646451 1155127 out.go:177] * Starting "cert-expiration-537883" primary control-plane node in "cert-expiration-537883" cluster
	I0318 13:40:20.647867 1155127 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 13:40:20.647890 1155127 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0318 13:40:20.647895 1155127 cache.go:56] Caching tarball of preloaded images
	I0318 13:40:20.647981 1155127 preload.go:173] Found /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 13:40:20.647987 1155127 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0318 13:40:20.648075 1155127 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/cert-expiration-537883/config.json ...
	I0318 13:40:20.648247 1155127 start.go:360] acquireMachinesLock for cert-expiration-537883: {Name:mk0b1a2e71faf079d0c16c4e1393bdff17be3dfd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:40:20.648289 1155127 start.go:364] duration metric: took 30.913µs to acquireMachinesLock for "cert-expiration-537883"
	I0318 13:40:20.648304 1155127 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:40:20.648310 1155127 fix.go:54] fixHost starting: 
	I0318 13:40:20.648747 1155127 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:40:20.648778 1155127 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:40:20.662691 1155127 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35443
	I0318 13:40:20.663095 1155127 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:40:20.663582 1155127 main.go:141] libmachine: Using API Version  1
	I0318 13:40:20.663591 1155127 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:40:20.663869 1155127 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:40:20.664055 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .DriverName
	I0318 13:40:20.664184 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetState
	I0318 13:40:20.665820 1155127 fix.go:112] recreateIfNeeded on cert-expiration-537883: state=Running err=<nil>
	W0318 13:40:20.665831 1155127 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:40:20.667598 1155127 out.go:177] * Updating the running kvm2 "cert-expiration-537883" VM ...
	I0318 13:40:21.460120 1153618 pod_ready.go:102] pod "kube-apiserver-pause-760389" in "kube-system" namespace has status "Ready":"False"
	I0318 13:40:23.460867 1153618 pod_ready.go:102] pod "kube-apiserver-pause-760389" in "kube-system" namespace has status "Ready":"False"
	I0318 13:40:20.668937 1155127 machine.go:94] provisionDockerMachine start ...
	I0318 13:40:20.668951 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .DriverName
	I0318 13:40:20.669161 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHHostname
	I0318 13:40:20.671836 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | domain cert-expiration-537883 has defined MAC address 52:54:00:f1:10:e1 in network mk-cert-expiration-537883
	I0318 13:40:20.672209 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:10:e1", ip: ""} in network mk-cert-expiration-537883: {Iface:virbr3 ExpiryTime:2024-03-18 14:36:49 +0000 UTC Type:0 Mac:52:54:00:f1:10:e1 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:cert-expiration-537883 Clientid:01:52:54:00:f1:10:e1}
	I0318 13:40:20.672229 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | domain cert-expiration-537883 has defined IP address 192.168.61.50 and MAC address 52:54:00:f1:10:e1 in network mk-cert-expiration-537883
	I0318 13:40:20.672406 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHPort
	I0318 13:40:20.672576 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHKeyPath
	I0318 13:40:20.672720 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHKeyPath
	I0318 13:40:20.672827 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHUsername
	I0318 13:40:20.672975 1155127 main.go:141] libmachine: Using SSH client type: native
	I0318 13:40:20.673202 1155127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0318 13:40:20.673209 1155127 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 13:40:20.790830 1155127 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-537883
	
	I0318 13:40:20.790851 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetMachineName
	I0318 13:40:20.791120 1155127 buildroot.go:166] provisioning hostname "cert-expiration-537883"
	I0318 13:40:20.791144 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetMachineName
	I0318 13:40:20.791324 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHHostname
	I0318 13:40:20.794221 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | domain cert-expiration-537883 has defined MAC address 52:54:00:f1:10:e1 in network mk-cert-expiration-537883
	I0318 13:40:20.794587 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:10:e1", ip: ""} in network mk-cert-expiration-537883: {Iface:virbr3 ExpiryTime:2024-03-18 14:36:49 +0000 UTC Type:0 Mac:52:54:00:f1:10:e1 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:cert-expiration-537883 Clientid:01:52:54:00:f1:10:e1}
	I0318 13:40:20.794618 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | domain cert-expiration-537883 has defined IP address 192.168.61.50 and MAC address 52:54:00:f1:10:e1 in network mk-cert-expiration-537883
	I0318 13:40:20.794740 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHPort
	I0318 13:40:20.794938 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHKeyPath
	I0318 13:40:20.795095 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHKeyPath
	I0318 13:40:20.795200 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHUsername
	I0318 13:40:20.795317 1155127 main.go:141] libmachine: Using SSH client type: native
	I0318 13:40:20.795531 1155127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0318 13:40:20.795541 1155127 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-537883 && echo "cert-expiration-537883" | sudo tee /etc/hostname
	I0318 13:40:20.924016 1155127 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-537883
	
	I0318 13:40:20.924047 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHHostname
	I0318 13:40:20.926804 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | domain cert-expiration-537883 has defined MAC address 52:54:00:f1:10:e1 in network mk-cert-expiration-537883
	I0318 13:40:20.927165 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:10:e1", ip: ""} in network mk-cert-expiration-537883: {Iface:virbr3 ExpiryTime:2024-03-18 14:36:49 +0000 UTC Type:0 Mac:52:54:00:f1:10:e1 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:cert-expiration-537883 Clientid:01:52:54:00:f1:10:e1}
	I0318 13:40:20.927190 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | domain cert-expiration-537883 has defined IP address 192.168.61.50 and MAC address 52:54:00:f1:10:e1 in network mk-cert-expiration-537883
	I0318 13:40:20.927350 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHPort
	I0318 13:40:20.927538 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHKeyPath
	I0318 13:40:20.927712 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHKeyPath
	I0318 13:40:20.927843 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHUsername
	I0318 13:40:20.927991 1155127 main.go:141] libmachine: Using SSH client type: native
	I0318 13:40:20.928164 1155127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0318 13:40:20.928208 1155127 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-537883' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-537883/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-537883' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 13:40:21.041802 1155127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:40:21.041822 1155127 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18429-1106816/.minikube CaCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18429-1106816/.minikube}
	I0318 13:40:21.041874 1155127 buildroot.go:174] setting up certificates
	I0318 13:40:21.041886 1155127 provision.go:84] configureAuth start
	I0318 13:40:21.041895 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetMachineName
	I0318 13:40:21.042206 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetIP
	I0318 13:40:21.045049 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | domain cert-expiration-537883 has defined MAC address 52:54:00:f1:10:e1 in network mk-cert-expiration-537883
	I0318 13:40:21.045332 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:10:e1", ip: ""} in network mk-cert-expiration-537883: {Iface:virbr3 ExpiryTime:2024-03-18 14:36:49 +0000 UTC Type:0 Mac:52:54:00:f1:10:e1 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:cert-expiration-537883 Clientid:01:52:54:00:f1:10:e1}
	I0318 13:40:21.045351 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | domain cert-expiration-537883 has defined IP address 192.168.61.50 and MAC address 52:54:00:f1:10:e1 in network mk-cert-expiration-537883
	I0318 13:40:21.045478 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHHostname
	I0318 13:40:21.047770 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | domain cert-expiration-537883 has defined MAC address 52:54:00:f1:10:e1 in network mk-cert-expiration-537883
	I0318 13:40:21.048141 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:10:e1", ip: ""} in network mk-cert-expiration-537883: {Iface:virbr3 ExpiryTime:2024-03-18 14:36:49 +0000 UTC Type:0 Mac:52:54:00:f1:10:e1 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:cert-expiration-537883 Clientid:01:52:54:00:f1:10:e1}
	I0318 13:40:21.048158 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | domain cert-expiration-537883 has defined IP address 192.168.61.50 and MAC address 52:54:00:f1:10:e1 in network mk-cert-expiration-537883
	I0318 13:40:21.048252 1155127 provision.go:143] copyHostCerts
	I0318 13:40:21.048313 1155127 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem, removing ...
	I0318 13:40:21.048319 1155127 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 13:40:21.048417 1155127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem (1679 bytes)
	I0318 13:40:21.048501 1155127 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem, removing ...
	I0318 13:40:21.048505 1155127 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 13:40:21.048528 1155127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem (1078 bytes)
	I0318 13:40:21.048584 1155127 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem, removing ...
	I0318 13:40:21.048587 1155127 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 13:40:21.048605 1155127 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem (1123 bytes)
	I0318 13:40:21.048644 1155127 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-537883 san=[127.0.0.1 192.168.61.50 cert-expiration-537883 localhost minikube]
	I0318 13:40:21.259201 1155127 provision.go:177] copyRemoteCerts
	I0318 13:40:21.259253 1155127 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 13:40:21.259275 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHHostname
	I0318 13:40:21.262181 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | domain cert-expiration-537883 has defined MAC address 52:54:00:f1:10:e1 in network mk-cert-expiration-537883
	I0318 13:40:21.262510 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:10:e1", ip: ""} in network mk-cert-expiration-537883: {Iface:virbr3 ExpiryTime:2024-03-18 14:36:49 +0000 UTC Type:0 Mac:52:54:00:f1:10:e1 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:cert-expiration-537883 Clientid:01:52:54:00:f1:10:e1}
	I0318 13:40:21.262530 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | domain cert-expiration-537883 has defined IP address 192.168.61.50 and MAC address 52:54:00:f1:10:e1 in network mk-cert-expiration-537883
	I0318 13:40:21.262689 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHPort
	I0318 13:40:21.262913 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHKeyPath
	I0318 13:40:21.263087 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHUsername
	I0318 13:40:21.263189 1155127 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/cert-expiration-537883/id_rsa Username:docker}
	I0318 13:40:21.353309 1155127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 13:40:21.382293 1155127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0318 13:40:21.410426 1155127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 13:40:21.438863 1155127 provision.go:87] duration metric: took 396.965346ms to configureAuth
	I0318 13:40:21.438888 1155127 buildroot.go:189] setting minikube options for container-runtime
	I0318 13:40:21.439081 1155127 config.go:182] Loaded profile config "cert-expiration-537883": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:40:21.439155 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHHostname
	I0318 13:40:21.442179 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | domain cert-expiration-537883 has defined MAC address 52:54:00:f1:10:e1 in network mk-cert-expiration-537883
	I0318 13:40:21.442555 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:10:e1", ip: ""} in network mk-cert-expiration-537883: {Iface:virbr3 ExpiryTime:2024-03-18 14:36:49 +0000 UTC Type:0 Mac:52:54:00:f1:10:e1 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:cert-expiration-537883 Clientid:01:52:54:00:f1:10:e1}
	I0318 13:40:21.442572 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | domain cert-expiration-537883 has defined IP address 192.168.61.50 and MAC address 52:54:00:f1:10:e1 in network mk-cert-expiration-537883
	I0318 13:40:21.442720 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHPort
	I0318 13:40:21.442950 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHKeyPath
	I0318 13:40:21.443119 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHKeyPath
	I0318 13:40:21.443306 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHUsername
	I0318 13:40:21.443505 1155127 main.go:141] libmachine: Using SSH client type: native
	I0318 13:40:21.443708 1155127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0318 13:40:21.443718 1155127 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 13:40:25.962983 1153618 pod_ready.go:102] pod "kube-apiserver-pause-760389" in "kube-system" namespace has status "Ready":"False"
	I0318 13:40:26.470723 1153618 pod_ready.go:92] pod "kube-apiserver-pause-760389" in "kube-system" namespace has status "Ready":"True"
	I0318 13:40:26.470754 1153618 pod_ready.go:81] duration metric: took 7.018025307s for pod "kube-apiserver-pause-760389" in "kube-system" namespace to be "Ready" ...
	I0318 13:40:26.470768 1153618 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-pause-760389" in "kube-system" namespace to be "Ready" ...
	I0318 13:40:26.478990 1153618 pod_ready.go:92] pod "kube-controller-manager-pause-760389" in "kube-system" namespace has status "Ready":"True"
	I0318 13:40:26.479016 1153618 pod_ready.go:81] duration metric: took 8.238913ms for pod "kube-controller-manager-pause-760389" in "kube-system" namespace to be "Ready" ...
	I0318 13:40:26.479027 1153618 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mmxmq" in "kube-system" namespace to be "Ready" ...
	I0318 13:40:26.489669 1153618 pod_ready.go:92] pod "kube-proxy-mmxmq" in "kube-system" namespace has status "Ready":"True"
	I0318 13:40:26.489694 1153618 pod_ready.go:81] duration metric: took 10.659006ms for pod "kube-proxy-mmxmq" in "kube-system" namespace to be "Ready" ...
	I0318 13:40:26.489702 1153618 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-pause-760389" in "kube-system" namespace to be "Ready" ...
	I0318 13:40:26.509269 1153618 pod_ready.go:92] pod "kube-scheduler-pause-760389" in "kube-system" namespace has status "Ready":"True"
	I0318 13:40:26.509298 1153618 pod_ready.go:81] duration metric: took 19.588627ms for pod "kube-scheduler-pause-760389" in "kube-system" namespace to be "Ready" ...
	I0318 13:40:26.509307 1153618 pod_ready.go:38] duration metric: took 11.075178423s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
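
The pod_ready lines above poll each control-plane pod until its Ready condition turns True. A minimal client-go sketch of that kind of wait, assuming a kubeconfig path and using a pod name taken from the log (this is not minikube's actual helper), might be:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Hypothetical kubeconfig path; the test uses the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll every 500ms for up to 4 minutes, matching the log's wait window.
	err = wait.PollImmediate(500*time.Millisecond, 4*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-apiserver-pause-760389", metav1.GetOptions{})
		if err != nil {
			return false, nil // keep polling on transient errors
		}
		return isPodReady(pod), nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}
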
	I0318 13:40:26.509330 1153618 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 13:40:26.525179 1153618 ops.go:34] apiserver oom_adj: -16
	I0318 13:40:26.525237 1153618 kubeadm.go:591] duration metric: took 23.416603733s to restartPrimaryControlPlane
	I0318 13:40:26.525251 1153618 kubeadm.go:393] duration metric: took 23.53218508s to StartCluster
	I0318 13:40:26.525272 1153618 settings.go:142] acquiring lock: {Name:mk2d6b94ee5fa5f1dbbb15ba1d5560c3c0f78110 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:40:26.525362 1153618 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 13:40:26.526662 1153618 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/kubeconfig: {Name:mk9c139f2702214315ee08dd7c5d02f739047458 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:40:26.526916 1153618 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.203 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 13:40:26.528899 1153618 out.go:177] * Verifying Kubernetes components...
	I0318 13:40:26.526981 1153618 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 13:40:26.527202 1153618 config.go:182] Loaded profile config "pause-760389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:40:26.530420 1153618 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:40:26.531726 1153618 out.go:177] * Enabled addons: 
	I0318 13:40:27.105498 1155127 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 13:40:27.105513 1155127 machine.go:97] duration metric: took 6.436568523s to provisionDockerMachine
	I0318 13:40:27.105524 1155127 start.go:293] postStartSetup for "cert-expiration-537883" (driver="kvm2")
	I0318 13:40:27.105534 1155127 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 13:40:27.105550 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .DriverName
	I0318 13:40:27.106012 1155127 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 13:40:27.106045 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHHostname
	I0318 13:40:27.108744 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | domain cert-expiration-537883 has defined MAC address 52:54:00:f1:10:e1 in network mk-cert-expiration-537883
	I0318 13:40:27.109068 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:10:e1", ip: ""} in network mk-cert-expiration-537883: {Iface:virbr3 ExpiryTime:2024-03-18 14:36:49 +0000 UTC Type:0 Mac:52:54:00:f1:10:e1 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:cert-expiration-537883 Clientid:01:52:54:00:f1:10:e1}
	I0318 13:40:27.109086 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | domain cert-expiration-537883 has defined IP address 192.168.61.50 and MAC address 52:54:00:f1:10:e1 in network mk-cert-expiration-537883
	I0318 13:40:27.109303 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHPort
	I0318 13:40:27.109493 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHKeyPath
	I0318 13:40:27.109672 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHUsername
	I0318 13:40:27.109790 1155127 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/cert-expiration-537883/id_rsa Username:docker}
	I0318 13:40:27.196215 1155127 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 13:40:27.201494 1155127 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 13:40:27.201509 1155127 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/addons for local assets ...
	I0318 13:40:27.201569 1155127 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/files for local assets ...
	I0318 13:40:27.201635 1155127 filesync.go:149] local asset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> 11141362.pem in /etc/ssl/certs
	I0318 13:40:27.201715 1155127 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 13:40:27.213124 1155127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:40:27.240439 1155127 start.go:296] duration metric: took 134.896944ms for postStartSetup
	I0318 13:40:27.240474 1155127 fix.go:56] duration metric: took 6.592164667s for fixHost
	I0318 13:40:27.240528 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHHostname
	I0318 13:40:27.243152 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | domain cert-expiration-537883 has defined MAC address 52:54:00:f1:10:e1 in network mk-cert-expiration-537883
	I0318 13:40:27.243453 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:10:e1", ip: ""} in network mk-cert-expiration-537883: {Iface:virbr3 ExpiryTime:2024-03-18 14:36:49 +0000 UTC Type:0 Mac:52:54:00:f1:10:e1 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:cert-expiration-537883 Clientid:01:52:54:00:f1:10:e1}
	I0318 13:40:27.243480 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | domain cert-expiration-537883 has defined IP address 192.168.61.50 and MAC address 52:54:00:f1:10:e1 in network mk-cert-expiration-537883
	I0318 13:40:27.243648 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHPort
	I0318 13:40:27.243805 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHKeyPath
	I0318 13:40:27.243915 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHKeyPath
	I0318 13:40:27.244007 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHUsername
	I0318 13:40:27.244197 1155127 main.go:141] libmachine: Using SSH client type: native
	I0318 13:40:27.244428 1155127 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.50 22 <nil> <nil>}
	I0318 13:40:27.244433 1155127 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 13:40:27.353442 1155127 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710769227.342092028
	
	I0318 13:40:27.353458 1155127 fix.go:216] guest clock: 1710769227.342092028
	I0318 13:40:27.353468 1155127 fix.go:229] Guest: 2024-03-18 13:40:27.342092028 +0000 UTC Remote: 2024-03-18 13:40:27.240477656 +0000 UTC m=+6.749879830 (delta=101.614372ms)
	I0318 13:40:27.353506 1155127 fix.go:200] guest clock delta is within tolerance: 101.614372ms
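
fix.go compares the guest's `date +%s.%N` output with the host clock and accepts the ~100ms delta seen above. A small sketch of that comparison, with a hypothetical 2s tolerance rather than minikube's real threshold, could be:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the output of `date +%s.%N`
// (e.g. "1710769227.342092028") into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1710769227.342092028")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	// Hypothetical tolerance for illustration.
	if delta > 2*time.Second {
		fmt.Printf("guest clock skew %v exceeds tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	}
}
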
	I0318 13:40:27.353510 1155127 start.go:83] releasing machines lock for "cert-expiration-537883", held for 6.70521629s
	I0318 13:40:27.353529 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .DriverName
	I0318 13:40:27.353765 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetIP
	I0318 13:40:27.356359 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | domain cert-expiration-537883 has defined MAC address 52:54:00:f1:10:e1 in network mk-cert-expiration-537883
	I0318 13:40:27.356696 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:10:e1", ip: ""} in network mk-cert-expiration-537883: {Iface:virbr3 ExpiryTime:2024-03-18 14:36:49 +0000 UTC Type:0 Mac:52:54:00:f1:10:e1 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:cert-expiration-537883 Clientid:01:52:54:00:f1:10:e1}
	I0318 13:40:27.356719 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | domain cert-expiration-537883 has defined IP address 192.168.61.50 and MAC address 52:54:00:f1:10:e1 in network mk-cert-expiration-537883
	I0318 13:40:27.356859 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .DriverName
	I0318 13:40:27.357341 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .DriverName
	I0318 13:40:27.357519 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .DriverName
	I0318 13:40:27.357594 1155127 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 13:40:27.357632 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHHostname
	I0318 13:40:27.357685 1155127 ssh_runner.go:195] Run: cat /version.json
	I0318 13:40:27.357699 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHHostname
	I0318 13:40:27.360259 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | domain cert-expiration-537883 has defined MAC address 52:54:00:f1:10:e1 in network mk-cert-expiration-537883
	I0318 13:40:27.360552 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | domain cert-expiration-537883 has defined MAC address 52:54:00:f1:10:e1 in network mk-cert-expiration-537883
	I0318 13:40:27.360639 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:10:e1", ip: ""} in network mk-cert-expiration-537883: {Iface:virbr3 ExpiryTime:2024-03-18 14:36:49 +0000 UTC Type:0 Mac:52:54:00:f1:10:e1 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:cert-expiration-537883 Clientid:01:52:54:00:f1:10:e1}
	I0318 13:40:27.360675 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | domain cert-expiration-537883 has defined IP address 192.168.61.50 and MAC address 52:54:00:f1:10:e1 in network mk-cert-expiration-537883
	I0318 13:40:27.360820 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHPort
	I0318 13:40:27.360861 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:10:e1", ip: ""} in network mk-cert-expiration-537883: {Iface:virbr3 ExpiryTime:2024-03-18 14:36:49 +0000 UTC Type:0 Mac:52:54:00:f1:10:e1 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:cert-expiration-537883 Clientid:01:52:54:00:f1:10:e1}
	I0318 13:40:27.360886 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | domain cert-expiration-537883 has defined IP address 192.168.61.50 and MAC address 52:54:00:f1:10:e1 in network mk-cert-expiration-537883
	I0318 13:40:27.360964 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHKeyPath
	I0318 13:40:27.361055 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHPort
	I0318 13:40:27.361132 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHUsername
	I0318 13:40:27.361189 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHKeyPath
	I0318 13:40:27.361245 1155127 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/cert-expiration-537883/id_rsa Username:docker}
	I0318 13:40:27.361303 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetSSHUsername
	I0318 13:40:27.361435 1155127 sshutil.go:53] new ssh client: &{IP:192.168.61.50 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/cert-expiration-537883/id_rsa Username:docker}
	I0318 13:40:27.454065 1155127 ssh_runner.go:195] Run: systemctl --version
	I0318 13:40:27.474678 1155127 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 13:40:27.637199 1155127 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 13:40:27.644318 1155127 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 13:40:27.644403 1155127 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 13:40:27.654662 1155127 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0318 13:40:27.654686 1155127 start.go:494] detecting cgroup driver to use...
	I0318 13:40:27.654756 1155127 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 13:40:27.672161 1155127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:40:27.687710 1155127 docker.go:217] disabling cri-docker service (if available) ...
	I0318 13:40:27.687750 1155127 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 13:40:27.703324 1155127 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 13:40:27.717925 1155127 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 13:40:27.857767 1155127 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 13:40:27.998126 1155127 docker.go:233] disabling docker service ...
	I0318 13:40:27.998191 1155127 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 13:40:28.015879 1155127 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 13:40:28.030268 1155127 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 13:40:28.173734 1155127 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 13:40:28.309650 1155127 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 13:40:28.325076 1155127 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:40:28.346848 1155127 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 13:40:28.346903 1155127 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:40:28.359477 1155127 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 13:40:28.359563 1155127 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:40:28.370999 1155127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:40:28.382400 1155127 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:40:28.393822 1155127 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 13:40:28.406145 1155127 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 13:40:28.417510 1155127 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 13:40:28.427395 1155127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:40:28.577169 1155127 ssh_runner.go:195] Run: sudo systemctl restart crio
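
The sed invocations above pin the pause image and the cgroup manager in /etc/crio/crio.conf.d/02-crio.conf before restarting CRI-O. An equivalent edit done locally in Go (a sketch, assuming the drop-in file is readable at the given path) might be:

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Hypothetical local path; on the VM this is /etc/crio/crio.conf.d/02-crio.conf.
	path := "02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	conf := string(data)

	// Equivalent of the two sed invocations in the log: pin the pause image
	// and switch the cgroup manager to cgroupfs.
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("crio drop-in updated; restart crio to apply")
}
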
	I0318 13:40:28.837839 1155127 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 13:40:28.837912 1155127 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 13:40:28.844043 1155127 start.go:562] Will wait 60s for crictl version
	I0318 13:40:28.844099 1155127 ssh_runner.go:195] Run: which crictl
	I0318 13:40:28.848688 1155127 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 13:40:28.897551 1155127 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 13:40:28.897624 1155127 ssh_runner.go:195] Run: crio --version
	I0318 13:40:28.928034 1155127 ssh_runner.go:195] Run: crio --version
	I0318 13:40:28.965524 1155127 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 13:40:26.532932 1153618 addons.go:505] duration metric: took 5.963699ms for enable addons: enabled=[]
	I0318 13:40:26.714142 1153618 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:40:26.732839 1153618 node_ready.go:35] waiting up to 6m0s for node "pause-760389" to be "Ready" ...
	I0318 13:40:26.736263 1153618 node_ready.go:49] node "pause-760389" has status "Ready":"True"
	I0318 13:40:26.736294 1153618 node_ready.go:38] duration metric: took 3.420285ms for node "pause-760389" to be "Ready" ...
	I0318 13:40:26.736307 1153618 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:40:26.742177 1153618 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-tbmwc" in "kube-system" namespace to be "Ready" ...
	I0318 13:40:26.858111 1153618 pod_ready.go:92] pod "coredns-5dd5756b68-tbmwc" in "kube-system" namespace has status "Ready":"True"
	I0318 13:40:26.858135 1153618 pod_ready.go:81] duration metric: took 115.926782ms for pod "coredns-5dd5756b68-tbmwc" in "kube-system" namespace to be "Ready" ...
	I0318 13:40:26.858146 1153618 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-760389" in "kube-system" namespace to be "Ready" ...
	I0318 13:40:27.261785 1153618 pod_ready.go:92] pod "etcd-pause-760389" in "kube-system" namespace has status "Ready":"True"
	I0318 13:40:27.261807 1153618 pod_ready.go:81] duration metric: took 403.655683ms for pod "etcd-pause-760389" in "kube-system" namespace to be "Ready" ...
	I0318 13:40:27.261818 1153618 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-760389" in "kube-system" namespace to be "Ready" ...
	I0318 13:40:27.657515 1153618 pod_ready.go:92] pod "kube-apiserver-pause-760389" in "kube-system" namespace has status "Ready":"True"
	I0318 13:40:27.657542 1153618 pod_ready.go:81] duration metric: took 395.717538ms for pod "kube-apiserver-pause-760389" in "kube-system" namespace to be "Ready" ...
	I0318 13:40:27.657552 1153618 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-760389" in "kube-system" namespace to be "Ready" ...
	I0318 13:40:28.058402 1153618 pod_ready.go:92] pod "kube-controller-manager-pause-760389" in "kube-system" namespace has status "Ready":"True"
	I0318 13:40:28.058429 1153618 pod_ready.go:81] duration metric: took 400.871292ms for pod "kube-controller-manager-pause-760389" in "kube-system" namespace to be "Ready" ...
	I0318 13:40:28.058439 1153618 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mmxmq" in "kube-system" namespace to be "Ready" ...
	I0318 13:40:28.457093 1153618 pod_ready.go:92] pod "kube-proxy-mmxmq" in "kube-system" namespace has status "Ready":"True"
	I0318 13:40:28.457123 1153618 pod_ready.go:81] duration metric: took 398.672285ms for pod "kube-proxy-mmxmq" in "kube-system" namespace to be "Ready" ...
	I0318 13:40:28.457132 1153618 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-760389" in "kube-system" namespace to be "Ready" ...
	I0318 13:40:28.858291 1153618 pod_ready.go:92] pod "kube-scheduler-pause-760389" in "kube-system" namespace has status "Ready":"True"
	I0318 13:40:28.858327 1153618 pod_ready.go:81] duration metric: took 401.187387ms for pod "kube-scheduler-pause-760389" in "kube-system" namespace to be "Ready" ...
	I0318 13:40:28.858341 1153618 pod_ready.go:38] duration metric: took 2.122020305s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:40:28.858361 1153618 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:40:28.858433 1153618 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:40:28.876126 1153618 api_server.go:72] duration metric: took 2.349177499s to wait for apiserver process to appear ...
	I0318 13:40:28.876150 1153618 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:40:28.876168 1153618 api_server.go:253] Checking apiserver healthz at https://192.168.50.203:8443/healthz ...
	I0318 13:40:28.881101 1153618 api_server.go:279] https://192.168.50.203:8443/healthz returned 200:
	ok
	I0318 13:40:28.882317 1153618 api_server.go:141] control plane version: v1.28.4
	I0318 13:40:28.882347 1153618 api_server.go:131] duration metric: took 6.189206ms to wait for apiserver health ...
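
The healthz probe above simply expects HTTP 200 with body "ok" from the apiserver. A bare-bones version of such a probe is sketched below; the real check authenticates with the cluster's client certificate, whereas this sketch skips TLS verification purely for illustration:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustration only: a production check should trust the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.50.203:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body)
}
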
	I0318 13:40:28.882357 1153618 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 13:40:29.062791 1153618 system_pods.go:59] 6 kube-system pods found
	I0318 13:40:29.062830 1153618 system_pods.go:61] "coredns-5dd5756b68-tbmwc" [9f39aebe-7698-4aeb-9f8e-773dfe8d01ae] Running
	I0318 13:40:29.062836 1153618 system_pods.go:61] "etcd-pause-760389" [fb1c2278-e9f1-44c3-85e4-3e8cf62b63f0] Running
	I0318 13:40:29.062840 1153618 system_pods.go:61] "kube-apiserver-pause-760389" [cc7cfded-8931-4dab-a5e3-844cf05c4fb5] Running
	I0318 13:40:29.062845 1153618 system_pods.go:61] "kube-controller-manager-pause-760389" [30773a27-56bf-4d4a-829f-474f0f992d8c] Running
	I0318 13:40:29.062850 1153618 system_pods.go:61] "kube-proxy-mmxmq" [ab219bf0-9e1d-4170-ae1d-0c19aee8d50a] Running
	I0318 13:40:29.062854 1153618 system_pods.go:61] "kube-scheduler-pause-760389" [9fa80081-5981-47b8-9d70-8363fdb2e37c] Running
	I0318 13:40:29.062863 1153618 system_pods.go:74] duration metric: took 180.497081ms to wait for pod list to return data ...
	I0318 13:40:29.062872 1153618 default_sa.go:34] waiting for default service account to be created ...
	I0318 13:40:29.257864 1153618 default_sa.go:45] found service account: "default"
	I0318 13:40:29.257892 1153618 default_sa.go:55] duration metric: took 195.007886ms for default service account to be created ...
	I0318 13:40:29.257902 1153618 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 13:40:29.461491 1153618 system_pods.go:86] 6 kube-system pods found
	I0318 13:40:29.461532 1153618 system_pods.go:89] "coredns-5dd5756b68-tbmwc" [9f39aebe-7698-4aeb-9f8e-773dfe8d01ae] Running
	I0318 13:40:29.461539 1153618 system_pods.go:89] "etcd-pause-760389" [fb1c2278-e9f1-44c3-85e4-3e8cf62b63f0] Running
	I0318 13:40:29.461545 1153618 system_pods.go:89] "kube-apiserver-pause-760389" [cc7cfded-8931-4dab-a5e3-844cf05c4fb5] Running
	I0318 13:40:29.461552 1153618 system_pods.go:89] "kube-controller-manager-pause-760389" [30773a27-56bf-4d4a-829f-474f0f992d8c] Running
	I0318 13:40:29.461558 1153618 system_pods.go:89] "kube-proxy-mmxmq" [ab219bf0-9e1d-4170-ae1d-0c19aee8d50a] Running
	I0318 13:40:29.461564 1153618 system_pods.go:89] "kube-scheduler-pause-760389" [9fa80081-5981-47b8-9d70-8363fdb2e37c] Running
	I0318 13:40:29.461574 1153618 system_pods.go:126] duration metric: took 203.665403ms to wait for k8s-apps to be running ...
	I0318 13:40:29.461583 1153618 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 13:40:29.461637 1153618 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:40:29.484045 1153618 system_svc.go:56] duration metric: took 22.449563ms WaitForService to wait for kubelet
	I0318 13:40:29.484080 1153618 kubeadm.go:576] duration metric: took 2.957135182s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:40:29.484100 1153618 node_conditions.go:102] verifying NodePressure condition ...
	I0318 13:40:29.660902 1153618 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:40:29.660931 1153618 node_conditions.go:123] node cpu capacity is 2
	I0318 13:40:29.660976 1153618 node_conditions.go:105] duration metric: took 176.86973ms to run NodePressure ...
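
The node_conditions lines read the node's CPU and ephemeral-storage capacity and verify there is no resource pressure. A client-go sketch of reading the same fields (the kubeconfig path and node name are assumptions taken from the log) could be:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "pause-760389", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	fmt.Printf("cpu capacity: %s, ephemeral storage: %s\n", cpu.String(), storage.String())

	// MemoryPressure/DiskPressure/PIDPressure should all be False on a healthy node.
	for _, c := range node.Status.Conditions {
		fmt.Printf("%s=%s\n", c.Type, c.Status)
	}
}
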
	I0318 13:40:29.660992 1153618 start.go:240] waiting for startup goroutines ...
	I0318 13:40:29.661004 1153618 start.go:245] waiting for cluster config update ...
	I0318 13:40:29.661015 1153618 start.go:254] writing updated cluster config ...
	I0318 13:40:29.661408 1153618 ssh_runner.go:195] Run: rm -f paused
	I0318 13:40:29.726956 1153618 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 13:40:29.728705 1153618 out.go:177] * Done! kubectl is now configured to use "pause-760389" cluster and "default" namespace by default
	I0318 13:40:28.966780 1155127 main.go:141] libmachine: (cert-expiration-537883) Calling .GetIP
	I0318 13:40:28.969406 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | domain cert-expiration-537883 has defined MAC address 52:54:00:f1:10:e1 in network mk-cert-expiration-537883
	I0318 13:40:28.969802 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:10:e1", ip: ""} in network mk-cert-expiration-537883: {Iface:virbr3 ExpiryTime:2024-03-18 14:36:49 +0000 UTC Type:0 Mac:52:54:00:f1:10:e1 Iaid: IPaddr:192.168.61.50 Prefix:24 Hostname:cert-expiration-537883 Clientid:01:52:54:00:f1:10:e1}
	I0318 13:40:28.969825 1155127 main.go:141] libmachine: (cert-expiration-537883) DBG | domain cert-expiration-537883 has defined IP address 192.168.61.50 and MAC address 52:54:00:f1:10:e1 in network mk-cert-expiration-537883
	I0318 13:40:28.970120 1155127 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0318 13:40:28.975248 1155127 kubeadm.go:877] updating cluster {Name:cert-expiration-537883 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.28.4 ClusterName:cert-expiration-537883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.50 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 13:40:28.975383 1155127 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 13:40:28.975442 1155127 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:40:29.023850 1155127 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 13:40:29.023864 1155127 crio.go:415] Images already preloaded, skipping extraction
	I0318 13:40:29.023913 1155127 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:40:29.068489 1155127 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 13:40:29.068501 1155127 cache_images.go:84] Images are preloaded, skipping loading
	I0318 13:40:29.068507 1155127 kubeadm.go:928] updating node { 192.168.61.50 8443 v1.28.4 crio true true} ...
	I0318 13:40:29.068611 1155127 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=cert-expiration-537883 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.50
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:cert-expiration-537883 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 13:40:29.068670 1155127 ssh_runner.go:195] Run: crio config
	I0318 13:40:29.121966 1155127 cni.go:84] Creating CNI manager for ""
	I0318 13:40:29.121975 1155127 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:40:29.121983 1155127 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 13:40:29.122003 1155127 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.50 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-537883 NodeName:cert-expiration-537883 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.50"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.50 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt Stati
cPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 13:40:29.122126 1155127 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.50
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-537883"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.50
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.50"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
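
The kubeadm config above is rendered from cluster settings such as the control-plane endpoint, Kubernetes version, and pod/service CIDRs. A heavily reduced text/template sketch of generating such a document (not minikube's actual template) might be:

package main

import (
	"os"
	"text/template"
)

// A much-reduced stand-in for the config printed above; the real template
// carries many more fields (API server SANs, etcd args, kubelet config, ...).
const clusterConfig = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: mk
controlPlaneEndpoint: {{.ControlPlaneEndpoint}}:{{.Port}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

type params struct {
	ControlPlaneEndpoint string
	Port                 int
	KubernetesVersion    string
	PodSubnet            string
	ServiceSubnet        string
}

func main() {
	tmpl := template.Must(template.New("kubeadm").Parse(clusterConfig))
	p := params{
		ControlPlaneEndpoint: "control-plane.minikube.internal",
		Port:                 8443,
		KubernetesVersion:    "v1.28.4",
		PodSubnet:            "10.244.0.0/16",
		ServiceSubnet:        "10.96.0.0/12",
	}
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
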
	
	I0318 13:40:29.122184 1155127 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 13:40:29.134856 1155127 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 13:40:29.134933 1155127 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 13:40:29.146661 1155127 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0318 13:40:29.166417 1155127 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 13:40:29.185446 1155127 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2163 bytes)
	I0318 13:40:29.204129 1155127 ssh_runner.go:195] Run: grep 192.168.61.50	control-plane.minikube.internal$ /etc/hosts
	I0318 13:40:29.208801 1155127 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:40:29.440929 1155127 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:40:29.498506 1155127 certs.go:68] Setting up /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/cert-expiration-537883 for IP: 192.168.61.50
	I0318 13:40:29.498519 1155127 certs.go:194] generating shared ca certs ...
	I0318 13:40:29.498545 1155127 certs.go:226] acquiring lock for ca certs: {Name:mka24dbce78b8549c3bcfe7842863cce2dcda207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:40:29.498752 1155127 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key
	I0318 13:40:29.498839 1155127 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key
	I0318 13:40:29.498853 1155127 certs.go:256] generating profile certs ...
	W0318 13:40:29.499040 1155127 out.go:239] ! Certificate client.crt has expired. Generating a new one...
	I0318 13:40:29.499065 1155127 certs.go:624] cert expired /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/cert-expiration-537883/client.crt: expiration: 2024-03-18 13:40:05 +0000 UTC, now: 2024-03-18 13:40:29.499060095 +0000 UTC m=+9.008462272
	I0318 13:40:29.499234 1155127 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/cert-expiration-537883/client.key
	I0318 13:40:29.499261 1155127 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/cert-expiration-537883/client.crt with IP's: []
	I0318 13:40:29.706877 1155127 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/cert-expiration-537883/client.crt ...
	I0318 13:40:29.706894 1155127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/cert-expiration-537883/client.crt: {Name:mk494ba945752043ac40b64c60b7d8269905f7cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:40:29.707042 1155127 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/cert-expiration-537883/client.key ...
	I0318 13:40:29.707050 1155127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/cert-expiration-537883/client.key: {Name:mk5557ca956a5bf3afcc0f9db50158a70babdf8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0318 13:40:29.707207 1155127 out.go:239] ! Certificate apiserver.crt.8759cef6 has expired. Generating a new one...
	I0318 13:40:29.707226 1155127 certs.go:624] cert expired /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/cert-expiration-537883/apiserver.crt.8759cef6: expiration: 2024-03-18 13:40:05 +0000 UTC, now: 2024-03-18 13:40:29.707221437 +0000 UTC m=+9.216623610
	I0318 13:40:29.707298 1155127 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/cert-expiration-537883/apiserver.key.8759cef6
	I0318 13:40:29.707315 1155127 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/cert-expiration-537883/apiserver.crt.8759cef6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.50]
	I0318 13:40:29.893121 1155127 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/cert-expiration-537883/apiserver.crt.8759cef6 ...
	I0318 13:40:29.893135 1155127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/cert-expiration-537883/apiserver.crt.8759cef6: {Name:mk13e1289308e44128cd7ff5ada16b5a8cbe0048 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:40:29.893287 1155127 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/cert-expiration-537883/apiserver.key.8759cef6 ...
	I0318 13:40:29.893299 1155127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/cert-expiration-537883/apiserver.key.8759cef6: {Name:mk838f30e0c932abc2bd50c50e4cea6be95fe6a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:40:29.893381 1155127 certs.go:381] copying /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/cert-expiration-537883/apiserver.crt.8759cef6 -> /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/cert-expiration-537883/apiserver.crt
	I0318 13:40:29.893546 1155127 certs.go:385] copying /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/cert-expiration-537883/apiserver.key.8759cef6 -> /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/cert-expiration-537883/apiserver.key
	W0318 13:40:29.893766 1155127 out.go:239] ! Certificate proxy-client.crt has expired. Generating a new one...
	I0318 13:40:29.893786 1155127 certs.go:624] cert expired /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/cert-expiration-537883/proxy-client.crt: expiration: 2024-03-18 13:40:05 +0000 UTC, now: 2024-03-18 13:40:29.893781158 +0000 UTC m=+9.403183351
	I0318 13:40:29.893881 1155127 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/cert-expiration-537883/proxy-client.key
	I0318 13:40:29.893900 1155127 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/cert-expiration-537883/proxy-client.crt with IP's: []
	I0318 13:40:30.055284 1155127 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/cert-expiration-537883/proxy-client.crt ...
	I0318 13:40:30.055305 1155127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/cert-expiration-537883/proxy-client.crt: {Name:mk5a83e0518f21eaa2b0ac1ca87913ea335a2cd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:40:30.055509 1155127 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/cert-expiration-537883/proxy-client.key ...
	I0318 13:40:30.055523 1155127 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/cert-expiration-537883/proxy-client.key: {Name:mk29b9a4db90ef7ff20b74280b376e19279c240f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
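
certs.go detects the expired client, apiserver, and proxy-client certificates above before regenerating them. A minimal sketch of that expiry check with crypto/x509 (the file path is a placeholder) could be:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// certExpired reports whether the first certificate in a PEM file has passed
// its NotAfter date.
func certExpired(path string, now time.Time) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return now.After(cert.NotAfter), nil
}

func main() {
	// Placeholder path; the log checks the profile's client.crt, apiserver
	// cert, and proxy-client.crt the same way.
	expired, err := certExpired("client.crt", time.Now())
	if err != nil {
		panic(err)
	}
	if expired {
		fmt.Println("certificate has expired; regenerate it")
	}
}
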
	I0318 13:40:30.055826 1155127 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem (1338 bytes)
	W0318 13:40:30.055880 1155127 certs.go:480] ignoring /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136_empty.pem, impossibly tiny 0 bytes
	I0318 13:40:30.055890 1155127 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 13:40:30.055920 1155127 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem (1078 bytes)
	I0318 13:40:30.055947 1155127 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem (1123 bytes)
	I0318 13:40:30.055979 1155127 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem (1679 bytes)
	I0318 13:40:30.056046 1155127 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:40:30.057095 1155127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 13:40:30.181193 1155127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 13:40:30.232154 1155127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 13:40:30.299864 1155127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 13:40:30.384791 1155127 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/cert-expiration-537883/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	
	
	==> CRI-O <==
	Mar 18 13:40:34 pause-760389 crio[2648]: time="2024-03-18 13:40:34.775654687Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710769234775624581,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0b48896f-f2e9-439a-9326-3bc51f6a8284 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:40:34 pause-760389 crio[2648]: time="2024-03-18 13:40:34.776702022Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5666c57f-482c-4123-b205-3280bd074827 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:40:34 pause-760389 crio[2648]: time="2024-03-18 13:40:34.776782713Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5666c57f-482c-4123-b205-3280bd074827 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:40:34 pause-760389 crio[2648]: time="2024-03-18 13:40:34.777125008Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:82023a44680803f047157ce3e4f1b957c1ca0751b9255fda287895acca79da8c,PodSandboxId:d04640f12d10f0544ac4b0942f57fae05c233ec7aa7e3b19ec149a56d670f63b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710769213891228191,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mmxmq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab219bf0-9e1d-4170-ae1d-0c19aee8d50a,},Annotations:map[string]string{io.kubernetes.container.hash: bfebe8dd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dca755dd1fa877e884a91d6c55b87dea3604a7d99e75514e0d321787f8747b91,PodSandboxId:8e0c7401f6c4a06b88b7e26f1af6badc3d286618645ab003170fd985e391b4a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710769205754298318,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-tbmwc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f39aebe-7698-4aeb-9f8e-773dfe8d01ae,},Annotations:map[string]string{io.kubernetes.container.hash: 5ef385b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b538eddcae3c73f8df74cb5b6d71fa942284bf7a1b06e6e1caf227320a5294f,PodSandboxId:c3e19712de0a2ed942b35ca7f9005ae2a0a3a17540f4724975c1e2cff7ae4497,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710769205643898462,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-760389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12c5881c90dc32f884818aa1844fc13f,},Annota
tions:map[string]string{io.kubernetes.container.hash: 13a06cf6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c20a4b37b8ddd5bd0e3233066f107191a05e768d2013944e90941adc3a2fc9b,PodSandboxId:78dd7aeeebbe11f851887d0944d751f89a4fe3b813fa89173cd3b5b5dbda028b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710769205616858531,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-760389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50b52de4312105ea86e125bf42bf7a05
,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c40e4b0fde809b998f429bee8eb273aacae763c87a305f8e8e54d77d45611a31,PodSandboxId:f58821f212ea0c5fff18f1100e49b02f1b4c4e099009c709dd52749dc109c76c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710769205554273432,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-760389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ca459955938361bb7b7557c4ac7dc7a,},Annotations:map
[string]string{io.kubernetes.container.hash: 6cb36b7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82eec7c717ff71ffc3b406f1dd6a3068f8a6e4bc1455efd575cb56b627569aed,PodSandboxId:f0dd82a259e80e8cc4a225c8209fe398ab019909ebbe3c9fe10274abf34d944d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710769205476051183,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-760389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30766ec601e69c24ee68a884fdb41d11,},Annotations:map[string]string{io.
kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:039afcd9c06d2426fc0a6a5aa0f478ea38b73cee18f2938ed963a9156c3f071a,PodSandboxId:955db720cf0573717aa9b2b9ae7d26a5bea75fcee0593653b481ecb867699469,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710769109642475749,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mmxmq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab219bf0-9e1d-4170-ae1d-0c19aee8d50a,},Annotations:map[string]string{io.kubernetes.container.hash: bfebe8d
d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65b9dc6324d7159773436a0616d877e6adac59f647e108f2ba4248c7807af7ef,PodSandboxId:b9a115975fa4679bb1d6d63a2a4df225756ede3aaecb2077c14422c71baa84ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710769110404658084,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-tbmwc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f39aebe-7698-4aeb-9f8e-773dfe8d01ae,},Annotations:map[string]string{io.kubernetes.container.hash: 5ef385b,io.kubernetes.container.ports:
[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02095178fc24b970bb1be66ba74a0f5bb027d2f88fcd2d091dd6628a075850ab,PodSandboxId:3c52e0be9c18f5b0ddca5baa4dc04772a74e999e6737bdf9d452320e4e6e1904,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710769109317594171,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-760389,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 12c5881c90dc32f884818aa1844fc13f,},Annotations:map[string]string{io.kubernetes.container.hash: 13a06cf6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2e15f7167231f05b0facf70970fc16630b20767e97cd2c4e95ce3855eec2904,PodSandboxId:ba951eb50329fb6026c3fbe266aa6eea81e8f8f4a43d2131fdbe539e4a36f832,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710769109331297537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-760389,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 30766ec601e69c24ee68a884fdb41d11,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ceb76089cf49a3751e5ef3fa1430c9c0a22d1e5f51f63b2ff706c8ef7963629,PodSandboxId:5d97cac33c75cb8d164fea91aeb7447c001a7809d0a913c5c65eef555c75ec42,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710769109479194163,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-760389,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 50b52de4312105ea86e125bf42bf7a05,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c4d5fb2691d7b313d5ded78fed5ae997dae1078685f3104974ab9b91a27c351,PodSandboxId:4fe6bc9a0a41a75866c3535a3ab803495eb96474d8060ff8deff2b23f6858294,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710769109174910127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-760389,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 9ca459955938361bb7b7557c4ac7dc7a,},Annotations:map[string]string{io.kubernetes.container.hash: 6cb36b7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5666c57f-482c-4123-b205-3280bd074827 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:40:34 pause-760389 crio[2648]: time="2024-03-18 13:40:34.830000042Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=af4c1639-0ee6-4bc3-b947-d27847484c28 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:40:34 pause-760389 crio[2648]: time="2024-03-18 13:40:34.830133398Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=af4c1639-0ee6-4bc3-b947-d27847484c28 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:40:34 pause-760389 crio[2648]: time="2024-03-18 13:40:34.831204953Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fa3f7127-4d1a-4d4d-a495-9f29f37b398d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:40:34 pause-760389 crio[2648]: time="2024-03-18 13:40:34.831670306Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710769234831648097,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fa3f7127-4d1a-4d4d-a495-9f29f37b398d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:40:34 pause-760389 crio[2648]: time="2024-03-18 13:40:34.832126344Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e16500a7-31fc-49f0-83b8-15a04f1a4ae0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:40:34 pause-760389 crio[2648]: time="2024-03-18 13:40:34.832209457Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e16500a7-31fc-49f0-83b8-15a04f1a4ae0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:40:34 pause-760389 crio[2648]: time="2024-03-18 13:40:34.832737328Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:82023a44680803f047157ce3e4f1b957c1ca0751b9255fda287895acca79da8c,PodSandboxId:d04640f12d10f0544ac4b0942f57fae05c233ec7aa7e3b19ec149a56d670f63b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710769213891228191,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mmxmq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab219bf0-9e1d-4170-ae1d-0c19aee8d50a,},Annotations:map[string]string{io.kubernetes.container.hash: bfebe8dd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dca755dd1fa877e884a91d6c55b87dea3604a7d99e75514e0d321787f8747b91,PodSandboxId:8e0c7401f6c4a06b88b7e26f1af6badc3d286618645ab003170fd985e391b4a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710769205754298318,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-tbmwc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f39aebe-7698-4aeb-9f8e-773dfe8d01ae,},Annotations:map[string]string{io.kubernetes.container.hash: 5ef385b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b538eddcae3c73f8df74cb5b6d71fa942284bf7a1b06e6e1caf227320a5294f,PodSandboxId:c3e19712de0a2ed942b35ca7f9005ae2a0a3a17540f4724975c1e2cff7ae4497,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710769205643898462,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-760389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12c5881c90dc32f884818aa1844fc13f,},Annota
tions:map[string]string{io.kubernetes.container.hash: 13a06cf6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c20a4b37b8ddd5bd0e3233066f107191a05e768d2013944e90941adc3a2fc9b,PodSandboxId:78dd7aeeebbe11f851887d0944d751f89a4fe3b813fa89173cd3b5b5dbda028b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710769205616858531,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-760389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50b52de4312105ea86e125bf42bf7a05
,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c40e4b0fde809b998f429bee8eb273aacae763c87a305f8e8e54d77d45611a31,PodSandboxId:f58821f212ea0c5fff18f1100e49b02f1b4c4e099009c709dd52749dc109c76c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710769205554273432,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-760389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ca459955938361bb7b7557c4ac7dc7a,},Annotations:map
[string]string{io.kubernetes.container.hash: 6cb36b7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82eec7c717ff71ffc3b406f1dd6a3068f8a6e4bc1455efd575cb56b627569aed,PodSandboxId:f0dd82a259e80e8cc4a225c8209fe398ab019909ebbe3c9fe10274abf34d944d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710769205476051183,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-760389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30766ec601e69c24ee68a884fdb41d11,},Annotations:map[string]string{io.
kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:039afcd9c06d2426fc0a6a5aa0f478ea38b73cee18f2938ed963a9156c3f071a,PodSandboxId:955db720cf0573717aa9b2b9ae7d26a5bea75fcee0593653b481ecb867699469,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710769109642475749,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mmxmq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab219bf0-9e1d-4170-ae1d-0c19aee8d50a,},Annotations:map[string]string{io.kubernetes.container.hash: bfebe8d
d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65b9dc6324d7159773436a0616d877e6adac59f647e108f2ba4248c7807af7ef,PodSandboxId:b9a115975fa4679bb1d6d63a2a4df225756ede3aaecb2077c14422c71baa84ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710769110404658084,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-tbmwc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f39aebe-7698-4aeb-9f8e-773dfe8d01ae,},Annotations:map[string]string{io.kubernetes.container.hash: 5ef385b,io.kubernetes.container.ports:
[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02095178fc24b970bb1be66ba74a0f5bb027d2f88fcd2d091dd6628a075850ab,PodSandboxId:3c52e0be9c18f5b0ddca5baa4dc04772a74e999e6737bdf9d452320e4e6e1904,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710769109317594171,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-760389,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 12c5881c90dc32f884818aa1844fc13f,},Annotations:map[string]string{io.kubernetes.container.hash: 13a06cf6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2e15f7167231f05b0facf70970fc16630b20767e97cd2c4e95ce3855eec2904,PodSandboxId:ba951eb50329fb6026c3fbe266aa6eea81e8f8f4a43d2131fdbe539e4a36f832,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710769109331297537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-760389,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 30766ec601e69c24ee68a884fdb41d11,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ceb76089cf49a3751e5ef3fa1430c9c0a22d1e5f51f63b2ff706c8ef7963629,PodSandboxId:5d97cac33c75cb8d164fea91aeb7447c001a7809d0a913c5c65eef555c75ec42,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710769109479194163,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-760389,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 50b52de4312105ea86e125bf42bf7a05,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c4d5fb2691d7b313d5ded78fed5ae997dae1078685f3104974ab9b91a27c351,PodSandboxId:4fe6bc9a0a41a75866c3535a3ab803495eb96474d8060ff8deff2b23f6858294,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710769109174910127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-760389,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 9ca459955938361bb7b7557c4ac7dc7a,},Annotations:map[string]string{io.kubernetes.container.hash: 6cb36b7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e16500a7-31fc-49f0-83b8-15a04f1a4ae0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:40:34 pause-760389 crio[2648]: time="2024-03-18 13:40:34.886662141Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=71344543-6de0-4fbf-9c59-96adf2a3bd52 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:40:34 pause-760389 crio[2648]: time="2024-03-18 13:40:34.886736021Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=71344543-6de0-4fbf-9c59-96adf2a3bd52 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:40:34 pause-760389 crio[2648]: time="2024-03-18 13:40:34.888268168Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=29b548c3-1933-4cb7-87db-67f983e36d17 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:40:34 pause-760389 crio[2648]: time="2024-03-18 13:40:34.888869725Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710769234888827325,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=29b548c3-1933-4cb7-87db-67f983e36d17 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:40:34 pause-760389 crio[2648]: time="2024-03-18 13:40:34.889394942Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=32bd88e2-4fd0-4a34-9864-0577f290720a name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:40:34 pause-760389 crio[2648]: time="2024-03-18 13:40:34.889545415Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=32bd88e2-4fd0-4a34-9864-0577f290720a name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:40:34 pause-760389 crio[2648]: time="2024-03-18 13:40:34.889806173Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:82023a44680803f047157ce3e4f1b957c1ca0751b9255fda287895acca79da8c,PodSandboxId:d04640f12d10f0544ac4b0942f57fae05c233ec7aa7e3b19ec149a56d670f63b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710769213891228191,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mmxmq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab219bf0-9e1d-4170-ae1d-0c19aee8d50a,},Annotations:map[string]string{io.kubernetes.container.hash: bfebe8dd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dca755dd1fa877e884a91d6c55b87dea3604a7d99e75514e0d321787f8747b91,PodSandboxId:8e0c7401f6c4a06b88b7e26f1af6badc3d286618645ab003170fd985e391b4a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710769205754298318,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-tbmwc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f39aebe-7698-4aeb-9f8e-773dfe8d01ae,},Annotations:map[string]string{io.kubernetes.container.hash: 5ef385b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b538eddcae3c73f8df74cb5b6d71fa942284bf7a1b06e6e1caf227320a5294f,PodSandboxId:c3e19712de0a2ed942b35ca7f9005ae2a0a3a17540f4724975c1e2cff7ae4497,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710769205643898462,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-760389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12c5881c90dc32f884818aa1844fc13f,},Annota
tions:map[string]string{io.kubernetes.container.hash: 13a06cf6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c20a4b37b8ddd5bd0e3233066f107191a05e768d2013944e90941adc3a2fc9b,PodSandboxId:78dd7aeeebbe11f851887d0944d751f89a4fe3b813fa89173cd3b5b5dbda028b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710769205616858531,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-760389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50b52de4312105ea86e125bf42bf7a05
,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c40e4b0fde809b998f429bee8eb273aacae763c87a305f8e8e54d77d45611a31,PodSandboxId:f58821f212ea0c5fff18f1100e49b02f1b4c4e099009c709dd52749dc109c76c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710769205554273432,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-760389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ca459955938361bb7b7557c4ac7dc7a,},Annotations:map
[string]string{io.kubernetes.container.hash: 6cb36b7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82eec7c717ff71ffc3b406f1dd6a3068f8a6e4bc1455efd575cb56b627569aed,PodSandboxId:f0dd82a259e80e8cc4a225c8209fe398ab019909ebbe3c9fe10274abf34d944d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710769205476051183,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-760389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30766ec601e69c24ee68a884fdb41d11,},Annotations:map[string]string{io.
kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:039afcd9c06d2426fc0a6a5aa0f478ea38b73cee18f2938ed963a9156c3f071a,PodSandboxId:955db720cf0573717aa9b2b9ae7d26a5bea75fcee0593653b481ecb867699469,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710769109642475749,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mmxmq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab219bf0-9e1d-4170-ae1d-0c19aee8d50a,},Annotations:map[string]string{io.kubernetes.container.hash: bfebe8d
d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65b9dc6324d7159773436a0616d877e6adac59f647e108f2ba4248c7807af7ef,PodSandboxId:b9a115975fa4679bb1d6d63a2a4df225756ede3aaecb2077c14422c71baa84ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710769110404658084,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-tbmwc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f39aebe-7698-4aeb-9f8e-773dfe8d01ae,},Annotations:map[string]string{io.kubernetes.container.hash: 5ef385b,io.kubernetes.container.ports:
[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02095178fc24b970bb1be66ba74a0f5bb027d2f88fcd2d091dd6628a075850ab,PodSandboxId:3c52e0be9c18f5b0ddca5baa4dc04772a74e999e6737bdf9d452320e4e6e1904,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710769109317594171,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-760389,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 12c5881c90dc32f884818aa1844fc13f,},Annotations:map[string]string{io.kubernetes.container.hash: 13a06cf6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2e15f7167231f05b0facf70970fc16630b20767e97cd2c4e95ce3855eec2904,PodSandboxId:ba951eb50329fb6026c3fbe266aa6eea81e8f8f4a43d2131fdbe539e4a36f832,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710769109331297537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-760389,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 30766ec601e69c24ee68a884fdb41d11,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ceb76089cf49a3751e5ef3fa1430c9c0a22d1e5f51f63b2ff706c8ef7963629,PodSandboxId:5d97cac33c75cb8d164fea91aeb7447c001a7809d0a913c5c65eef555c75ec42,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710769109479194163,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-760389,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 50b52de4312105ea86e125bf42bf7a05,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c4d5fb2691d7b313d5ded78fed5ae997dae1078685f3104974ab9b91a27c351,PodSandboxId:4fe6bc9a0a41a75866c3535a3ab803495eb96474d8060ff8deff2b23f6858294,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710769109174910127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-760389,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 9ca459955938361bb7b7557c4ac7dc7a,},Annotations:map[string]string{io.kubernetes.container.hash: 6cb36b7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=32bd88e2-4fd0-4a34-9864-0577f290720a name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:40:34 pause-760389 crio[2648]: time="2024-03-18 13:40:34.937349238Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0ac744a8-8832-45a3-8a35-583f08933eb4 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:40:34 pause-760389 crio[2648]: time="2024-03-18 13:40:34.937445376Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0ac744a8-8832-45a3-8a35-583f08933eb4 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:40:34 pause-760389 crio[2648]: time="2024-03-18 13:40:34.939115427Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e1efd03f-101e-4704-b39f-c1890679623b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:40:34 pause-760389 crio[2648]: time="2024-03-18 13:40:34.939589578Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710769234939477057,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e1efd03f-101e-4704-b39f-c1890679623b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:40:34 pause-760389 crio[2648]: time="2024-03-18 13:40:34.940144401Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fa75189c-74dd-43cf-845c-4cd68676cd8c name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:40:34 pause-760389 crio[2648]: time="2024-03-18 13:40:34.940201779Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fa75189c-74dd-43cf-845c-4cd68676cd8c name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:40:34 pause-760389 crio[2648]: time="2024-03-18 13:40:34.940476380Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:82023a44680803f047157ce3e4f1b957c1ca0751b9255fda287895acca79da8c,PodSandboxId:d04640f12d10f0544ac4b0942f57fae05c233ec7aa7e3b19ec149a56d670f63b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710769213891228191,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mmxmq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab219bf0-9e1d-4170-ae1d-0c19aee8d50a,},Annotations:map[string]string{io.kubernetes.container.hash: bfebe8dd,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dca755dd1fa877e884a91d6c55b87dea3604a7d99e75514e0d321787f8747b91,PodSandboxId:8e0c7401f6c4a06b88b7e26f1af6badc3d286618645ab003170fd985e391b4a3,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710769205754298318,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-tbmwc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f39aebe-7698-4aeb-9f8e-773dfe8d01ae,},Annotations:map[string]string{io.kubernetes.container.hash: 5ef385b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",
\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9b538eddcae3c73f8df74cb5b6d71fa942284bf7a1b06e6e1caf227320a5294f,PodSandboxId:c3e19712de0a2ed942b35ca7f9005ae2a0a3a17540f4724975c1e2cff7ae4497,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710769205643898462,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-760389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 12c5881c90dc32f884818aa1844fc13f,},Annota
tions:map[string]string{io.kubernetes.container.hash: 13a06cf6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5c20a4b37b8ddd5bd0e3233066f107191a05e768d2013944e90941adc3a2fc9b,PodSandboxId:78dd7aeeebbe11f851887d0944d751f89a4fe3b813fa89173cd3b5b5dbda028b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710769205616858531,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-760389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50b52de4312105ea86e125bf42bf7a05
,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c40e4b0fde809b998f429bee8eb273aacae763c87a305f8e8e54d77d45611a31,PodSandboxId:f58821f212ea0c5fff18f1100e49b02f1b4c4e099009c709dd52749dc109c76c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710769205554273432,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-760389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9ca459955938361bb7b7557c4ac7dc7a,},Annotations:map
[string]string{io.kubernetes.container.hash: 6cb36b7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82eec7c717ff71ffc3b406f1dd6a3068f8a6e4bc1455efd575cb56b627569aed,PodSandboxId:f0dd82a259e80e8cc4a225c8209fe398ab019909ebbe3c9fe10274abf34d944d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710769205476051183,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-760389,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30766ec601e69c24ee68a884fdb41d11,},Annotations:map[string]string{io.
kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:039afcd9c06d2426fc0a6a5aa0f478ea38b73cee18f2938ed963a9156c3f071a,PodSandboxId:955db720cf0573717aa9b2b9ae7d26a5bea75fcee0593653b481ecb867699469,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710769109642475749,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mmxmq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ab219bf0-9e1d-4170-ae1d-0c19aee8d50a,},Annotations:map[string]string{io.kubernetes.container.hash: bfebe8d
d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:65b9dc6324d7159773436a0616d877e6adac59f647e108f2ba4248c7807af7ef,PodSandboxId:b9a115975fa4679bb1d6d63a2a4df225756ede3aaecb2077c14422c71baa84ec,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710769110404658084,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-tbmwc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f39aebe-7698-4aeb-9f8e-773dfe8d01ae,},Annotations:map[string]string{io.kubernetes.container.hash: 5ef385b,io.kubernetes.container.ports:
[{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:02095178fc24b970bb1be66ba74a0f5bb027d2f88fcd2d091dd6628a075850ab,PodSandboxId:3c52e0be9c18f5b0ddca5baa4dc04772a74e999e6737bdf9d452320e4e6e1904,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710769109317594171,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-760389,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 12c5881c90dc32f884818aa1844fc13f,},Annotations:map[string]string{io.kubernetes.container.hash: 13a06cf6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2e15f7167231f05b0facf70970fc16630b20767e97cd2c4e95ce3855eec2904,PodSandboxId:ba951eb50329fb6026c3fbe266aa6eea81e8f8f4a43d2131fdbe539e4a36f832,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710769109331297537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-760389,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 30766ec601e69c24ee68a884fdb41d11,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6ceb76089cf49a3751e5ef3fa1430c9c0a22d1e5f51f63b2ff706c8ef7963629,PodSandboxId:5d97cac33c75cb8d164fea91aeb7447c001a7809d0a913c5c65eef555c75ec42,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710769109479194163,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-760389,io.kubernetes.pod.namespace: kube
-system,io.kubernetes.pod.uid: 50b52de4312105ea86e125bf42bf7a05,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0c4d5fb2691d7b313d5ded78fed5ae997dae1078685f3104974ab9b91a27c351,PodSandboxId:4fe6bc9a0a41a75866c3535a3ab803495eb96474d8060ff8deff2b23f6858294,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710769109174910127,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-760389,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: 9ca459955938361bb7b7557c4ac7dc7a,},Annotations:map[string]string{io.kubernetes.container.hash: 6cb36b7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fa75189c-74dd-43cf-845c-4cd68676cd8c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	82023a4468080       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   21 seconds ago      Running             kube-proxy                2                   d04640f12d10f       kube-proxy-mmxmq
	dca755dd1fa87       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   29 seconds ago      Running             coredns                   2                   8e0c7401f6c4a       coredns-5dd5756b68-tbmwc
	9b538eddcae3c       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   29 seconds ago      Running             etcd                      2                   c3e19712de0a2       etcd-pause-760389
	5c20a4b37b8dd       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   29 seconds ago      Running             kube-controller-manager   2                   78dd7aeeebbe1       kube-controller-manager-pause-760389
	c40e4b0fde809       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   29 seconds ago      Running             kube-apiserver            2                   f58821f212ea0       kube-apiserver-pause-760389
	82eec7c717ff7       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   29 seconds ago      Running             kube-scheduler            2                   f0dd82a259e80       kube-scheduler-pause-760389
	65b9dc6324d71       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   2 minutes ago       Exited              coredns                   1                   b9a115975fa46       coredns-5dd5756b68-tbmwc
	039afcd9c06d2       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   2 minutes ago       Exited              kube-proxy                1                   955db720cf057       kube-proxy-mmxmq
	6ceb76089cf49       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   2 minutes ago       Exited              kube-controller-manager   1                   5d97cac33c75c       kube-controller-manager-pause-760389
	c2e15f7167231       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   2 minutes ago       Exited              kube-scheduler            1                   ba951eb50329f       kube-scheduler-pause-760389
	02095178fc24b       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   2 minutes ago       Exited              etcd                      1                   3c52e0be9c18f       etcd-pause-760389
	0c4d5fb2691d7       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   2 minutes ago       Exited              kube-apiserver            1                   4fe6bc9a0a41a       kube-apiserver-pause-760389
	
	
	==> coredns [65b9dc6324d7159773436a0616d877e6adac59f647e108f2ba4248c7807af7ef] <==
	
	
	==> coredns [dca755dd1fa877e884a91d6c55b87dea3604a7d99e75514e0d321787f8747b91] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:33356 - 54028 "HINFO IN 730236770335652900.5080579832071075224. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.015469045s
	
	
	==> describe nodes <==
	Name:               pause-760389
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-760389
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	                    minikube.k8s.io/name=pause-760389
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T13_36_52_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 13:36:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-760389
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 13:40:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 13:40:10 +0000   Mon, 18 Mar 2024 13:36:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 13:40:10 +0000   Mon, 18 Mar 2024 13:36:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 13:40:10 +0000   Mon, 18 Mar 2024 13:36:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 13:40:10 +0000   Mon, 18 Mar 2024 13:36:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.203
	  Hostname:    pause-760389
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015708Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015708Ki
	  pods:               110
	System Info:
	  Machine ID:                 55edccbfa758471ba01c4f0747714d5c
	  System UUID:                55edccbf-a758-471b-a01c-4f0747714d5c
	  Boot ID:                    ec7704d9-cea1-486b-a101-21f7a1c34545
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-tbmwc                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     3m29s
	  kube-system                 etcd-pause-760389                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         3m43s
	  kube-system                 kube-apiserver-pause-760389             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 kube-controller-manager-pause-760389    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 kube-proxy-mmxmq                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 kube-scheduler-pause-760389             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 3m26s              kube-proxy       
	  Normal  Starting                 21s                kube-proxy       
	  Normal  NodeAllocatableEnforced  3m43s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m43s              kubelet          Node pause-760389 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m43s              kubelet          Node pause-760389 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m43s              kubelet          Node pause-760389 status is now: NodeHasSufficientPID
	  Normal  NodeReady                3m43s              kubelet          Node pause-760389 status is now: NodeReady
	  Normal  Starting                 3m43s              kubelet          Starting kubelet.
	  Normal  RegisteredNode           3m31s              node-controller  Node pause-760389 event: Registered Node pause-760389 in Controller
	  Normal  Starting                 26s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  26s (x8 over 26s)  kubelet          Node pause-760389 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26s (x8 over 26s)  kubelet          Node pause-760389 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26s (x7 over 26s)  kubelet          Node pause-760389 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  26s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13s                node-controller  Node pause-760389 event: Registered Node pause-760389 in Controller
	
	
	==> dmesg <==
	[  +0.070215] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.188819] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.140356] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.250056] systemd-fstab-generator[662]: Ignoring "noauto" option for root device
	[  +5.529106] systemd-fstab-generator[760]: Ignoring "noauto" option for root device
	[  +0.068786] kauditd_printk_skb: 130 callbacks suppressed
	[  +5.295231] systemd-fstab-generator[941]: Ignoring "noauto" option for root device
	[  +0.076491] kauditd_printk_skb: 18 callbacks suppressed
	[  +7.223765] systemd-fstab-generator[1274]: Ignoring "noauto" option for root device
	[  +0.083489] kauditd_printk_skb: 69 callbacks suppressed
	[Mar18 13:37] systemd-fstab-generator[1488]: Ignoring "noauto" option for root device
	[  +0.158355] kauditd_printk_skb: 21 callbacks suppressed
	[ +41.292400] kauditd_printk_skb: 61 callbacks suppressed
	[Mar18 13:38] systemd-fstab-generator[2280]: Ignoring "noauto" option for root device
	[  +0.327516] systemd-fstab-generator[2374]: Ignoring "noauto" option for root device
	[  +0.365809] systemd-fstab-generator[2456]: Ignoring "noauto" option for root device
	[  +0.224152] systemd-fstab-generator[2472]: Ignoring "noauto" option for root device
	[  +0.493346] systemd-fstab-generator[2564]: Ignoring "noauto" option for root device
	[Mar18 13:40] systemd-fstab-generator[2862]: Ignoring "noauto" option for root device
	[  +0.094242] kauditd_printk_skb: 169 callbacks suppressed
	[  +6.301065] systemd-fstab-generator[3408]: Ignoring "noauto" option for root device
	[  +0.086335] kauditd_printk_skb: 71 callbacks suppressed
	[  +5.059438] kauditd_printk_skb: 24 callbacks suppressed
	[  +8.454169] kauditd_printk_skb: 2 callbacks suppressed
	[  +4.291176] systemd-fstab-generator[3717]: Ignoring "noauto" option for root device
	
	
	==> etcd [02095178fc24b970bb1be66ba74a0f5bb027d2f88fcd2d091dd6628a075850ab] <==
	{"level":"warn","ts":"2024-03-18T13:38:30.833557Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-03-18T13:38:30.833642Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.50.203:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.50.203:2380","--initial-cluster=pause-760389=https://192.168.50.203:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.50.203:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.50.203:2380","--name=pause-760389","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trus
ted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2024-03-18T13:38:30.838224Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2024-03-18T13:38:30.838301Z","caller":"embed/config.go:673","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-03-18T13:38:30.838316Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.50.203:2380"]}
	{"level":"info","ts":"2024-03-18T13:38:30.838466Z","caller":"embed/etcd.go:495","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-18T13:38:30.870647Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.203:2379"]}
	{"level":"info","ts":"2024-03-18T13:38:30.872698Z","caller":"embed/etcd.go:309","msg":"starting an etcd server","etcd-version":"3.5.9","git-sha":"bdbbde998","go-version":"go1.19.9","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"pause-760389","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.50.203:2380"],"listen-peer-urls":["https://192.168.50.203:2380"],"advertise-client-urls":["https://192.168.50.203:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.203:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-clus
ter-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2024-03-18T13:38:31.018392Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"93.408141ms"}
	
	
	==> etcd [9b538eddcae3c73f8df74cb5b6d71fa942284bf7a1b06e6e1caf227320a5294f] <==
	{"level":"info","ts":"2024-03-18T13:40:06.38846Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9b890156e43d782c","initial-advertise-peer-urls":["https://192.168.50.203:2380"],"listen-peer-urls":["https://192.168.50.203:2380"],"advertise-client-urls":["https://192.168.50.203:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.203:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-18T13:40:06.390693Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-18T13:40:06.390909Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.203:2380"}
	{"level":"info","ts":"2024-03-18T13:40:06.390942Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.203:2380"}
	{"level":"info","ts":"2024-03-18T13:40:06.391379Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-18T13:40:06.391441Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-18T13:40:06.391452Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-18T13:40:06.392436Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b890156e43d782c switched to configuration voters=(11207490620396238892)"}
	{"level":"info","ts":"2024-03-18T13:40:06.392692Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2f9dfc9eaa0376c8","local-member-id":"9b890156e43d782c","added-peer-id":"9b890156e43d782c","added-peer-peer-urls":["https://192.168.50.203:2380"]}
	{"level":"info","ts":"2024-03-18T13:40:06.393079Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2f9dfc9eaa0376c8","local-member-id":"9b890156e43d782c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T13:40:06.393203Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T13:40:07.472318Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b890156e43d782c is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-18T13:40:07.472362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b890156e43d782c became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-18T13:40:07.472377Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b890156e43d782c received MsgPreVoteResp from 9b890156e43d782c at term 2"}
	{"level":"info","ts":"2024-03-18T13:40:07.472388Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b890156e43d782c became candidate at term 3"}
	{"level":"info","ts":"2024-03-18T13:40:07.472394Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b890156e43d782c received MsgVoteResp from 9b890156e43d782c at term 3"}
	{"level":"info","ts":"2024-03-18T13:40:07.472403Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9b890156e43d782c became leader at term 3"}
	{"level":"info","ts":"2024-03-18T13:40:07.47241Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9b890156e43d782c elected leader 9b890156e43d782c at term 3"}
	{"level":"info","ts":"2024-03-18T13:40:07.479948Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T13:40:07.481033Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.203:2379"}
	{"level":"info","ts":"2024-03-18T13:40:07.481317Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T13:40:07.484365Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-18T13:40:07.479884Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9b890156e43d782c","local-member-attributes":"{Name:pause-760389 ClientURLs:[https://192.168.50.203:2379]}","request-path":"/0/members/9b890156e43d782c/attributes","cluster-id":"2f9dfc9eaa0376c8","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-18T13:40:07.500601Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-18T13:40:07.500659Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 13:40:35 up 4 min,  0 users,  load average: 0.39, 0.31, 0.14
	Linux pause-760389 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0c4d5fb2691d7b313d5ded78fed5ae997dae1078685f3104974ab9b91a27c351] <==
	I0318 13:38:30.178132       1 options.go:220] external host was not specified, using 192.168.50.203
	I0318 13:38:30.179772       1 server.go:148] Version: v1.28.4
	I0318 13:38:30.179822       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-apiserver [c40e4b0fde809b998f429bee8eb273aacae763c87a305f8e8e54d77d45611a31] <==
	I0318 13:40:09.833884       1 establishing_controller.go:76] Starting EstablishingController
	I0318 13:40:09.833918       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0318 13:40:09.833961       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0318 13:40:09.833978       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0318 13:40:09.984835       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0318 13:40:10.031704       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0318 13:40:10.031763       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0318 13:40:10.038159       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0318 13:40:10.038459       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0318 13:40:10.038580       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0318 13:40:10.038587       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0318 13:40:10.038712       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0318 13:40:10.039315       1 aggregator.go:166] initial CRD sync complete...
	I0318 13:40:10.039366       1 autoregister_controller.go:141] Starting autoregister controller
	I0318 13:40:10.039372       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0318 13:40:10.039378       1 cache.go:39] Caches are synced for autoregister controller
	I0318 13:40:10.039638       1 shared_informer.go:318] Caches are synced for configmaps
	I0318 13:40:10.838615       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0318 13:40:11.553940       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0318 13:40:11.568369       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0318 13:40:11.610218       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0318 13:40:11.643649       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0318 13:40:11.653085       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0318 13:40:22.263845       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0318 13:40:22.265072       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [5c20a4b37b8ddd5bd0e3233066f107191a05e768d2013944e90941adc3a2fc9b] <==
	I0318 13:40:22.246554       1 shared_informer.go:318] Caches are synced for endpoint
	I0318 13:40:22.249782       1 shared_informer.go:318] Caches are synced for job
	I0318 13:40:22.249900       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0318 13:40:22.251914       1 shared_informer.go:318] Caches are synced for GC
	I0318 13:40:22.251967       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0318 13:40:22.254650       1 shared_informer.go:318] Caches are synced for taint
	I0318 13:40:22.254777       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0318 13:40:22.254891       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-760389"
	I0318 13:40:22.254978       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0318 13:40:22.255023       1 taint_manager.go:205] "Starting NoExecuteTaintManager"
	I0318 13:40:22.255064       1 taint_manager.go:210] "Sending events to api server"
	I0318 13:40:22.255650       1 shared_informer.go:318] Caches are synced for deployment
	I0318 13:40:22.258570       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I0318 13:40:22.258739       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="83.713µs"
	I0318 13:40:22.258826       1 event.go:307] "Event occurred" object="pause-760389" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-760389 event: Registered Node pause-760389 in Controller"
	I0318 13:40:22.277294       1 shared_informer.go:318] Caches are synced for PVC protection
	I0318 13:40:22.341890       1 shared_informer.go:318] Caches are synced for attach detach
	I0318 13:40:22.364785       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 13:40:22.399950       1 shared_informer.go:318] Caches are synced for disruption
	I0318 13:40:22.405016       1 shared_informer.go:318] Caches are synced for stateful set
	I0318 13:40:22.410417       1 shared_informer.go:318] Caches are synced for ReplicationController
	I0318 13:40:22.426035       1 shared_informer.go:318] Caches are synced for resource quota
	I0318 13:40:22.797420       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 13:40:22.801888       1 shared_informer.go:318] Caches are synced for garbage collector
	I0318 13:40:22.801936       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	
	==> kube-controller-manager [6ceb76089cf49a3751e5ef3fa1430c9c0a22d1e5f51f63b2ff706c8ef7963629] <==
	
	
	==> kube-proxy [039afcd9c06d2426fc0a6a5aa0f478ea38b73cee18f2938ed963a9156c3f071a] <==
	command /bin/bash -c "sudo /usr/bin/crictl logs --tail 25 039afcd9c06d2426fc0a6a5aa0f478ea38b73cee18f2938ed963a9156c3f071a" failed with error: /bin/bash -c "sudo /usr/bin/crictl logs --tail 25 039afcd9c06d2426fc0a6a5aa0f478ea38b73cee18f2938ed963a9156c3f071a": Process exited with status 1
	stdout:
	
	stderr:
	E0318 13:40:37.660468    4027 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="039afcd9c06d2426fc0a6a5aa0f478ea38b73cee18f2938ed963a9156c3f071a"
	time="2024-03-18T13:40:37Z" level=fatal msg="rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	
	
	==> kube-proxy [82023a44680803f047157ce3e4f1b957c1ca0751b9255fda287895acca79da8c] <==
	I0318 13:40:14.036438       1 server_others.go:69] "Using iptables proxy"
	I0318 13:40:14.048480       1 node.go:141] Successfully retrieved node IP: 192.168.50.203
	I0318 13:40:14.088254       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 13:40:14.088272       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 13:40:14.091060       1 server_others.go:152] "Using iptables Proxier"
	I0318 13:40:14.091148       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 13:40:14.091377       1 server.go:846] "Version info" version="v1.28.4"
	I0318 13:40:14.091416       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:40:14.092404       1 config.go:188] "Starting service config controller"
	I0318 13:40:14.092464       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 13:40:14.092570       1 config.go:97] "Starting endpoint slice config controller"
	I0318 13:40:14.092626       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 13:40:14.093120       1 config.go:315] "Starting node config controller"
	I0318 13:40:14.093156       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 13:40:14.193743       1 shared_informer.go:318] Caches are synced for node config
	I0318 13:40:14.193770       1 shared_informer.go:318] Caches are synced for service config
	I0318 13:40:14.193791       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [82eec7c717ff71ffc3b406f1dd6a3068f8a6e4bc1455efd575cb56b627569aed] <==
	I0318 13:40:06.937758       1 serving.go:348] Generated self-signed cert in-memory
	W0318 13:40:09.943793       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0318 13:40:09.943851       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0318 13:40:09.943862       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0318 13:40:09.943868       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0318 13:40:09.990677       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0318 13:40:09.990726       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:40:09.994110       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0318 13:40:09.994322       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0318 13:40:09.994384       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0318 13:40:09.994411       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0318 13:40:10.095071       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [c2e15f7167231f05b0facf70970fc16630b20767e97cd2c4e95ce3855eec2904] <==
	
	
	==> kubelet <==
	Mar 18 13:40:10 pause-760389 kubelet[3415]: I0318 13:40:10.058155    3415 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 18 13:40:10 pause-760389 kubelet[3415]: I0318 13:40:10.065808    3415 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Mar 18 13:40:10 pause-760389 kubelet[3415]: I0318 13:40:10.076428    3415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50b52de4312105ea86e125bf42bf7a05-kubeconfig\") pod \"kube-controller-manager-pause-760389\" (UID: \"50b52de4312105ea86e125bf42bf7a05\") " pod="kube-system/kube-controller-manager-pause-760389"
	Mar 18 13:40:10 pause-760389 kubelet[3415]: I0318 13:40:10.076540    3415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/30766ec601e69c24ee68a884fdb41d11-kubeconfig\") pod \"kube-scheduler-pause-760389\" (UID: \"30766ec601e69c24ee68a884fdb41d11\") " pod="kube-system/kube-scheduler-pause-760389"
	Mar 18 13:40:10 pause-760389 kubelet[3415]: I0318 13:40:10.076633    3415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9ca459955938361bb7b7557c4ac7dc7a-ca-certs\") pod \"kube-apiserver-pause-760389\" (UID: \"9ca459955938361bb7b7557c4ac7dc7a\") " pod="kube-system/kube-apiserver-pause-760389"
	Mar 18 13:40:10 pause-760389 kubelet[3415]: I0318 13:40:10.076683    3415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50b52de4312105ea86e125bf42bf7a05-ca-certs\") pod \"kube-controller-manager-pause-760389\" (UID: \"50b52de4312105ea86e125bf42bf7a05\") " pod="kube-system/kube-controller-manager-pause-760389"
	Mar 18 13:40:10 pause-760389 kubelet[3415]: I0318 13:40:10.076708    3415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/50b52de4312105ea86e125bf42bf7a05-flexvolume-dir\") pod \"kube-controller-manager-pause-760389\" (UID: \"50b52de4312105ea86e125bf42bf7a05\") " pod="kube-system/kube-controller-manager-pause-760389"
	Mar 18 13:40:10 pause-760389 kubelet[3415]: I0318 13:40:10.076735    3415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/12c5881c90dc32f884818aa1844fc13f-etcd-data\") pod \"etcd-pause-760389\" (UID: \"12c5881c90dc32f884818aa1844fc13f\") " pod="kube-system/etcd-pause-760389"
	Mar 18 13:40:10 pause-760389 kubelet[3415]: I0318 13:40:10.076751    3415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9ca459955938361bb7b7557c4ac7dc7a-k8s-certs\") pod \"kube-apiserver-pause-760389\" (UID: \"9ca459955938361bb7b7557c4ac7dc7a\") " pod="kube-system/kube-apiserver-pause-760389"
	Mar 18 13:40:10 pause-760389 kubelet[3415]: I0318 13:40:10.076768    3415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50b52de4312105ea86e125bf42bf7a05-k8s-certs\") pod \"kube-controller-manager-pause-760389\" (UID: \"50b52de4312105ea86e125bf42bf7a05\") " pod="kube-system/kube-controller-manager-pause-760389"
	Mar 18 13:40:10 pause-760389 kubelet[3415]: I0318 13:40:10.076821    3415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50b52de4312105ea86e125bf42bf7a05-usr-share-ca-certificates\") pod \"kube-controller-manager-pause-760389\" (UID: \"50b52de4312105ea86e125bf42bf7a05\") " pod="kube-system/kube-controller-manager-pause-760389"
	Mar 18 13:40:10 pause-760389 kubelet[3415]: I0318 13:40:10.076840    3415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab219bf0-9e1d-4170-ae1d-0c19aee8d50a-lib-modules\") pod \"kube-proxy-mmxmq\" (UID: \"ab219bf0-9e1d-4170-ae1d-0c19aee8d50a\") " pod="kube-system/kube-proxy-mmxmq"
	Mar 18 13:40:10 pause-760389 kubelet[3415]: I0318 13:40:10.076859    3415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9ca459955938361bb7b7557c4ac7dc7a-usr-share-ca-certificates\") pod \"kube-apiserver-pause-760389\" (UID: \"9ca459955938361bb7b7557c4ac7dc7a\") " pod="kube-system/kube-apiserver-pause-760389"
	Mar 18 13:40:10 pause-760389 kubelet[3415]: I0318 13:40:10.076882    3415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/12c5881c90dc32f884818aa1844fc13f-etcd-certs\") pod \"etcd-pause-760389\" (UID: \"12c5881c90dc32f884818aa1844fc13f\") " pod="kube-system/etcd-pause-760389"
	Mar 18 13:40:10 pause-760389 kubelet[3415]: I0318 13:40:10.076901    3415 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ab219bf0-9e1d-4170-ae1d-0c19aee8d50a-xtables-lock\") pod \"kube-proxy-mmxmq\" (UID: \"ab219bf0-9e1d-4170-ae1d-0c19aee8d50a\") " pod="kube-system/kube-proxy-mmxmq"
	Mar 18 13:40:10 pause-760389 kubelet[3415]: E0318 13:40:10.077027    3415 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: object "kube-system"/"kube-proxy" not registered
	Mar 18 13:40:10 pause-760389 kubelet[3415]: E0318 13:40:10.077224    3415 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab219bf0-9e1d-4170-ae1d-0c19aee8d50a-kube-proxy podName:ab219bf0-9e1d-4170-ae1d-0c19aee8d50a nodeName:}" failed. No retries permitted until 2024-03-18 13:40:10.577094547 +0000 UTC m=+1.762981734 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/ab219bf0-9e1d-4170-ae1d-0c19aee8d50a-kube-proxy") pod "kube-proxy-mmxmq" (UID: "ab219bf0-9e1d-4170-ae1d-0c19aee8d50a") : object "kube-system"/"kube-proxy" not registered
	Mar 18 13:40:10 pause-760389 kubelet[3415]: E0318 13:40:10.580768    3415 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: object "kube-system"/"kube-proxy" not registered
	Mar 18 13:40:10 pause-760389 kubelet[3415]: E0318 13:40:10.581017    3415 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab219bf0-9e1d-4170-ae1d-0c19aee8d50a-kube-proxy podName:ab219bf0-9e1d-4170-ae1d-0c19aee8d50a nodeName:}" failed. No retries permitted until 2024-03-18 13:40:11.580988707 +0000 UTC m=+2.766875881 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/ab219bf0-9e1d-4170-ae1d-0c19aee8d50a-kube-proxy") pod "kube-proxy-mmxmq" (UID: "ab219bf0-9e1d-4170-ae1d-0c19aee8d50a") : object "kube-system"/"kube-proxy" not registered
	Mar 18 13:40:11 pause-760389 kubelet[3415]: E0318 13:40:11.589070    3415 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: object "kube-system"/"kube-proxy" not registered
	Mar 18 13:40:11 pause-760389 kubelet[3415]: E0318 13:40:11.589160    3415 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab219bf0-9e1d-4170-ae1d-0c19aee8d50a-kube-proxy podName:ab219bf0-9e1d-4170-ae1d-0c19aee8d50a nodeName:}" failed. No retries permitted until 2024-03-18 13:40:13.589138014 +0000 UTC m=+4.775025210 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/ab219bf0-9e1d-4170-ae1d-0c19aee8d50a-kube-proxy") pod "kube-proxy-mmxmq" (UID: "ab219bf0-9e1d-4170-ae1d-0c19aee8d50a") : object "kube-system"/"kube-proxy" not registered
	Mar 18 13:40:13 pause-760389 kubelet[3415]: E0318 13:40:13.159003    3415 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f36e8110823805ddb71952396ef56fb131eec48da696e248a3a8899dbc28f18b\": container with ID starting with f36e8110823805ddb71952396ef56fb131eec48da696e248a3a8899dbc28f18b not found: ID does not exist" containerID="f36e8110823805ddb71952396ef56fb131eec48da696e248a3a8899dbc28f18b"
	Mar 18 13:40:13 pause-760389 kubelet[3415]: I0318 13:40:13.159251    3415 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f36e8110823805ddb71952396ef56fb131eec48da696e248a3a8899dbc28f18b"
	Mar 18 13:40:13 pause-760389 kubelet[3415]: I0318 13:40:13.159277    3415 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="955db720cf0573717aa9b2b9ae7d26a5bea75fcee0593653b481ecb867699469"
	Mar 18 13:40:13 pause-760389 kubelet[3415]: I0318 13:40:13.159286    3415 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08d944f9cafd0ab742174c4bad1a2a4dac900f7d7e24e407184aac822e4ecb08"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-760389 -n pause-760389
helpers_test.go:261: (dbg) Run:  kubectl --context pause-760389 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (168.65s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (272.33s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-909137 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-909137 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m32.023280005s)

                                                
                                                
-- stdout --
	* [old-k8s-version-909137] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18429
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18429-1106816/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18429-1106816/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-909137" primary control-plane node in "old-k8s-version-909137" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:38:44.059533 1154224 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:38:44.059658 1154224 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:38:44.059669 1154224 out.go:304] Setting ErrFile to fd 2...
	I0318 13:38:44.059674 1154224 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:38:44.059849 1154224 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
	I0318 13:38:44.060526 1154224 out.go:298] Setting JSON to false
	I0318 13:38:44.061596 1154224 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":19271,"bootTime":1710749853,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 13:38:44.061665 1154224 start.go:139] virtualization: kvm guest
	I0318 13:38:44.063825 1154224 out.go:177] * [old-k8s-version-909137] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 13:38:44.065303 1154224 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 13:38:44.066580 1154224 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:38:44.065340 1154224 notify.go:220] Checking for updates...
	I0318 13:38:44.069130 1154224 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 13:38:44.070522 1154224 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18429-1106816/.minikube
	I0318 13:38:44.071641 1154224 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 13:38:44.072776 1154224 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:38:44.074305 1154224 config.go:182] Loaded profile config "cert-expiration-537883": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:38:44.074430 1154224 config.go:182] Loaded profile config "kubernetes-upgrade-599578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 13:38:44.074600 1154224 config.go:182] Loaded profile config "pause-760389": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:38:44.074726 1154224 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:38:44.116708 1154224 out.go:177] * Using the kvm2 driver based on user configuration
	I0318 13:38:44.118080 1154224 start.go:297] selected driver: kvm2
	I0318 13:38:44.118099 1154224 start.go:901] validating driver "kvm2" against <nil>
	I0318 13:38:44.118110 1154224 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:38:44.118820 1154224 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:38:44.118906 1154224 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18429-1106816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 13:38:44.135243 1154224 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 13:38:44.135291 1154224 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 13:38:44.135512 1154224 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:38:44.135587 1154224 cni.go:84] Creating CNI manager for ""
	I0318 13:38:44.135610 1154224 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:38:44.135623 1154224 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 13:38:44.135710 1154224 start.go:340] cluster config:
	{Name:old-k8s-version-909137 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-909137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:38:44.135855 1154224 iso.go:125] acquiring lock: {Name:mke5f9989ad60de6f54f25c411af7da9f3932a4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:38:44.138585 1154224 out.go:177] * Starting "old-k8s-version-909137" primary control-plane node in "old-k8s-version-909137" cluster
	I0318 13:38:44.139921 1154224 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0318 13:38:44.139977 1154224 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0318 13:38:44.139989 1154224 cache.go:56] Caching tarball of preloaded images
	I0318 13:38:44.140063 1154224 preload.go:173] Found /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 13:38:44.140076 1154224 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0318 13:38:44.140184 1154224 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/config.json ...
	I0318 13:38:44.140208 1154224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/config.json: {Name:mk778ed3e00301bfc3f00d260272d8c81e783af5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:38:44.140400 1154224 start.go:360] acquireMachinesLock for old-k8s-version-909137: {Name:mk0b1a2e71faf079d0c16c4e1393bdff17be3dfd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:38:44.140445 1154224 start.go:364] duration metric: took 26.354µs to acquireMachinesLock for "old-k8s-version-909137"
	I0318 13:38:44.140469 1154224 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-909137 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-909137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 13:38:44.140546 1154224 start.go:125] createHost starting for "" (driver="kvm2")
	I0318 13:38:44.142419 1154224 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 13:38:44.142559 1154224 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:38:44.142601 1154224 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:38:44.157225 1154224 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45127
	I0318 13:38:44.157683 1154224 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:38:44.158272 1154224 main.go:141] libmachine: Using API Version  1
	I0318 13:38:44.158320 1154224 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:38:44.158637 1154224 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:38:44.158891 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetMachineName
	I0318 13:38:44.159059 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:38:44.159249 1154224 start.go:159] libmachine.API.Create for "old-k8s-version-909137" (driver="kvm2")
	I0318 13:38:44.159280 1154224 client.go:168] LocalClient.Create starting
	I0318 13:38:44.159318 1154224 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem
	I0318 13:38:44.159370 1154224 main.go:141] libmachine: Decoding PEM data...
	I0318 13:38:44.159391 1154224 main.go:141] libmachine: Parsing certificate...
	I0318 13:38:44.159468 1154224 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem
	I0318 13:38:44.159505 1154224 main.go:141] libmachine: Decoding PEM data...
	I0318 13:38:44.159522 1154224 main.go:141] libmachine: Parsing certificate...
	I0318 13:38:44.159557 1154224 main.go:141] libmachine: Running pre-create checks...
	I0318 13:38:44.159568 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .PreCreateCheck
	I0318 13:38:44.159992 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetConfigRaw
	I0318 13:38:44.160544 1154224 main.go:141] libmachine: Creating machine...
	I0318 13:38:44.160569 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .Create
	I0318 13:38:44.160726 1154224 main.go:141] libmachine: (old-k8s-version-909137) Creating KVM machine...
	I0318 13:38:44.161896 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | found existing default KVM network
	I0318 13:38:44.163111 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:38:44.162966 1154246 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:e0:7b:bf} reservation:<nil>}
	I0318 13:38:44.164013 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:38:44.163917 1154246 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:43:1f:81} reservation:<nil>}
	I0318 13:38:44.165055 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:38:44.164953 1154246 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:6d:3c:f3} reservation:<nil>}
	I0318 13:38:44.166149 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:38:44.166056 1154246 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002f7170}
	I0318 13:38:44.166181 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | created network xml: 
	I0318 13:38:44.166197 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | <network>
	I0318 13:38:44.166211 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG |   <name>mk-old-k8s-version-909137</name>
	I0318 13:38:44.166219 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG |   <dns enable='no'/>
	I0318 13:38:44.166225 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG |   
	I0318 13:38:44.166237 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0318 13:38:44.166250 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG |     <dhcp>
	I0318 13:38:44.166263 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0318 13:38:44.166274 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG |     </dhcp>
	I0318 13:38:44.166288 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG |   </ip>
	I0318 13:38:44.166299 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG |   
	I0318 13:38:44.166312 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | </network>
	I0318 13:38:44.166318 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | 
	I0318 13:38:44.171680 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | trying to create private KVM network mk-old-k8s-version-909137 192.168.72.0/24...
	I0318 13:38:44.240741 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | private KVM network mk-old-k8s-version-909137 192.168.72.0/24 created
	I0318 13:38:44.240779 1154224 main.go:141] libmachine: (old-k8s-version-909137) Setting up store path in /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137 ...
	I0318 13:38:44.240803 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:38:44.240740 1154246 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18429-1106816/.minikube
	I0318 13:38:44.240823 1154224 main.go:141] libmachine: (old-k8s-version-909137) Building disk image from file:///home/jenkins/minikube-integration/18429-1106816/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso
	I0318 13:38:44.240902 1154224 main.go:141] libmachine: (old-k8s-version-909137) Downloading /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18429-1106816/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0318 13:38:44.490680 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:38:44.490559 1154246 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/id_rsa...
	I0318 13:38:44.607447 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:38:44.607325 1154246 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/old-k8s-version-909137.rawdisk...
	I0318 13:38:44.607481 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | Writing magic tar header
	I0318 13:38:44.607501 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | Writing SSH key tar header
	I0318 13:38:44.607566 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:38:44.607495 1154246 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137 ...
	I0318 13:38:44.607677 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137
	I0318 13:38:44.607720 1154224 main.go:141] libmachine: (old-k8s-version-909137) Setting executable bit set on /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137 (perms=drwx------)
	I0318 13:38:44.607738 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines
	I0318 13:38:44.607757 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18429-1106816/.minikube
	I0318 13:38:44.607770 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18429-1106816
	I0318 13:38:44.607796 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0318 13:38:44.607813 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | Checking permissions on dir: /home/jenkins
	I0318 13:38:44.607828 1154224 main.go:141] libmachine: (old-k8s-version-909137) Setting executable bit set on /home/jenkins/minikube-integration/18429-1106816/.minikube/machines (perms=drwxr-xr-x)
	I0318 13:38:44.607843 1154224 main.go:141] libmachine: (old-k8s-version-909137) Setting executable bit set on /home/jenkins/minikube-integration/18429-1106816/.minikube (perms=drwxr-xr-x)
	I0318 13:38:44.607855 1154224 main.go:141] libmachine: (old-k8s-version-909137) Setting executable bit set on /home/jenkins/minikube-integration/18429-1106816 (perms=drwxrwxr-x)
	I0318 13:38:44.607865 1154224 main.go:141] libmachine: (old-k8s-version-909137) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0318 13:38:44.607876 1154224 main.go:141] libmachine: (old-k8s-version-909137) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0318 13:38:44.607887 1154224 main.go:141] libmachine: (old-k8s-version-909137) Creating domain...
	I0318 13:38:44.607916 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | Checking permissions on dir: /home
	I0318 13:38:44.607942 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | Skipping /home - not owner
	I0318 13:38:44.609038 1154224 main.go:141] libmachine: (old-k8s-version-909137) define libvirt domain using xml: 
	I0318 13:38:44.609064 1154224 main.go:141] libmachine: (old-k8s-version-909137) <domain type='kvm'>
	I0318 13:38:44.609076 1154224 main.go:141] libmachine: (old-k8s-version-909137)   <name>old-k8s-version-909137</name>
	I0318 13:38:44.609083 1154224 main.go:141] libmachine: (old-k8s-version-909137)   <memory unit='MiB'>2200</memory>
	I0318 13:38:44.609091 1154224 main.go:141] libmachine: (old-k8s-version-909137)   <vcpu>2</vcpu>
	I0318 13:38:44.609098 1154224 main.go:141] libmachine: (old-k8s-version-909137)   <features>
	I0318 13:38:44.609118 1154224 main.go:141] libmachine: (old-k8s-version-909137)     <acpi/>
	I0318 13:38:44.609125 1154224 main.go:141] libmachine: (old-k8s-version-909137)     <apic/>
	I0318 13:38:44.609130 1154224 main.go:141] libmachine: (old-k8s-version-909137)     <pae/>
	I0318 13:38:44.609137 1154224 main.go:141] libmachine: (old-k8s-version-909137)     
	I0318 13:38:44.609159 1154224 main.go:141] libmachine: (old-k8s-version-909137)   </features>
	I0318 13:38:44.609185 1154224 main.go:141] libmachine: (old-k8s-version-909137)   <cpu mode='host-passthrough'>
	I0318 13:38:44.609215 1154224 main.go:141] libmachine: (old-k8s-version-909137)   
	I0318 13:38:44.609240 1154224 main.go:141] libmachine: (old-k8s-version-909137)   </cpu>
	I0318 13:38:44.609250 1154224 main.go:141] libmachine: (old-k8s-version-909137)   <os>
	I0318 13:38:44.609262 1154224 main.go:141] libmachine: (old-k8s-version-909137)     <type>hvm</type>
	I0318 13:38:44.609274 1154224 main.go:141] libmachine: (old-k8s-version-909137)     <boot dev='cdrom'/>
	I0318 13:38:44.609288 1154224 main.go:141] libmachine: (old-k8s-version-909137)     <boot dev='hd'/>
	I0318 13:38:44.609302 1154224 main.go:141] libmachine: (old-k8s-version-909137)     <bootmenu enable='no'/>
	I0318 13:38:44.609330 1154224 main.go:141] libmachine: (old-k8s-version-909137)   </os>
	I0318 13:38:44.609342 1154224 main.go:141] libmachine: (old-k8s-version-909137)   <devices>
	I0318 13:38:44.609361 1154224 main.go:141] libmachine: (old-k8s-version-909137)     <disk type='file' device='cdrom'>
	I0318 13:38:44.609379 1154224 main.go:141] libmachine: (old-k8s-version-909137)       <source file='/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/boot2docker.iso'/>
	I0318 13:38:44.609399 1154224 main.go:141] libmachine: (old-k8s-version-909137)       <target dev='hdc' bus='scsi'/>
	I0318 13:38:44.609411 1154224 main.go:141] libmachine: (old-k8s-version-909137)       <readonly/>
	I0318 13:38:44.609421 1154224 main.go:141] libmachine: (old-k8s-version-909137)     </disk>
	I0318 13:38:44.609433 1154224 main.go:141] libmachine: (old-k8s-version-909137)     <disk type='file' device='disk'>
	I0318 13:38:44.609446 1154224 main.go:141] libmachine: (old-k8s-version-909137)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0318 13:38:44.609462 1154224 main.go:141] libmachine: (old-k8s-version-909137)       <source file='/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/old-k8s-version-909137.rawdisk'/>
	I0318 13:38:44.609474 1154224 main.go:141] libmachine: (old-k8s-version-909137)       <target dev='hda' bus='virtio'/>
	I0318 13:38:44.609497 1154224 main.go:141] libmachine: (old-k8s-version-909137)     </disk>
	I0318 13:38:44.609523 1154224 main.go:141] libmachine: (old-k8s-version-909137)     <interface type='network'>
	I0318 13:38:44.609549 1154224 main.go:141] libmachine: (old-k8s-version-909137)       <source network='mk-old-k8s-version-909137'/>
	I0318 13:38:44.609560 1154224 main.go:141] libmachine: (old-k8s-version-909137)       <model type='virtio'/>
	I0318 13:38:44.609570 1154224 main.go:141] libmachine: (old-k8s-version-909137)     </interface>
	I0318 13:38:44.609580 1154224 main.go:141] libmachine: (old-k8s-version-909137)     <interface type='network'>
	I0318 13:38:44.609589 1154224 main.go:141] libmachine: (old-k8s-version-909137)       <source network='default'/>
	I0318 13:38:44.609599 1154224 main.go:141] libmachine: (old-k8s-version-909137)       <model type='virtio'/>
	I0318 13:38:44.609621 1154224 main.go:141] libmachine: (old-k8s-version-909137)     </interface>
	I0318 13:38:44.609642 1154224 main.go:141] libmachine: (old-k8s-version-909137)     <serial type='pty'>
	I0318 13:38:44.609653 1154224 main.go:141] libmachine: (old-k8s-version-909137)       <target port='0'/>
	I0318 13:38:44.609664 1154224 main.go:141] libmachine: (old-k8s-version-909137)     </serial>
	I0318 13:38:44.609677 1154224 main.go:141] libmachine: (old-k8s-version-909137)     <console type='pty'>
	I0318 13:38:44.609689 1154224 main.go:141] libmachine: (old-k8s-version-909137)       <target type='serial' port='0'/>
	I0318 13:38:44.609714 1154224 main.go:141] libmachine: (old-k8s-version-909137)     </console>
	I0318 13:38:44.609725 1154224 main.go:141] libmachine: (old-k8s-version-909137)     <rng model='virtio'>
	I0318 13:38:44.609743 1154224 main.go:141] libmachine: (old-k8s-version-909137)       <backend model='random'>/dev/random</backend>
	I0318 13:38:44.609760 1154224 main.go:141] libmachine: (old-k8s-version-909137)     </rng>
	I0318 13:38:44.609768 1154224 main.go:141] libmachine: (old-k8s-version-909137)     
	I0318 13:38:44.609777 1154224 main.go:141] libmachine: (old-k8s-version-909137)     
	I0318 13:38:44.609786 1154224 main.go:141] libmachine: (old-k8s-version-909137)   </devices>
	I0318 13:38:44.609796 1154224 main.go:141] libmachine: (old-k8s-version-909137) </domain>
	I0318 13:38:44.609807 1154224 main.go:141] libmachine: (old-k8s-version-909137) 
	I0318 13:38:44.614102 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:33:9d:94 in network default
	I0318 13:38:44.614969 1154224 main.go:141] libmachine: (old-k8s-version-909137) Ensuring networks are active...
	I0318 13:38:44.614999 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:38:44.615813 1154224 main.go:141] libmachine: (old-k8s-version-909137) Ensuring network default is active
	I0318 13:38:44.616149 1154224 main.go:141] libmachine: (old-k8s-version-909137) Ensuring network mk-old-k8s-version-909137 is active
	I0318 13:38:44.616723 1154224 main.go:141] libmachine: (old-k8s-version-909137) Getting domain xml...
	I0318 13:38:44.617402 1154224 main.go:141] libmachine: (old-k8s-version-909137) Creating domain...
	I0318 13:38:45.868960 1154224 main.go:141] libmachine: (old-k8s-version-909137) Waiting to get IP...
	I0318 13:38:45.869762 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:38:45.870317 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:38:45.870374 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:38:45.870299 1154246 retry.go:31] will retry after 290.822684ms: waiting for machine to come up
	I0318 13:38:46.162874 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:38:46.163462 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:38:46.163494 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:38:46.163425 1154246 retry.go:31] will retry after 375.060474ms: waiting for machine to come up
	I0318 13:38:46.539990 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:38:46.540655 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:38:46.540705 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:38:46.540602 1154246 retry.go:31] will retry after 429.060227ms: waiting for machine to come up
	I0318 13:38:46.971151 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:38:46.971658 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:38:46.971691 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:38:46.971604 1154246 retry.go:31] will retry after 453.902765ms: waiting for machine to come up
	I0318 13:38:47.427366 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:38:47.427883 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:38:47.427916 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:38:47.427816 1154246 retry.go:31] will retry after 654.68348ms: waiting for machine to come up
	I0318 13:38:48.084403 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:38:48.085097 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:38:48.085141 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:38:48.085033 1154246 retry.go:31] will retry after 684.802787ms: waiting for machine to come up
	I0318 13:38:48.772014 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:38:48.772528 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:38:48.772560 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:38:48.772461 1154246 retry.go:31] will retry after 818.732899ms: waiting for machine to come up
	I0318 13:38:49.593122 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:38:49.593715 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:38:49.593745 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:38:49.593662 1154246 retry.go:31] will retry after 1.294798056s: waiting for machine to come up
	I0318 13:38:50.890541 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:38:50.890994 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:38:50.891070 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:38:50.890965 1154246 retry.go:31] will retry after 1.271577524s: waiting for machine to come up
	I0318 13:38:52.164620 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:38:52.165180 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:38:52.165212 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:38:52.165114 1154246 retry.go:31] will retry after 1.447227556s: waiting for machine to come up
	I0318 13:38:53.614195 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:38:53.614827 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:38:53.614861 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:38:53.614771 1154246 retry.go:31] will retry after 2.765609306s: waiting for machine to come up
	I0318 13:38:56.381666 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:38:56.382289 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:38:56.382318 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:38:56.382225 1154246 retry.go:31] will retry after 2.487143994s: waiting for machine to come up
	I0318 13:38:58.870604 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:38:58.871167 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:38:58.871193 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:38:58.871111 1154246 retry.go:31] will retry after 3.090822224s: waiting for machine to come up
	I0318 13:39:01.964618 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:39:01.965115 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:39:01.965145 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:39:01.965068 1154246 retry.go:31] will retry after 4.345365249s: waiting for machine to come up
	I0318 13:39:06.312811 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:39:06.313323 1154224 main.go:141] libmachine: (old-k8s-version-909137) Found IP for machine: 192.168.72.135
	I0318 13:39:06.313354 1154224 main.go:141] libmachine: (old-k8s-version-909137) Reserving static IP address...
	I0318 13:39:06.313387 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has current primary IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:39:06.313713 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-909137", mac: "52:54:00:58:c0:cb", ip: "192.168.72.135"} in network mk-old-k8s-version-909137
	I0318 13:39:06.389052 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | Getting to WaitForSSH function...
	I0318 13:39:06.389082 1154224 main.go:141] libmachine: (old-k8s-version-909137) Reserved static IP address: 192.168.72.135
	I0318 13:39:06.389096 1154224 main.go:141] libmachine: (old-k8s-version-909137) Waiting for SSH to be available...
	I0318 13:39:06.391978 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:39:06.392460 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:minikube Clientid:01:52:54:00:58:c0:cb}
	I0318 13:39:06.392492 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:39:06.392717 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | Using SSH client type: external
	I0318 13:39:06.392765 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | Using SSH private key: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/id_rsa (-rw-------)
	I0318 13:39:06.392803 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.135 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 13:39:06.392826 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | About to run SSH command:
	I0318 13:39:06.392850 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | exit 0
	I0318 13:39:06.521490 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | SSH cmd err, output: <nil>: 
	I0318 13:39:06.521763 1154224 main.go:141] libmachine: (old-k8s-version-909137) KVM machine creation complete!
	I0318 13:39:06.522092 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetConfigRaw
	I0318 13:39:06.522790 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:39:06.523015 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:39:06.523232 1154224 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0318 13:39:06.523250 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetState
	I0318 13:39:06.524582 1154224 main.go:141] libmachine: Detecting operating system of created instance...
	I0318 13:39:06.524599 1154224 main.go:141] libmachine: Waiting for SSH to be available...
	I0318 13:39:06.524606 1154224 main.go:141] libmachine: Getting to WaitForSSH function...
	I0318 13:39:06.524633 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:39:06.527124 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:39:06.527592 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:39:06.527623 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:39:06.527753 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:39:06.527973 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:39:06.528170 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:39:06.528353 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:39:06.528538 1154224 main.go:141] libmachine: Using SSH client type: native
	I0318 13:39:06.528777 1154224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.135 22 <nil> <nil>}
	I0318 13:39:06.528791 1154224 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0318 13:39:06.643920 1154224 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:39:06.643945 1154224 main.go:141] libmachine: Detecting the provisioner...
	I0318 13:39:06.643956 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:39:06.647072 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:39:06.647418 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:39:06.647449 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:39:06.647578 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:39:06.647773 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:39:06.647941 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:39:06.648083 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:39:06.648247 1154224 main.go:141] libmachine: Using SSH client type: native
	I0318 13:39:06.648532 1154224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.135 22 <nil> <nil>}
	I0318 13:39:06.648549 1154224 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0318 13:39:06.757825 1154224 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0318 13:39:06.757897 1154224 main.go:141] libmachine: found compatible host: buildroot
	I0318 13:39:06.757907 1154224 main.go:141] libmachine: Provisioning with buildroot...
	I0318 13:39:06.757918 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetMachineName
	I0318 13:39:06.758178 1154224 buildroot.go:166] provisioning hostname "old-k8s-version-909137"
	I0318 13:39:06.758203 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetMachineName
	I0318 13:39:06.758392 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:39:06.760960 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:39:06.761334 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:39:06.761362 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:39:06.761522 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:39:06.761725 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:39:06.761901 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:39:06.762082 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:39:06.762286 1154224 main.go:141] libmachine: Using SSH client type: native
	I0318 13:39:06.762515 1154224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.135 22 <nil> <nil>}
	I0318 13:39:06.762533 1154224 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-909137 && echo "old-k8s-version-909137" | sudo tee /etc/hostname
	I0318 13:39:06.889898 1154224 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-909137
	
	I0318 13:39:06.889931 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:39:06.892677 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:39:06.893102 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:39:06.893135 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:39:06.893281 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:39:06.893496 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:39:06.893694 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:39:06.893863 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:39:06.894086 1154224 main.go:141] libmachine: Using SSH client type: native
	I0318 13:39:06.894279 1154224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.135 22 <nil> <nil>}
	I0318 13:39:06.894303 1154224 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-909137' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-909137/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-909137' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 13:39:07.011332 1154224 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:39:07.011392 1154224 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18429-1106816/.minikube CaCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18429-1106816/.minikube}
	I0318 13:39:07.011418 1154224 buildroot.go:174] setting up certificates
	I0318 13:39:07.011428 1154224 provision.go:84] configureAuth start
	I0318 13:39:07.011438 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetMachineName
	I0318 13:39:07.011787 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetIP
	I0318 13:39:07.014699 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:39:07.015086 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:39:07.015110 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:39:07.015329 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:39:07.017721 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:39:07.018027 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:39:07.018057 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:39:07.018219 1154224 provision.go:143] copyHostCerts
	I0318 13:39:07.018311 1154224 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem, removing ...
	I0318 13:39:07.018325 1154224 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 13:39:07.018418 1154224 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem (1123 bytes)
	I0318 13:39:07.018526 1154224 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem, removing ...
	I0318 13:39:07.018537 1154224 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 13:39:07.018580 1154224 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem (1679 bytes)
	I0318 13:39:07.018657 1154224 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem, removing ...
	I0318 13:39:07.018668 1154224 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 13:39:07.018702 1154224 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem (1078 bytes)
	I0318 13:39:07.018765 1154224 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-909137 san=[127.0.0.1 192.168.72.135 localhost minikube old-k8s-version-909137]
	I0318 13:39:07.295657 1154224 provision.go:177] copyRemoteCerts
	I0318 13:39:07.295727 1154224 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 13:39:07.295769 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:39:07.298439 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:39:07.298845 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:39:07.298882 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:39:07.299039 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:39:07.299259 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:39:07.299435 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:39:07.299618 1154224 sshutil.go:53] new ssh client: &{IP:192.168.72.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/id_rsa Username:docker}
	I0318 13:39:07.386399 1154224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 13:39:07.416513 1154224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0318 13:39:07.447140 1154224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 13:39:07.477556 1154224 provision.go:87] duration metric: took 466.114347ms to configureAuth
	I0318 13:39:07.477592 1154224 buildroot.go:189] setting minikube options for container-runtime
	I0318 13:39:07.477830 1154224 config.go:182] Loaded profile config "old-k8s-version-909137": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0318 13:39:07.477909 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:39:07.480535 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:39:07.480929 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:39:07.480984 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:39:07.481178 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:39:07.481395 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:39:07.481563 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:39:07.481744 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:39:07.481923 1154224 main.go:141] libmachine: Using SSH client type: native
	I0318 13:39:07.482096 1154224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.135 22 <nil> <nil>}
	I0318 13:39:07.482111 1154224 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 13:39:07.779233 1154224 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 13:39:07.779294 1154224 main.go:141] libmachine: Checking connection to Docker...
	I0318 13:39:07.779309 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetURL
	I0318 13:39:07.780753 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | Using libvirt version 6000000
	I0318 13:39:07.783498 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:39:07.783919 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:39:07.783969 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:39:07.784127 1154224 main.go:141] libmachine: Docker is up and running!
	I0318 13:39:07.784146 1154224 main.go:141] libmachine: Reticulating splines...
	I0318 13:39:07.784155 1154224 client.go:171] duration metric: took 23.624862242s to LocalClient.Create
	I0318 13:39:07.784186 1154224 start.go:167] duration metric: took 23.624938007s to libmachine.API.Create "old-k8s-version-909137"
	I0318 13:39:07.784199 1154224 start.go:293] postStartSetup for "old-k8s-version-909137" (driver="kvm2")
	I0318 13:39:07.784215 1154224 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 13:39:07.784241 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:39:07.784511 1154224 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 13:39:07.784539 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:39:07.786621 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:39:07.786989 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:39:07.787019 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:39:07.787194 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:39:07.787387 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:39:07.787553 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:39:07.787698 1154224 sshutil.go:53] new ssh client: &{IP:192.168.72.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/id_rsa Username:docker}
	I0318 13:39:07.872424 1154224 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 13:39:07.877848 1154224 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 13:39:07.877880 1154224 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/addons for local assets ...
	I0318 13:39:07.877939 1154224 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/files for local assets ...
	I0318 13:39:07.878023 1154224 filesync.go:149] local asset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> 11141362.pem in /etc/ssl/certs
	I0318 13:39:07.878134 1154224 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 13:39:07.889346 1154224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:39:07.917197 1154224 start.go:296] duration metric: took 132.978085ms for postStartSetup
	I0318 13:39:07.917252 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetConfigRaw
	I0318 13:39:07.917883 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetIP
	I0318 13:39:07.920681 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:39:07.921137 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:39:07.921169 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:39:07.921550 1154224 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/config.json ...
	I0318 13:39:07.921731 1154224 start.go:128] duration metric: took 23.78117398s to createHost
	I0318 13:39:07.921765 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:39:07.923982 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:39:07.924374 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:39:07.924415 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:39:07.924582 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:39:07.924794 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:39:07.924993 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:39:07.925184 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:39:07.925358 1154224 main.go:141] libmachine: Using SSH client type: native
	I0318 13:39:07.925537 1154224 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.135 22 <nil> <nil>}
	I0318 13:39:07.925547 1154224 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 13:39:08.038095 1154224 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710769148.017154927
	
	I0318 13:39:08.038126 1154224 fix.go:216] guest clock: 1710769148.017154927
	I0318 13:39:08.038136 1154224 fix.go:229] Guest: 2024-03-18 13:39:08.017154927 +0000 UTC Remote: 2024-03-18 13:39:07.921750471 +0000 UTC m=+23.912467331 (delta=95.404456ms)
	I0318 13:39:08.038165 1154224 fix.go:200] guest clock delta is within tolerance: 95.404456ms
	I0318 13:39:08.038174 1154224 start.go:83] releasing machines lock for "old-k8s-version-909137", held for 23.897719822s
	I0318 13:39:08.038203 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:39:08.038501 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetIP
	I0318 13:39:08.041474 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:39:08.041880 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:39:08.041910 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:39:08.042111 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:39:08.042711 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:39:08.042922 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:39:08.043048 1154224 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 13:39:08.043102 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:39:08.043184 1154224 ssh_runner.go:195] Run: cat /version.json
	I0318 13:39:08.043212 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:39:08.045941 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:39:08.046164 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:39:08.046327 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:39:08.046362 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:39:08.046589 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:39:08.046682 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:39:08.046728 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:39:08.046769 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:39:08.046929 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:39:08.047012 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:39:08.047120 1154224 sshutil.go:53] new ssh client: &{IP:192.168.72.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/id_rsa Username:docker}
	I0318 13:39:08.047189 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:39:08.047329 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:39:08.047495 1154224 sshutil.go:53] new ssh client: &{IP:192.168.72.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/id_rsa Username:docker}
	I0318 13:39:08.150721 1154224 ssh_runner.go:195] Run: systemctl --version
	I0318 13:39:08.159325 1154224 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 13:39:08.341670 1154224 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 13:39:08.349246 1154224 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 13:39:08.349322 1154224 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 13:39:08.372010 1154224 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 13:39:08.372034 1154224 start.go:494] detecting cgroup driver to use...
	I0318 13:39:08.372102 1154224 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 13:39:08.392380 1154224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:39:08.408165 1154224 docker.go:217] disabling cri-docker service (if available) ...
	I0318 13:39:08.408225 1154224 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 13:39:08.423844 1154224 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 13:39:08.441499 1154224 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 13:39:08.570829 1154224 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 13:39:08.759583 1154224 docker.go:233] disabling docker service ...
	I0318 13:39:08.759663 1154224 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 13:39:08.782110 1154224 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 13:39:08.802841 1154224 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 13:39:08.937365 1154224 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 13:39:09.087724 1154224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 13:39:09.116270 1154224 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:39:09.147098 1154224 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0318 13:39:09.147168 1154224 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:39:09.161002 1154224 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 13:39:09.161082 1154224 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:39:09.173863 1154224 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:39:09.186323 1154224 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:39:09.199334 1154224 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 13:39:09.212092 1154224 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 13:39:09.223452 1154224 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 13:39:09.223514 1154224 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 13:39:09.239226 1154224 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 13:39:09.251237 1154224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:39:09.390008 1154224 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 13:39:09.547922 1154224 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 13:39:09.548017 1154224 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 13:39:09.553613 1154224 start.go:562] Will wait 60s for crictl version
	I0318 13:39:09.553676 1154224 ssh_runner.go:195] Run: which crictl
	I0318 13:39:09.557964 1154224 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 13:39:09.598687 1154224 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 13:39:09.598796 1154224 ssh_runner.go:195] Run: crio --version
	I0318 13:39:09.635042 1154224 ssh_runner.go:195] Run: crio --version
	I0318 13:39:09.674048 1154224 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0318 13:39:09.675316 1154224 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetIP
	I0318 13:39:09.678485 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:39:09.678972 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:39:09.679001 1154224 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:39:09.679260 1154224 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0318 13:39:09.686483 1154224 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:39:09.704230 1154224 kubeadm.go:877] updating cluster {Name:old-k8s-version-909137 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-909137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.135 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 13:39:09.704406 1154224 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0318 13:39:09.704473 1154224 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:39:09.747990 1154224 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 13:39:09.748064 1154224 ssh_runner.go:195] Run: which lz4
	I0318 13:39:09.752676 1154224 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 13:39:09.757560 1154224 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 13:39:09.757595 1154224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0318 13:39:11.803315 1154224 crio.go:444] duration metric: took 2.050671481s to copy over tarball
	I0318 13:39:11.803405 1154224 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 13:39:14.934208 1154224 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.130766341s)
	I0318 13:39:14.934268 1154224 crio.go:451] duration metric: took 3.13091664s to extract the tarball
	I0318 13:39:14.934279 1154224 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 13:39:14.980000 1154224 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:39:15.037162 1154224 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 13:39:15.037194 1154224 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 13:39:15.037298 1154224 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:39:15.037308 1154224 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0318 13:39:15.037326 1154224 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0318 13:39:15.037369 1154224 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 13:39:15.037404 1154224 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 13:39:15.037303 1154224 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0318 13:39:15.037291 1154224 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 13:39:15.037375 1154224 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 13:39:15.038974 1154224 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 13:39:15.039013 1154224 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 13:39:15.039055 1154224 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0318 13:39:15.039091 1154224 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:39:15.039129 1154224 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0318 13:39:15.039183 1154224 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 13:39:15.039234 1154224 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0318 13:39:15.039350 1154224 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 13:39:15.200725 1154224 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0318 13:39:15.202799 1154224 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0318 13:39:15.207100 1154224 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 13:39:15.217555 1154224 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0318 13:39:15.219743 1154224 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0318 13:39:15.319767 1154224 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0318 13:39:15.319827 1154224 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0318 13:39:15.319875 1154224 ssh_runner.go:195] Run: which crictl
	I0318 13:39:15.319875 1154224 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0318 13:39:15.319914 1154224 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 13:39:15.319960 1154224 ssh_runner.go:195] Run: which crictl
	I0318 13:39:15.330074 1154224 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0318 13:39:15.332390 1154224 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0318 13:39:15.357687 1154224 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0318 13:39:15.357737 1154224 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 13:39:15.357789 1154224 ssh_runner.go:195] Run: which crictl
	I0318 13:39:15.357799 1154224 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0318 13:39:15.357858 1154224 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0318 13:39:15.357908 1154224 ssh_runner.go:195] Run: which crictl
	I0318 13:39:15.383802 1154224 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0318 13:39:15.383850 1154224 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0318 13:39:15.383894 1154224 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0318 13:39:15.383932 1154224 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 13:39:15.383970 1154224 ssh_runner.go:195] Run: which crictl
	I0318 13:39:15.437274 1154224 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0318 13:39:15.437321 1154224 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0318 13:39:15.437374 1154224 ssh_runner.go:195] Run: which crictl
	I0318 13:39:15.454700 1154224 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0318 13:39:15.454757 1154224 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 13:39:15.454804 1154224 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 13:39:15.454819 1154224 ssh_runner.go:195] Run: which crictl
	I0318 13:39:15.454824 1154224 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0318 13:39:15.498132 1154224 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0318 13:39:15.498188 1154224 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0318 13:39:15.498221 1154224 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0318 13:39:15.498271 1154224 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0318 13:39:15.555502 1154224 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0318 13:39:15.555625 1154224 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0318 13:39:15.580846 1154224 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0318 13:39:15.593087 1154224 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0318 13:39:15.608883 1154224 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0318 13:39:15.627544 1154224 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0318 13:39:15.959790 1154224 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:39:16.103844 1154224 cache_images.go:92] duration metric: took 1.066629694s to LoadCachedImages
	W0318 13:39:16.103956 1154224 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0318 13:39:16.103975 1154224 kubeadm.go:928] updating node { 192.168.72.135 8443 v1.20.0 crio true true} ...
	I0318 13:39:16.104114 1154224 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-909137 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.135
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-909137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 13:39:16.104210 1154224 ssh_runner.go:195] Run: crio config
	I0318 13:39:16.159469 1154224 cni.go:84] Creating CNI manager for ""
	I0318 13:39:16.159489 1154224 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:39:16.159499 1154224 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 13:39:16.159523 1154224 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.135 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-909137 NodeName:old-k8s-version-909137 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.135"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.135 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0318 13:39:16.159701 1154224 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.135
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-909137"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.135
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.135"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 13:39:16.159786 1154224 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0318 13:39:16.171376 1154224 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 13:39:16.171470 1154224 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 13:39:16.182883 1154224 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0318 13:39:16.203177 1154224 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 13:39:16.223251 1154224 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0318 13:39:16.243146 1154224 ssh_runner.go:195] Run: grep 192.168.72.135	control-plane.minikube.internal$ /etc/hosts
	I0318 13:39:16.247581 1154224 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.135	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:39:16.263010 1154224 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:39:16.387688 1154224 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:39:16.406597 1154224 certs.go:68] Setting up /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137 for IP: 192.168.72.135
	I0318 13:39:16.406624 1154224 certs.go:194] generating shared ca certs ...
	I0318 13:39:16.406641 1154224 certs.go:226] acquiring lock for ca certs: {Name:mka24dbce78b8549c3bcfe7842863cce2dcda207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:39:16.406836 1154224 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key
	I0318 13:39:16.406958 1154224 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key
	I0318 13:39:16.406997 1154224 certs.go:256] generating profile certs ...
	I0318 13:39:16.407074 1154224 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/client.key
	I0318 13:39:16.407092 1154224 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/client.crt with IP's: []
	I0318 13:39:16.553221 1154224 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/client.crt ...
	I0318 13:39:16.553254 1154224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/client.crt: {Name:mk4feb3fc7b51c387e2dbc404e902f0bd4659c9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:39:16.553454 1154224 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/client.key ...
	I0318 13:39:16.553477 1154224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/client.key: {Name:mk9153c08e5c972f2782c39d4c5196a3ff57b199 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:39:16.553630 1154224 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/apiserver.key.e9806bd6
	I0318 13:39:16.553654 1154224 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/apiserver.crt.e9806bd6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.135]
	I0318 13:39:16.907950 1154224 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/apiserver.crt.e9806bd6 ...
	I0318 13:39:16.907982 1154224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/apiserver.crt.e9806bd6: {Name:mkd5705a12cd6883852f7e10c0c7a4114e6f706d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:39:16.908136 1154224 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/apiserver.key.e9806bd6 ...
	I0318 13:39:16.908151 1154224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/apiserver.key.e9806bd6: {Name:mkff0ace5797ea0596435b8fa2df49afad0c225a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:39:16.908223 1154224 certs.go:381] copying /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/apiserver.crt.e9806bd6 -> /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/apiserver.crt
	I0318 13:39:16.908314 1154224 certs.go:385] copying /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/apiserver.key.e9806bd6 -> /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/apiserver.key
	I0318 13:39:16.908421 1154224 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/proxy-client.key
	I0318 13:39:16.908441 1154224 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/proxy-client.crt with IP's: []
	I0318 13:39:17.060442 1154224 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/proxy-client.crt ...
	I0318 13:39:17.060474 1154224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/proxy-client.crt: {Name:mkfe23ff7bbe7838bd0e39a173701079a2ac38a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:39:17.060645 1154224 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/proxy-client.key ...
	I0318 13:39:17.060658 1154224 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/proxy-client.key: {Name:mk667f986b9ba2592d7b9ff9633d6f400685eb35 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:39:17.060969 1154224 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem (1338 bytes)
	W0318 13:39:17.061016 1154224 certs.go:480] ignoring /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136_empty.pem, impossibly tiny 0 bytes
	I0318 13:39:17.061027 1154224 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 13:39:17.061046 1154224 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem (1078 bytes)
	I0318 13:39:17.061073 1154224 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem (1123 bytes)
	I0318 13:39:17.061100 1154224 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem (1679 bytes)
	I0318 13:39:17.061139 1154224 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:39:17.061787 1154224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 13:39:17.097324 1154224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 13:39:17.132554 1154224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 13:39:17.174683 1154224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 13:39:17.219590 1154224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0318 13:39:17.259255 1154224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 13:39:17.316641 1154224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 13:39:17.356080 1154224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 13:39:17.384141 1154224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 13:39:17.412146 1154224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem --> /usr/share/ca-certificates/1114136.pem (1338 bytes)
	I0318 13:39:17.445413 1154224 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /usr/share/ca-certificates/11141362.pem (1708 bytes)
	I0318 13:39:17.472839 1154224 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 13:39:17.491686 1154224 ssh_runner.go:195] Run: openssl version
	I0318 13:39:17.498338 1154224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 13:39:17.510746 1154224 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:39:17.516060 1154224 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:39:17.516122 1154224 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:39:17.522206 1154224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 13:39:17.533874 1154224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1114136.pem && ln -fs /usr/share/ca-certificates/1114136.pem /etc/ssl/certs/1114136.pem"
	I0318 13:39:17.545550 1154224 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1114136.pem
	I0318 13:39:17.550986 1154224 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:26 /usr/share/ca-certificates/1114136.pem
	I0318 13:39:17.551044 1154224 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1114136.pem
	I0318 13:39:17.557726 1154224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1114136.pem /etc/ssl/certs/51391683.0"
	I0318 13:39:17.569851 1154224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11141362.pem && ln -fs /usr/share/ca-certificates/11141362.pem /etc/ssl/certs/11141362.pem"
	I0318 13:39:17.582102 1154224 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11141362.pem
	I0318 13:39:17.587695 1154224 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:26 /usr/share/ca-certificates/11141362.pem
	I0318 13:39:17.587763 1154224 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11141362.pem
	I0318 13:39:17.594296 1154224 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11141362.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 13:39:17.606307 1154224 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:39:17.611284 1154224 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 13:39:17.611352 1154224 kubeadm.go:391] StartCluster: {Name:old-k8s-version-909137 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-909137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.135 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:39:17.611462 1154224 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 13:39:17.611512 1154224 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:39:17.658087 1154224 cri.go:89] found id: ""
	I0318 13:39:17.658183 1154224 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0318 13:39:17.669818 1154224 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:39:17.680543 1154224 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:39:17.691190 1154224 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:39:17.691223 1154224 kubeadm.go:156] found existing configuration files:
	
	I0318 13:39:17.691274 1154224 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:39:17.701080 1154224 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:39:17.701171 1154224 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:39:17.711855 1154224 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:39:17.725326 1154224 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:39:17.725408 1154224 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:39:17.738401 1154224 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:39:17.749035 1154224 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:39:17.749098 1154224 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:39:17.759659 1154224 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:39:17.770870 1154224 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:39:17.770937 1154224 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:39:17.784772 1154224 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 13:39:18.079193 1154224 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 13:41:16.446724 1154224 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 13:41:16.446919 1154224 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0318 13:41:16.451885 1154224 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 13:41:16.451962 1154224 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 13:41:16.452074 1154224 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 13:41:16.452210 1154224 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 13:41:16.452403 1154224 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 13:41:16.452514 1154224 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 13:41:16.454397 1154224 out.go:204]   - Generating certificates and keys ...
	I0318 13:41:16.454498 1154224 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 13:41:16.454597 1154224 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 13:41:16.454711 1154224 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0318 13:41:16.454806 1154224 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0318 13:41:16.454894 1154224 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0318 13:41:16.454969 1154224 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0318 13:41:16.455063 1154224 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0318 13:41:16.455246 1154224 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-909137] and IPs [192.168.72.135 127.0.0.1 ::1]
	I0318 13:41:16.455343 1154224 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0318 13:41:16.455486 1154224 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-909137] and IPs [192.168.72.135 127.0.0.1 ::1]
	I0318 13:41:16.455567 1154224 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0318 13:41:16.455651 1154224 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0318 13:41:16.455714 1154224 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0318 13:41:16.455798 1154224 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 13:41:16.455880 1154224 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 13:41:16.455958 1154224 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 13:41:16.456050 1154224 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 13:41:16.456125 1154224 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 13:41:16.456272 1154224 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 13:41:16.456403 1154224 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 13:41:16.456464 1154224 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 13:41:16.456553 1154224 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 13:41:16.458285 1154224 out.go:204]   - Booting up control plane ...
	I0318 13:41:16.458390 1154224 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 13:41:16.458522 1154224 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 13:41:16.458627 1154224 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 13:41:16.458747 1154224 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 13:41:16.458968 1154224 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 13:41:16.459036 1154224 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 13:41:16.459135 1154224 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:41:16.459322 1154224 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:41:16.459413 1154224 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:41:16.459596 1154224 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:41:16.459685 1154224 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:41:16.459948 1154224 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:41:16.460049 1154224 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:41:16.460333 1154224 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:41:16.460433 1154224 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:41:16.460599 1154224 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:41:16.460610 1154224 kubeadm.go:309] 
	I0318 13:41:16.460642 1154224 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 13:41:16.460680 1154224 kubeadm.go:309] 		timed out waiting for the condition
	I0318 13:41:16.460689 1154224 kubeadm.go:309] 
	I0318 13:41:16.460718 1154224 kubeadm.go:309] 	This error is likely caused by:
	I0318 13:41:16.460746 1154224 kubeadm.go:309] 		- The kubelet is not running
	I0318 13:41:16.460831 1154224 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 13:41:16.460838 1154224 kubeadm.go:309] 
	I0318 13:41:16.460950 1154224 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 13:41:16.460983 1154224 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 13:41:16.461026 1154224 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 13:41:16.461034 1154224 kubeadm.go:309] 
	I0318 13:41:16.461177 1154224 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 13:41:16.461305 1154224 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 13:41:16.461317 1154224 kubeadm.go:309] 
	I0318 13:41:16.461461 1154224 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 13:41:16.461579 1154224 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 13:41:16.461705 1154224 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 13:41:16.461764 1154224 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 13:41:16.461820 1154224 kubeadm.go:309] 
	W0318 13:41:16.461902 1154224 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-909137] and IPs [192.168.72.135 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-909137] and IPs [192.168.72.135 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-909137] and IPs [192.168.72.135 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-909137] and IPs [192.168.72.135 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0318 13:41:16.461960 1154224 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 13:41:18.286233 1154224 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.824240904s)
	I0318 13:41:18.286341 1154224 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:41:18.302141 1154224 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:41:18.313519 1154224 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:41:18.313543 1154224 kubeadm.go:156] found existing configuration files:
	
	I0318 13:41:18.313599 1154224 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:41:18.328742 1154224 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:41:18.328818 1154224 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:41:18.340440 1154224 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:41:18.350414 1154224 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:41:18.350495 1154224 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:41:18.360532 1154224 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:41:18.370863 1154224 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:41:18.370926 1154224 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:41:18.382009 1154224 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:41:18.393030 1154224 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:41:18.393121 1154224 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:41:18.404057 1154224 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 13:41:18.487258 1154224 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 13:41:18.487426 1154224 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 13:41:18.671987 1154224 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 13:41:18.672151 1154224 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 13:41:18.672337 1154224 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 13:41:18.881853 1154224 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 13:41:18.883461 1154224 out.go:204]   - Generating certificates and keys ...
	I0318 13:41:18.883614 1154224 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 13:41:18.883742 1154224 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 13:41:18.883849 1154224 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 13:41:18.883946 1154224 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 13:41:18.884467 1154224 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 13:41:18.884653 1154224 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 13:41:18.885306 1154224 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 13:41:18.885677 1154224 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 13:41:18.886068 1154224 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 13:41:18.886684 1154224 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 13:41:18.886748 1154224 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 13:41:18.886834 1154224 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 13:41:19.154585 1154224 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 13:41:19.239083 1154224 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 13:41:19.641015 1154224 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 13:41:20.174969 1154224 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 13:41:20.194281 1154224 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 13:41:20.195752 1154224 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 13:41:20.195826 1154224 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 13:41:20.369502 1154224 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 13:41:20.371112 1154224 out.go:204]   - Booting up control plane ...
	I0318 13:41:20.371263 1154224 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 13:41:20.375121 1154224 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 13:41:20.376773 1154224 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 13:41:20.377917 1154224 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 13:41:20.389658 1154224 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 13:42:00.395224 1154224 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 13:42:00.395624 1154224 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:42:00.395861 1154224 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:42:05.396638 1154224 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:42:05.396871 1154224 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:42:15.397584 1154224 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:42:15.397863 1154224 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:42:35.398902 1154224 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:42:35.399143 1154224 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:43:15.398457 1154224 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:43:15.398674 1154224 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:43:15.398685 1154224 kubeadm.go:309] 
	I0318 13:43:15.398732 1154224 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 13:43:15.398785 1154224 kubeadm.go:309] 		timed out waiting for the condition
	I0318 13:43:15.398798 1154224 kubeadm.go:309] 
	I0318 13:43:15.398847 1154224 kubeadm.go:309] 	This error is likely caused by:
	I0318 13:43:15.398884 1154224 kubeadm.go:309] 		- The kubelet is not running
	I0318 13:43:15.398972 1154224 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 13:43:15.398980 1154224 kubeadm.go:309] 
	I0318 13:43:15.399073 1154224 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 13:43:15.399104 1154224 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 13:43:15.399131 1154224 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 13:43:15.399135 1154224 kubeadm.go:309] 
	I0318 13:43:15.399229 1154224 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 13:43:15.399298 1154224 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 13:43:15.399305 1154224 kubeadm.go:309] 
	I0318 13:43:15.399419 1154224 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 13:43:15.399494 1154224 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 13:43:15.399656 1154224 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 13:43:15.399807 1154224 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 13:43:15.399837 1154224 kubeadm.go:309] 
	I0318 13:43:15.401278 1154224 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 13:43:15.401387 1154224 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 13:43:15.401490 1154224 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0318 13:43:15.401605 1154224 kubeadm.go:393] duration metric: took 3m57.790259721s to StartCluster
	I0318 13:43:15.401663 1154224 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:43:15.401737 1154224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:43:15.450584 1154224 cri.go:89] found id: ""
	I0318 13:43:15.450620 1154224 logs.go:276] 0 containers: []
	W0318 13:43:15.450631 1154224 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:43:15.450640 1154224 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:43:15.450706 1154224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:43:15.490732 1154224 cri.go:89] found id: ""
	I0318 13:43:15.490763 1154224 logs.go:276] 0 containers: []
	W0318 13:43:15.490772 1154224 logs.go:278] No container was found matching "etcd"
	I0318 13:43:15.490780 1154224 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:43:15.490853 1154224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:43:15.531187 1154224 cri.go:89] found id: ""
	I0318 13:43:15.531211 1154224 logs.go:276] 0 containers: []
	W0318 13:43:15.531220 1154224 logs.go:278] No container was found matching "coredns"
	I0318 13:43:15.531227 1154224 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:43:15.531285 1154224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:43:15.570784 1154224 cri.go:89] found id: ""
	I0318 13:43:15.570823 1154224 logs.go:276] 0 containers: []
	W0318 13:43:15.570833 1154224 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:43:15.570840 1154224 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:43:15.570903 1154224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:43:15.606681 1154224 cri.go:89] found id: ""
	I0318 13:43:15.606710 1154224 logs.go:276] 0 containers: []
	W0318 13:43:15.606721 1154224 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:43:15.606730 1154224 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:43:15.606792 1154224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:43:15.649027 1154224 cri.go:89] found id: ""
	I0318 13:43:15.649052 1154224 logs.go:276] 0 containers: []
	W0318 13:43:15.649063 1154224 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:43:15.649072 1154224 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:43:15.649138 1154224 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:43:15.687065 1154224 cri.go:89] found id: ""
	I0318 13:43:15.687101 1154224 logs.go:276] 0 containers: []
	W0318 13:43:15.687114 1154224 logs.go:278] No container was found matching "kindnet"
	I0318 13:43:15.687128 1154224 logs.go:123] Gathering logs for kubelet ...
	I0318 13:43:15.687144 1154224 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:43:15.743684 1154224 logs.go:123] Gathering logs for dmesg ...
	I0318 13:43:15.743726 1154224 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:43:15.758893 1154224 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:43:15.758928 1154224 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:43:15.875382 1154224 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:43:15.875411 1154224 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:43:15.875425 1154224 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:43:15.970033 1154224 logs.go:123] Gathering logs for container status ...
	I0318 13:43:15.970073 1154224 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0318 13:43:16.014636 1154224 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0318 13:43:16.014704 1154224 out.go:239] * 
	* 
	W0318 13:43:16.014825 1154224 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 13:43:16.014852 1154224 out.go:239] * 
	* 
	W0318 13:43:16.015703 1154224 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:43:16.018783 1154224 out.go:177] 
	W0318 13:43:16.019977 1154224 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 13:43:16.020029 1154224 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0318 13:43:16.020049 1154224 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0318 13:43:16.021556 1154224 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-909137 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-909137 -n old-k8s-version-909137
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-909137 -n old-k8s-version-909137: exit status 6 (247.403757ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 13:43:16.314995 1156796 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-909137" does not appear in /home/jenkins/minikube-integration/18429-1106816/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-909137" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (272.33s)
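Note on the failure above: every control-plane container query came back empty and kubeadm reported the kubelet health endpoint on 127.0.0.1:10248 as unreachable, so the node never got past wait-control-plane. A minimal triage sketch, using only the commands the log itself suggests; the profile name old-k8s-version-909137 is taken from the test arguments, and the in-VM commands are run through minikube ssh:

	# Inspect the kubelet on the failing node (per the kubeadm troubleshooting hints above)
	minikube ssh -p old-k8s-version-909137 -- sudo systemctl status kubelet
	minikube ssh -p old-k8s-version-909137 -- sudo journalctl -xeu kubelet

	# List any control-plane containers CRI-O managed to start
	minikube ssh -p old-k8s-version-909137 -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a

	# Retry the start with the kubelet cgroup driver pinned to systemd, as the Suggestion line proposes
	out/minikube-linux-amd64 start -p old-k8s-version-909137 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd

If a retry brings the cluster up, the stale kubeconfig warning from the post-mortem status output can be cleared with `minikube update-context`, as that output itself notes.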

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-173036 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-173036 --alsologtostderr -v=3: exit status 82 (2m0.625735159s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-173036"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:41:52.316222 1156320 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:41:52.316377 1156320 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:41:52.316384 1156320 out.go:304] Setting ErrFile to fd 2...
	I0318 13:41:52.316389 1156320 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:41:52.316574 1156320 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
	I0318 13:41:52.316926 1156320 out.go:298] Setting JSON to false
	I0318 13:41:52.317038 1156320 mustload.go:65] Loading cluster: embed-certs-173036
	I0318 13:41:52.317406 1156320 config.go:182] Loaded profile config "embed-certs-173036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:41:52.317481 1156320 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036/config.json ...
	I0318 13:41:52.317650 1156320 mustload.go:65] Loading cluster: embed-certs-173036
	I0318 13:41:52.317790 1156320 config.go:182] Loaded profile config "embed-certs-173036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:41:52.317866 1156320 stop.go:39] StopHost: embed-certs-173036
	I0318 13:41:52.318338 1156320 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:41:52.318391 1156320 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:41:52.338957 1156320 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44611
	I0318 13:41:52.339592 1156320 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:41:52.340353 1156320 main.go:141] libmachine: Using API Version  1
	I0318 13:41:52.340378 1156320 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:41:52.341254 1156320 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:41:52.345295 1156320 out.go:177] * Stopping node "embed-certs-173036"  ...
	I0318 13:41:52.347214 1156320 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0318 13:41:52.347256 1156320 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:41:52.347552 1156320 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0318 13:41:52.347574 1156320 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:41:52.351088 1156320 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:41:52.351574 1156320 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:40:55 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:41:52.351604 1156320 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:41:52.351743 1156320 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:41:52.351954 1156320 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:41:52.352096 1156320 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:41:52.352264 1156320 sshutil.go:53] new ssh client: &{IP:192.168.50.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa Username:docker}
	I0318 13:41:52.520533 1156320 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0318 13:41:52.589756 1156320 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0318 13:41:52.662057 1156320 main.go:141] libmachine: Stopping "embed-certs-173036"...
	I0318 13:41:52.662095 1156320 main.go:141] libmachine: (embed-certs-173036) Calling .GetState
	I0318 13:41:52.664001 1156320 main.go:141] libmachine: (embed-certs-173036) Calling .Stop
	I0318 13:41:52.668041 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 0/120
	I0318 13:41:53.669586 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 1/120
	I0318 13:41:54.671543 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 2/120
	I0318 13:41:55.673132 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 3/120
	I0318 13:41:56.674793 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 4/120
	I0318 13:41:57.676688 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 5/120
	I0318 13:41:58.678856 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 6/120
	I0318 13:41:59.680271 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 7/120
	I0318 13:42:00.681761 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 8/120
	I0318 13:42:01.683411 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 9/120
	I0318 13:42:02.685704 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 10/120
	I0318 13:42:03.687072 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 11/120
	I0318 13:42:04.688610 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 12/120
	I0318 13:42:05.690172 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 13/120
	I0318 13:42:06.691892 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 14/120
	I0318 13:42:07.693670 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 15/120
	I0318 13:42:08.695252 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 16/120
	I0318 13:42:09.697083 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 17/120
	I0318 13:42:10.698913 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 18/120
	I0318 13:42:11.700047 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 19/120
	I0318 13:42:12.702286 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 20/120
	I0318 13:42:13.703862 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 21/120
	I0318 13:42:14.705595 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 22/120
	I0318 13:42:15.707849 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 23/120
	I0318 13:42:16.709275 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 24/120
	I0318 13:42:17.710685 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 25/120
	I0318 13:42:18.712270 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 26/120
	I0318 13:42:19.713663 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 27/120
	I0318 13:42:20.715098 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 28/120
	I0318 13:42:21.716534 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 29/120
	I0318 13:42:22.718486 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 30/120
	I0318 13:42:23.719846 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 31/120
	I0318 13:42:24.721317 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 32/120
	I0318 13:42:25.722823 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 33/120
	I0318 13:42:26.724370 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 34/120
	I0318 13:42:27.726541 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 35/120
	I0318 13:42:28.727863 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 36/120
	I0318 13:42:29.729414 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 37/120
	I0318 13:42:30.730858 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 38/120
	I0318 13:42:31.732402 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 39/120
	I0318 13:42:32.734053 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 40/120
	I0318 13:42:33.735641 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 41/120
	I0318 13:42:34.737523 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 42/120
	I0318 13:42:35.738871 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 43/120
	I0318 13:42:36.740280 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 44/120
	I0318 13:42:37.742298 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 45/120
	I0318 13:42:38.743730 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 46/120
	I0318 13:42:39.745051 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 47/120
	I0318 13:42:40.746597 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 48/120
	I0318 13:42:41.747914 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 49/120
	I0318 13:42:42.750121 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 50/120
	I0318 13:42:43.751587 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 51/120
	I0318 13:42:44.753132 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 52/120
	I0318 13:42:45.754650 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 53/120
	I0318 13:42:46.756085 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 54/120
	I0318 13:42:47.758092 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 55/120
	I0318 13:42:48.759542 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 56/120
	I0318 13:42:49.760900 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 57/120
	I0318 13:42:50.763023 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 58/120
	I0318 13:42:51.764356 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 59/120
	I0318 13:42:52.766566 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 60/120
	I0318 13:42:53.768092 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 61/120
	I0318 13:42:54.769430 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 62/120
	I0318 13:42:55.770724 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 63/120
	I0318 13:42:56.772145 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 64/120
	I0318 13:42:57.773679 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 65/120
	I0318 13:42:58.776026 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 66/120
	I0318 13:42:59.777342 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 67/120
	I0318 13:43:00.779326 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 68/120
	I0318 13:43:01.780764 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 69/120
	I0318 13:43:02.783135 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 70/120
	I0318 13:43:03.784471 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 71/120
	I0318 13:43:04.786961 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 72/120
	I0318 13:43:05.788261 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 73/120
	I0318 13:43:06.789603 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 74/120
	I0318 13:43:07.791563 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 75/120
	I0318 13:43:08.792873 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 76/120
	I0318 13:43:09.794215 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 77/120
	I0318 13:43:10.795509 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 78/120
	I0318 13:43:11.796944 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 79/120
	I0318 13:43:12.798968 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 80/120
	I0318 13:43:13.800231 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 81/120
	I0318 13:43:14.802198 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 82/120
	I0318 13:43:15.803412 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 83/120
	I0318 13:43:16.805062 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 84/120
	I0318 13:43:17.806595 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 85/120
	I0318 13:43:18.808041 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 86/120
	I0318 13:43:19.809441 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 87/120
	I0318 13:43:20.810712 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 88/120
	I0318 13:43:21.812173 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 89/120
	I0318 13:43:22.814225 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 90/120
	I0318 13:43:23.815666 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 91/120
	I0318 13:43:24.817116 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 92/120
	I0318 13:43:25.818437 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 93/120
	I0318 13:43:26.819788 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 94/120
	I0318 13:43:27.821727 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 95/120
	I0318 13:43:28.823154 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 96/120
	I0318 13:43:29.824576 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 97/120
	I0318 13:43:30.825995 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 98/120
	I0318 13:43:31.827487 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 99/120
	I0318 13:43:32.829700 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 100/120
	I0318 13:43:33.831083 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 101/120
	I0318 13:43:34.832573 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 102/120
	I0318 13:43:35.833973 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 103/120
	I0318 13:43:36.835505 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 104/120
	I0318 13:43:37.837718 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 105/120
	I0318 13:43:38.839067 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 106/120
	I0318 13:43:39.840765 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 107/120
	I0318 13:43:40.842088 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 108/120
	I0318 13:43:41.843602 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 109/120
	I0318 13:43:42.846015 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 110/120
	I0318 13:43:43.847496 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 111/120
	I0318 13:43:44.849037 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 112/120
	I0318 13:43:45.850516 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 113/120
	I0318 13:43:46.852254 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 114/120
	I0318 13:43:47.854187 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 115/120
	I0318 13:43:48.855627 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 116/120
	I0318 13:43:49.857245 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 117/120
	I0318 13:43:50.858608 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 118/120
	I0318 13:43:51.860087 1156320 main.go:141] libmachine: (embed-certs-173036) Waiting for machine to stop 119/120
	I0318 13:43:52.861411 1156320 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0318 13:43:52.861502 1156320 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0318 13:43:52.864878 1156320 out.go:177] 
	W0318 13:43:52.866190 1156320 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0318 13:43:52.866210 1156320 out.go:239] * 
	* 
	W0318 13:43:52.870982 1156320 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:43:52.872439 1156320 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-173036 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-173036 -n embed-certs-173036
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-173036 -n embed-certs-173036: exit status 3 (18.498400176s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 13:44:11.372643 1157044 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.191:22: connect: no route to host
	E0318 13:44:11.372666 1157044 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.191:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-173036" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.13s)
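Triage note: the serial/Stop failures in this group (embed-certs-173036 above, and no-preload-537236 and default-k8s-diff-port-569210 below) follow the same pattern: `minikube stop` backs up /etc/cni and /etc/kubernetes into /var/lib/minikube/backup, asks the kvm2 driver to stop the domain, then polls "Waiting for machine to stop N/120" roughly once per second; after 120 attempts it exits with GUEST_STOP_TIMEOUT (exit status 82), and the follow-up status probe cannot reach the guest on port 22 ("no route to host"). A minimal triage sketch on the KVM host, assuming the libvirt domain carries the profile name shown in the log; the virsh commands below are standard libvirt tooling and not something the test itself ran:

	virsh list --all                    # check whether the profile's domain is still reported as running
	virsh dominfo embed-certs-173036    # inspect the domain state the driver keeps polling
	virsh destroy embed-certs-173036    # hard power-off if the guest keeps ignoring the graceful stop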

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (138.95s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-537236 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-537236 --alsologtostderr -v=3: exit status 82 (2m0.474770582s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-537236"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:42:08.613359 1156512 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:42:08.613884 1156512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:42:08.613935 1156512 out.go:304] Setting ErrFile to fd 2...
	I0318 13:42:08.613953 1156512 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:42:08.614416 1156512 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
	I0318 13:42:08.615056 1156512 out.go:298] Setting JSON to false
	I0318 13:42:08.615156 1156512 mustload.go:65] Loading cluster: no-preload-537236
	I0318 13:42:08.615573 1156512 config.go:182] Loaded profile config "no-preload-537236": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 13:42:08.615651 1156512 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236/config.json ...
	I0318 13:42:08.615833 1156512 mustload.go:65] Loading cluster: no-preload-537236
	I0318 13:42:08.615985 1156512 config.go:182] Loaded profile config "no-preload-537236": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 13:42:08.616026 1156512 stop.go:39] StopHost: no-preload-537236
	I0318 13:42:08.616433 1156512 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:42:08.616474 1156512 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:42:08.631120 1156512 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43197
	I0318 13:42:08.631606 1156512 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:42:08.632209 1156512 main.go:141] libmachine: Using API Version  1
	I0318 13:42:08.632233 1156512 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:42:08.632602 1156512 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:42:08.634905 1156512 out.go:177] * Stopping node "no-preload-537236"  ...
	I0318 13:42:08.636503 1156512 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0318 13:42:08.636541 1156512 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:42:08.636751 1156512 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0318 13:42:08.636775 1156512 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:42:08.639753 1156512 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:42:08.640200 1156512 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:42:08.640230 1156512 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:42:08.640426 1156512 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:42:08.640615 1156512 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:42:08.640821 1156512 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:42:08.640999 1156512 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa Username:docker}
	I0318 13:42:08.736221 1156512 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0318 13:42:08.784341 1156512 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0318 13:42:08.826982 1156512 main.go:141] libmachine: Stopping "no-preload-537236"...
	I0318 13:42:08.827017 1156512 main.go:141] libmachine: (no-preload-537236) Calling .GetState
	I0318 13:42:08.828700 1156512 main.go:141] libmachine: (no-preload-537236) Calling .Stop
	I0318 13:42:08.832259 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 0/120
	I0318 13:42:09.833614 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 1/120
	I0318 13:42:10.834778 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 2/120
	I0318 13:42:11.836511 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 3/120
	I0318 13:42:12.837675 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 4/120
	I0318 13:42:13.839133 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 5/120
	I0318 13:42:14.840427 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 6/120
	I0318 13:42:15.841856 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 7/120
	I0318 13:42:16.843121 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 8/120
	I0318 13:42:17.844654 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 9/120
	I0318 13:42:18.847063 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 10/120
	I0318 13:42:19.849048 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 11/120
	I0318 13:42:20.850456 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 12/120
	I0318 13:42:21.851915 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 13/120
	I0318 13:42:22.853381 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 14/120
	I0318 13:42:23.855209 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 15/120
	I0318 13:42:24.856605 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 16/120
	I0318 13:42:25.858102 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 17/120
	I0318 13:42:26.859520 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 18/120
	I0318 13:42:27.860953 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 19/120
	I0318 13:42:28.863176 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 20/120
	I0318 13:42:29.864570 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 21/120
	I0318 13:42:30.867086 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 22/120
	I0318 13:42:31.869000 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 23/120
	I0318 13:42:32.870904 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 24/120
	I0318 13:42:33.872868 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 25/120
	I0318 13:42:34.874321 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 26/120
	I0318 13:42:35.875839 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 27/120
	I0318 13:42:36.877226 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 28/120
	I0318 13:42:37.878567 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 29/120
	I0318 13:42:38.880662 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 30/120
	I0318 13:42:39.881960 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 31/120
	I0318 13:42:40.883354 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 32/120
	I0318 13:42:41.884689 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 33/120
	I0318 13:42:42.886086 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 34/120
	I0318 13:42:43.887986 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 35/120
	I0318 13:42:44.889506 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 36/120
	I0318 13:42:45.891397 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 37/120
	I0318 13:42:46.892908 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 38/120
	I0318 13:42:47.894792 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 39/120
	I0318 13:42:48.897259 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 40/120
	I0318 13:42:49.898874 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 41/120
	I0318 13:42:50.900566 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 42/120
	I0318 13:42:51.901954 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 43/120
	I0318 13:42:52.904226 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 44/120
	I0318 13:42:53.906192 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 45/120
	I0318 13:42:54.907555 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 46/120
	I0318 13:42:55.909376 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 47/120
	I0318 13:42:56.910777 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 48/120
	I0318 13:42:57.912368 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 49/120
	I0318 13:42:58.914006 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 50/120
	I0318 13:42:59.915348 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 51/120
	I0318 13:43:00.917404 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 52/120
	I0318 13:43:01.918817 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 53/120
	I0318 13:43:02.920048 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 54/120
	I0318 13:43:03.921844 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 55/120
	I0318 13:43:04.923167 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 56/120
	I0318 13:43:05.924464 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 57/120
	I0318 13:43:06.925802 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 58/120
	I0318 13:43:07.927169 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 59/120
	I0318 13:43:08.928996 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 60/120
	I0318 13:43:09.930277 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 61/120
	I0318 13:43:10.931697 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 62/120
	I0318 13:43:11.932990 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 63/120
	I0318 13:43:12.934447 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 64/120
	I0318 13:43:13.936392 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 65/120
	I0318 13:43:14.937862 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 66/120
	I0318 13:43:15.939185 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 67/120
	I0318 13:43:16.941255 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 68/120
	I0318 13:43:17.942672 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 69/120
	I0318 13:43:18.944855 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 70/120
	I0318 13:43:19.946193 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 71/120
	I0318 13:43:20.947463 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 72/120
	I0318 13:43:21.949011 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 73/120
	I0318 13:43:22.950722 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 74/120
	I0318 13:43:23.952531 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 75/120
	I0318 13:43:24.953825 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 76/120
	I0318 13:43:25.955179 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 77/120
	I0318 13:43:26.956501 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 78/120
	I0318 13:43:27.957801 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 79/120
	I0318 13:43:28.960018 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 80/120
	I0318 13:43:29.961540 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 81/120
	I0318 13:43:30.962989 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 82/120
	I0318 13:43:31.964438 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 83/120
	I0318 13:43:32.965803 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 84/120
	I0318 13:43:33.967994 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 85/120
	I0318 13:43:34.969305 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 86/120
	I0318 13:43:35.970874 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 87/120
	I0318 13:43:36.972230 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 88/120
	I0318 13:43:37.973744 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 89/120
	I0318 13:43:38.976173 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 90/120
	I0318 13:43:39.977707 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 91/120
	I0318 13:43:40.978988 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 92/120
	I0318 13:43:41.980438 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 93/120
	I0318 13:43:42.981835 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 94/120
	I0318 13:43:43.983795 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 95/120
	I0318 13:43:44.985241 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 96/120
	I0318 13:43:45.986626 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 97/120
	I0318 13:43:46.987980 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 98/120
	I0318 13:43:47.989304 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 99/120
	I0318 13:43:48.991348 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 100/120
	I0318 13:43:49.992689 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 101/120
	I0318 13:43:50.993969 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 102/120
	I0318 13:43:51.995363 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 103/120
	I0318 13:43:52.996345 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 104/120
	I0318 13:43:53.997839 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 105/120
	I0318 13:43:54.999094 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 106/120
	I0318 13:43:56.000507 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 107/120
	I0318 13:43:57.001816 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 108/120
	I0318 13:43:58.003092 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 109/120
	I0318 13:43:59.005270 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 110/120
	I0318 13:44:00.006708 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 111/120
	I0318 13:44:01.008188 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 112/120
	I0318 13:44:02.009537 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 113/120
	I0318 13:44:03.010966 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 114/120
	I0318 13:44:04.013001 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 115/120
	I0318 13:44:05.014380 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 116/120
	I0318 13:44:06.015835 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 117/120
	I0318 13:44:07.017281 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 118/120
	I0318 13:44:08.018860 1156512 main.go:141] libmachine: (no-preload-537236) Waiting for machine to stop 119/120
	I0318 13:44:09.019468 1156512 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0318 13:44:09.019525 1156512 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0318 13:44:09.021647 1156512 out.go:177] 
	W0318 13:44:09.023390 1156512 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0318 13:44:09.023408 1156512 out.go:239] * 
	* 
	W0318 13:44:09.027881 1156512 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:44:09.029429 1156512 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-537236 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-537236 -n no-preload-537236
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-537236 -n no-preload-537236: exit status 3 (18.469850359s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 13:44:27.500647 1157119 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.7:22: connect: no route to host
	E0318 13:44:27.500668 1157119 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.7:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-537236" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (138.95s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (138.98s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-569210 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-569210 --alsologtostderr -v=3: exit status 82 (2m0.51332409s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-569210"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:43:10.282025 1156764 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:43:10.282286 1156764 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:43:10.282296 1156764 out.go:304] Setting ErrFile to fd 2...
	I0318 13:43:10.282300 1156764 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:43:10.282505 1156764 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
	I0318 13:43:10.282734 1156764 out.go:298] Setting JSON to false
	I0318 13:43:10.282854 1156764 mustload.go:65] Loading cluster: default-k8s-diff-port-569210
	I0318 13:43:10.283208 1156764 config.go:182] Loaded profile config "default-k8s-diff-port-569210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:43:10.283272 1156764 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/config.json ...
	I0318 13:43:10.283432 1156764 mustload.go:65] Loading cluster: default-k8s-diff-port-569210
	I0318 13:43:10.283526 1156764 config.go:182] Loaded profile config "default-k8s-diff-port-569210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:43:10.283558 1156764 stop.go:39] StopHost: default-k8s-diff-port-569210
	I0318 13:43:10.283946 1156764 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:43:10.283985 1156764 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:43:10.298944 1156764 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44151
	I0318 13:43:10.299404 1156764 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:43:10.299971 1156764 main.go:141] libmachine: Using API Version  1
	I0318 13:43:10.299997 1156764 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:43:10.300385 1156764 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:43:10.302888 1156764 out.go:177] * Stopping node "default-k8s-diff-port-569210"  ...
	I0318 13:43:10.304733 1156764 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0318 13:43:10.304768 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:43:10.304993 1156764 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0318 13:43:10.305021 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:43:10.307497 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:43:10.307988 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:43:10.308043 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:43:10.308149 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:43:10.308320 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:43:10.308467 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:43:10.308589 1156764 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa Username:docker}
	I0318 13:43:10.402970 1156764 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0318 13:43:10.468306 1156764 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0318 13:43:10.531232 1156764 main.go:141] libmachine: Stopping "default-k8s-diff-port-569210"...
	I0318 13:43:10.531273 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetState
	I0318 13:43:10.533069 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .Stop
	I0318 13:43:10.536807 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 0/120
	I0318 13:43:11.538219 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 1/120
	I0318 13:43:12.539836 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 2/120
	I0318 13:43:13.541168 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 3/120
	I0318 13:43:14.542462 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 4/120
	I0318 13:43:15.544475 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 5/120
	I0318 13:43:16.546939 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 6/120
	I0318 13:43:17.548164 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 7/120
	I0318 13:43:18.549590 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 8/120
	I0318 13:43:19.550967 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 9/120
	I0318 13:43:20.553359 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 10/120
	I0318 13:43:21.554765 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 11/120
	I0318 13:43:22.556397 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 12/120
	I0318 13:43:23.557718 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 13/120
	I0318 13:43:24.559088 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 14/120
	I0318 13:43:25.560495 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 15/120
	I0318 13:43:26.561906 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 16/120
	I0318 13:43:27.563206 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 17/120
	I0318 13:43:28.564653 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 18/120
	I0318 13:43:29.566085 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 19/120
	I0318 13:43:30.568434 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 20/120
	I0318 13:43:31.569926 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 21/120
	I0318 13:43:32.571210 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 22/120
	I0318 13:43:33.572693 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 23/120
	I0318 13:43:34.574018 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 24/120
	I0318 13:43:35.576194 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 25/120
	I0318 13:43:36.577738 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 26/120
	I0318 13:43:37.579062 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 27/120
	I0318 13:43:38.580460 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 28/120
	I0318 13:43:39.582192 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 29/120
	I0318 13:43:40.584162 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 30/120
	I0318 13:43:41.585370 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 31/120
	I0318 13:43:42.586759 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 32/120
	I0318 13:43:43.588081 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 33/120
	I0318 13:43:44.589526 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 34/120
	I0318 13:43:45.591618 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 35/120
	I0318 13:43:46.593084 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 36/120
	I0318 13:43:47.594561 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 37/120
	I0318 13:43:48.596015 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 38/120
	I0318 13:43:49.597506 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 39/120
	I0318 13:43:50.599785 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 40/120
	I0318 13:43:51.601127 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 41/120
	I0318 13:43:52.602620 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 42/120
	I0318 13:43:53.604079 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 43/120
	I0318 13:43:54.605565 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 44/120
	I0318 13:43:55.607465 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 45/120
	I0318 13:43:56.608935 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 46/120
	I0318 13:43:57.610282 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 47/120
	I0318 13:43:58.611709 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 48/120
	I0318 13:43:59.613132 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 49/120
	I0318 13:44:00.615203 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 50/120
	I0318 13:44:01.616566 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 51/120
	I0318 13:44:02.617830 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 52/120
	I0318 13:44:03.619143 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 53/120
	I0318 13:44:04.620705 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 54/120
	I0318 13:44:05.622598 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 55/120
	I0318 13:44:06.624084 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 56/120
	I0318 13:44:07.625485 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 57/120
	I0318 13:44:08.626740 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 58/120
	I0318 13:44:09.627958 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 59/120
	I0318 13:44:10.630028 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 60/120
	I0318 13:44:11.631232 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 61/120
	I0318 13:44:12.632524 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 62/120
	I0318 13:44:13.633911 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 63/120
	I0318 13:44:14.634892 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 64/120
	I0318 13:44:15.636828 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 65/120
	I0318 13:44:16.638177 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 66/120
	I0318 13:44:17.639344 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 67/120
	I0318 13:44:18.640774 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 68/120
	I0318 13:44:19.642011 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 69/120
	I0318 13:44:20.644245 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 70/120
	I0318 13:44:21.645633 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 71/120
	I0318 13:44:22.647064 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 72/120
	I0318 13:44:23.648618 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 73/120
	I0318 13:44:24.650260 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 74/120
	I0318 13:44:25.652428 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 75/120
	I0318 13:44:26.653795 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 76/120
	I0318 13:44:27.655077 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 77/120
	I0318 13:44:28.656536 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 78/120
	I0318 13:44:29.657961 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 79/120
	I0318 13:44:30.660154 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 80/120
	I0318 13:44:31.661487 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 81/120
	I0318 13:44:32.662775 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 82/120
	I0318 13:44:33.664354 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 83/120
	I0318 13:44:34.665678 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 84/120
	I0318 13:44:35.667520 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 85/120
	I0318 13:44:36.668872 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 86/120
	I0318 13:44:37.670074 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 87/120
	I0318 13:44:38.671489 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 88/120
	I0318 13:44:39.672976 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 89/120
	I0318 13:44:40.675175 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 90/120
	I0318 13:44:41.676787 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 91/120
	I0318 13:44:42.678021 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 92/120
	I0318 13:44:43.679256 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 93/120
	I0318 13:44:44.680503 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 94/120
	I0318 13:44:45.682267 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 95/120
	I0318 13:44:46.683778 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 96/120
	I0318 13:44:47.685093 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 97/120
	I0318 13:44:48.686306 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 98/120
	I0318 13:44:49.687438 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 99/120
	I0318 13:44:50.689544 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 100/120
	I0318 13:44:51.691010 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 101/120
	I0318 13:44:52.692290 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 102/120
	I0318 13:44:53.693811 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 103/120
	I0318 13:44:54.695457 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 104/120
	I0318 13:44:55.697475 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 105/120
	I0318 13:44:56.699070 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 106/120
	I0318 13:44:57.700401 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 107/120
	I0318 13:44:58.701895 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 108/120
	I0318 13:44:59.703321 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 109/120
	I0318 13:45:00.705509 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 110/120
	I0318 13:45:01.706944 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 111/120
	I0318 13:45:02.708269 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 112/120
	I0318 13:45:03.709630 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 113/120
	I0318 13:45:04.711031 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 114/120
	I0318 13:45:05.713370 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 115/120
	I0318 13:45:06.714716 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 116/120
	I0318 13:45:07.715795 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 117/120
	I0318 13:45:08.717116 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 118/120
	I0318 13:45:09.718531 1156764 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for machine to stop 119/120
	I0318 13:45:10.719426 1156764 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0318 13:45:10.719478 1156764 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0318 13:45:10.721860 1156764 out.go:177] 
	W0318 13:45:10.723739 1156764 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0318 13:45:10.723749 1156764 out.go:239] * 
	* 
	W0318 13:45:10.728202 1156764 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:45:10.729580 1156764 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-569210 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-569210 -n default-k8s-diff-port-569210
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-569210 -n default-k8s-diff-port-569210: exit status 3 (18.465123668s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 13:45:29.196705 1157609 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.3:22: connect: no route to host
	E0318 13:45:29.196728 1157609 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.3:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-569210" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (138.98s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.51s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-909137 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-909137 create -f testdata/busybox.yaml: exit status 1 (43.859028ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-909137" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-909137 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-909137 -n old-k8s-version-909137
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-909137 -n old-k8s-version-909137: exit status 6 (230.557489ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 13:43:16.590172 1156835 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-909137" does not appear in /home/jenkins/minikube-integration/18429-1106816/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-909137" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-909137 -n old-k8s-version-909137
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-909137 -n old-k8s-version-909137: exit status 6 (234.31822ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 13:43:16.825384 1156865 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-909137" does not appear in /home/jenkins/minikube-integration/18429-1106816/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-909137" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.51s)
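The deployment fails before touching the cluster: the kubeconfig has no entry for this profile, which is also why the status probe prints the stale-context warning. A minimal recovery sketch using the fix the warning itself suggests (profile name from this run):

	# rewrite the kubeconfig entry for this profile
	out/minikube-linux-amd64 update-context -p old-k8s-version-909137
	# confirm the context exists again, then retry the deployment
	kubectl config get-contexts old-k8s-version-909137
	kubectl --context old-k8s-version-909137 create -f testdata/busybox.yaml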

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (109.58s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-909137 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-909137 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m49.283731454s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-909137 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-909137 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-909137 describe deploy/metrics-server -n kube-system: exit status 1 (46.654954ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-909137" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-909137 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-909137 -n old-k8s-version-909137
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-909137 -n old-k8s-version-909137: exit status 6 (245.30358ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 13:45:06.400949 1157535 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-909137" does not appear in /home/jenkins/minikube-integration/18429-1106816/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-909137" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (109.58s)
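The addon manifests are applied from inside the VM, and the apply fails because the apiserver on localhost:8443 is not answering; the host-side kubeconfig is missing the profile entry as well, so the describe fallback also fails. A minimal pre-check sketch before retrying the enable (profile name and image overrides copied from the command above):

	# confirm the control plane is actually serving before touching addons
	out/minikube-linux-amd64 status -p old-k8s-version-909137
	out/minikube-linux-amd64 ssh -p old-k8s-version-909137 "sudo crictl ps"
	# retry only once the apiserver answers on 8443
	out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-909137 \
		--images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain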

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-173036 -n embed-certs-173036
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-173036 -n embed-certs-173036: exit status 3 (3.167693192s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 13:44:14.540708 1157149 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.191:22: connect: no route to host
	E0318 13:44:14.540733 1157149 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.191:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-173036 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-173036 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.155491195s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.191:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-173036 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-173036 -n embed-certs-173036
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-173036 -n embed-certs-173036: exit status 3 (3.060464877s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 13:44:23.756702 1157222 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.191:22: connect: no route to host
	E0318 13:44:23.756728 1157222 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.191:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-173036" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)
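The assertion at start_stop_delete_test.go:241 expects the host to read "Stopped" after the stop step; because the guest is unreachable instead, the dashboard enable cannot even list containers over SSH and exits with MK_ADDON_ENABLE_PAUSED. The sequence the test drives can be replayed with the commands already shown in this block; a sketch:

	# post-stop host state must be "Stopped" for the next step to be meaningful
	out/minikube-linux-amd64 status --format='{{.Host}}' -p embed-certs-173036 -n embed-certs-173036
	# the enable that fails while the guest is unreachable
	out/minikube-linux-amd64 addons enable dashboard -p embed-certs-173036 \
		--images=MetricsScraper=registry.k8s.io/echoserver:1.4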

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-537236 -n no-preload-537236
E0318 13:44:30.297350 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-537236 -n no-preload-537236: exit status 3 (3.167179746s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 13:44:30.668650 1157304 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.7:22: connect: no route to host
	E0318 13:44:30.668685 1157304 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.7:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-537236 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-537236 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.155114236s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.7:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-537236 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-537236 -n no-preload-537236
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-537236 -n no-preload-537236: exit status 3 (3.06078303s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 13:44:39.884769 1157375 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.7:22: connect: no route to host
	E0318 13:44:39.884794 1157375 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.7:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-537236" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
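This is the same unreachable-after-stop pattern as the embed-certs group above. The two artifacts the error box asks to attach to a GitHub issue can be collected as follows; a sketch, noting that the temp-file name is the one printed above and will differ between runs:

	# the general log bundle the box asks to attach
	out/minikube-linux-amd64 logs -p no-preload-537236 --file=logs.txt
	# the addon-specific log referenced in the box
	cp /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log .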

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (765.8s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-909137 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-909137 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m42.367185102s)

                                                
                                                
-- stdout --
	* [old-k8s-version-909137] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18429
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18429-1106816/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18429-1106816/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-909137" primary control-plane node in "old-k8s-version-909137" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-909137" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:45:12.991062 1157708 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:45:12.991338 1157708 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:45:12.991349 1157708 out.go:304] Setting ErrFile to fd 2...
	I0318 13:45:12.991353 1157708 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:45:12.991523 1157708 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
	I0318 13:45:12.992086 1157708 out.go:298] Setting JSON to false
	I0318 13:45:12.993087 1157708 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":19660,"bootTime":1710749853,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 13:45:12.993154 1157708 start.go:139] virtualization: kvm guest
	I0318 13:45:12.995310 1157708 out.go:177] * [old-k8s-version-909137] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 13:45:12.996725 1157708 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 13:45:12.996740 1157708 notify.go:220] Checking for updates...
	I0318 13:45:12.999698 1157708 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:45:13.001149 1157708 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 13:45:13.002481 1157708 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18429-1106816/.minikube
	I0318 13:45:13.004093 1157708 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 13:45:13.005474 1157708 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:45:13.007075 1157708 config.go:182] Loaded profile config "old-k8s-version-909137": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0318 13:45:13.007475 1157708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:45:13.007524 1157708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:45:13.022444 1157708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35303
	I0318 13:45:13.022884 1157708 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:45:13.023468 1157708 main.go:141] libmachine: Using API Version  1
	I0318 13:45:13.023490 1157708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:45:13.023810 1157708 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:45:13.023977 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:45:13.025947 1157708 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0318 13:45:13.027507 1157708 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:45:13.027793 1157708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:45:13.027827 1157708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:45:13.042303 1157708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37229
	I0318 13:45:13.042738 1157708 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:45:13.043140 1157708 main.go:141] libmachine: Using API Version  1
	I0318 13:45:13.043157 1157708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:45:13.043478 1157708 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:45:13.043657 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:45:13.077111 1157708 out.go:177] * Using the kvm2 driver based on existing profile
	I0318 13:45:13.078345 1157708 start.go:297] selected driver: kvm2
	I0318 13:45:13.078363 1157708 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-909137 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-909137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.135 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:45:13.078477 1157708 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:45:13.079155 1157708 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:45:13.079221 1157708 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18429-1106816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 13:45:13.094009 1157708 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 13:45:13.094488 1157708 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:45:13.094579 1157708 cni.go:84] Creating CNI manager for ""
	I0318 13:45:13.094597 1157708 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:45:13.094657 1157708 start.go:340] cluster config:
	{Name:old-k8s-version-909137 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-909137 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.135 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:45:13.094785 1157708 iso.go:125] acquiring lock: {Name:mke5f9989ad60de6f54f25c411af7da9f3932a4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:45:13.096688 1157708 out.go:177] * Starting "old-k8s-version-909137" primary control-plane node in "old-k8s-version-909137" cluster
	I0318 13:45:13.097892 1157708 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0318 13:45:13.097931 1157708 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0318 13:45:13.097947 1157708 cache.go:56] Caching tarball of preloaded images
	I0318 13:45:13.098031 1157708 preload.go:173] Found /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 13:45:13.098043 1157708 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0318 13:45:13.098161 1157708 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/config.json ...
	I0318 13:45:13.098368 1157708 start.go:360] acquireMachinesLock for old-k8s-version-909137: {Name:mk0b1a2e71faf079d0c16c4e1393bdff17be3dfd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:49:23.501689 1157708 start.go:364] duration metric: took 4m10.403284517s to acquireMachinesLock for "old-k8s-version-909137"
	I0318 13:49:23.501769 1157708 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:49:23.501783 1157708 fix.go:54] fixHost starting: 
	I0318 13:49:23.502238 1157708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:49:23.502279 1157708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:49:23.520223 1157708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41799
	I0318 13:49:23.520696 1157708 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:49:23.521273 1157708 main.go:141] libmachine: Using API Version  1
	I0318 13:49:23.521304 1157708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:49:23.521693 1157708 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:49:23.521934 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:49:23.522089 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetState
	I0318 13:49:23.523696 1157708 fix.go:112] recreateIfNeeded on old-k8s-version-909137: state=Stopped err=<nil>
	I0318 13:49:23.523738 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	W0318 13:49:23.523894 1157708 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:49:23.526253 1157708 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-909137" ...
	I0318 13:49:23.527811 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .Start
	I0318 13:49:23.528000 1157708 main.go:141] libmachine: (old-k8s-version-909137) Ensuring networks are active...
	I0318 13:49:23.528714 1157708 main.go:141] libmachine: (old-k8s-version-909137) Ensuring network default is active
	I0318 13:49:23.529036 1157708 main.go:141] libmachine: (old-k8s-version-909137) Ensuring network mk-old-k8s-version-909137 is active
	I0318 13:49:23.529491 1157708 main.go:141] libmachine: (old-k8s-version-909137) Getting domain xml...
	I0318 13:49:23.530324 1157708 main.go:141] libmachine: (old-k8s-version-909137) Creating domain...
	I0318 13:49:24.765648 1157708 main.go:141] libmachine: (old-k8s-version-909137) Waiting to get IP...
	I0318 13:49:24.766664 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:24.767122 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:24.767182 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:24.767081 1158507 retry.go:31] will retry after 250.785143ms: waiting for machine to come up
	I0318 13:49:25.019755 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:25.020238 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:25.020273 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:25.020185 1158507 retry.go:31] will retry after 346.894257ms: waiting for machine to come up
	I0318 13:49:25.368815 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:25.369335 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:25.369372 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:25.369268 1158507 retry.go:31] will retry after 367.316359ms: waiting for machine to come up
	I0318 13:49:25.737835 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:25.738404 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:25.738438 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:25.738337 1158507 retry.go:31] will retry after 479.291041ms: waiting for machine to come up
	I0318 13:49:26.219103 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:26.219568 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:26.219599 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:26.219523 1158507 retry.go:31] will retry after 552.309382ms: waiting for machine to come up
	I0318 13:49:26.773363 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:26.773905 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:26.773935 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:26.773857 1158507 retry.go:31] will retry after 703.087388ms: waiting for machine to come up
	I0318 13:49:27.478730 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:27.479330 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:27.479363 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:27.479270 1158507 retry.go:31] will retry after 1.136606935s: waiting for machine to come up
	I0318 13:49:28.617273 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:28.617711 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:28.617740 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:28.617665 1158507 retry.go:31] will retry after 947.818334ms: waiting for machine to come up
	I0318 13:49:29.566814 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:29.567157 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:29.567177 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:29.567121 1158507 retry.go:31] will retry after 1.328243934s: waiting for machine to come up
	I0318 13:49:30.897514 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:30.898041 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:30.898068 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:30.897988 1158507 retry.go:31] will retry after 2.213855703s: waiting for machine to come up
	I0318 13:49:33.113781 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:33.114303 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:33.114332 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:33.114245 1158507 retry.go:31] will retry after 2.075415123s: waiting for machine to come up
	I0318 13:49:35.191096 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:35.191631 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:35.191665 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:35.191582 1158507 retry.go:31] will retry after 3.520577528s: waiting for machine to come up
	I0318 13:49:38.713777 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:38.714129 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:38.714242 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:38.714143 1158507 retry.go:31] will retry after 3.46520277s: waiting for machine to come up
	I0318 13:49:42.181399 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.181856 1157708 main.go:141] libmachine: (old-k8s-version-909137) Found IP for machine: 192.168.72.135
	I0318 13:49:42.181888 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has current primary IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.181897 1157708 main.go:141] libmachine: (old-k8s-version-909137) Reserving static IP address...
	I0318 13:49:42.182344 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "old-k8s-version-909137", mac: "52:54:00:58:c0:cb", ip: "192.168.72.135"} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.182387 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | skip adding static IP to network mk-old-k8s-version-909137 - found existing host DHCP lease matching {name: "old-k8s-version-909137", mac: "52:54:00:58:c0:cb", ip: "192.168.72.135"}
	I0318 13:49:42.182424 1157708 main.go:141] libmachine: (old-k8s-version-909137) Reserved static IP address: 192.168.72.135
	I0318 13:49:42.182453 1157708 main.go:141] libmachine: (old-k8s-version-909137) Waiting for SSH to be available...
	I0318 13:49:42.182470 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | Getting to WaitForSSH function...
	I0318 13:49:42.184589 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.184958 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.184999 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.185061 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | Using SSH client type: external
	I0318 13:49:42.185120 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | Using SSH private key: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/id_rsa (-rw-------)
	I0318 13:49:42.185162 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.135 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 13:49:42.185189 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | About to run SSH command:
	I0318 13:49:42.185204 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | exit 0
	I0318 13:49:42.312570 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | SSH cmd err, output: <nil>: 
	I0318 13:49:42.313005 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetConfigRaw
	I0318 13:49:42.313693 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetIP
	I0318 13:49:42.316497 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.316931 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.316965 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.317239 1157708 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/config.json ...
	I0318 13:49:42.317442 1157708 machine.go:94] provisionDockerMachine start ...
	I0318 13:49:42.317462 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:49:42.317688 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:42.320076 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.320444 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.320485 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.320655 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:42.320818 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:42.320980 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:42.321093 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:42.321257 1157708 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:42.321510 1157708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.135 22 <nil> <nil>}
	I0318 13:49:42.321528 1157708 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 13:49:42.433138 1157708 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 13:49:42.433186 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetMachineName
	I0318 13:49:42.433524 1157708 buildroot.go:166] provisioning hostname "old-k8s-version-909137"
	I0318 13:49:42.433558 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetMachineName
	I0318 13:49:42.433808 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:42.436869 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.437230 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.437264 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.437506 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:42.437739 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:42.437915 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:42.438092 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:42.438285 1157708 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:42.438513 1157708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.135 22 <nil> <nil>}
	I0318 13:49:42.438534 1157708 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-909137 && echo "old-k8s-version-909137" | sudo tee /etc/hostname
	I0318 13:49:42.560410 1157708 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-909137
	
	I0318 13:49:42.560439 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:42.563304 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.563637 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.563673 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.563837 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:42.564053 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:42.564236 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:42.564377 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:42.564581 1157708 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:42.564802 1157708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.135 22 <nil> <nil>}
	I0318 13:49:42.564820 1157708 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-909137' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-909137/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-909137' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 13:49:42.687138 1157708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:49:42.687173 1157708 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18429-1106816/.minikube CaCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18429-1106816/.minikube}
	I0318 13:49:42.687199 1157708 buildroot.go:174] setting up certificates
	I0318 13:49:42.687211 1157708 provision.go:84] configureAuth start
	I0318 13:49:42.687223 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetMachineName
	I0318 13:49:42.687600 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetIP
	I0318 13:49:42.690738 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.691148 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.691179 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.691316 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:42.693730 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.694070 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.694092 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.694255 1157708 provision.go:143] copyHostCerts
	I0318 13:49:42.694336 1157708 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem, removing ...
	I0318 13:49:42.694350 1157708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 13:49:42.694422 1157708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem (1078 bytes)
	I0318 13:49:42.694597 1157708 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem, removing ...
	I0318 13:49:42.694614 1157708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 13:49:42.694652 1157708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem (1123 bytes)
	I0318 13:49:42.694747 1157708 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem, removing ...
	I0318 13:49:42.694756 1157708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 13:49:42.694775 1157708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem (1679 bytes)
	I0318 13:49:42.694823 1157708 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-909137 san=[127.0.0.1 192.168.72.135 localhost minikube old-k8s-version-909137]
	I0318 13:49:42.920182 1157708 provision.go:177] copyRemoteCerts
	I0318 13:49:42.920255 1157708 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 13:49:42.920295 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:42.923074 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.923374 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.923408 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.923533 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:42.923755 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:42.923957 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:42.924095 1157708 sshutil.go:53] new ssh client: &{IP:192.168.72.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/id_rsa Username:docker}
	I0318 13:49:43.007024 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 13:49:43.033952 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0318 13:49:43.060218 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 13:49:43.086087 1157708 provision.go:87] duration metric: took 398.861833ms to configureAuth
	I0318 13:49:43.086116 1157708 buildroot.go:189] setting minikube options for container-runtime
	I0318 13:49:43.086326 1157708 config.go:182] Loaded profile config "old-k8s-version-909137": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0318 13:49:43.086442 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:43.089200 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.089534 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:43.089562 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.089758 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:43.089965 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:43.090134 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:43.090286 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:43.090501 1157708 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:43.090718 1157708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.135 22 <nil> <nil>}
	I0318 13:49:43.090744 1157708 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 13:49:43.401681 1157708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 13:49:43.401715 1157708 machine.go:97] duration metric: took 1.084258164s to provisionDockerMachine
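The SSH commands above are pushed to the guest through libmachine's native SSH client. A minimal sketch of an equivalent remote run using golang.org/x/crypto/ssh; the host, user and key path come from the log, everything else (helper name, error handling) is an assumption.

package sshexec

import (
	"os"

	"golang.org/x/crypto/ssh"
)

// runRemote dials the guest with the machine's private key and runs a single
// command, returning its combined output. Illustrative sketch only.
func runRemote(addr, user, keyPath, cmd string) (string, error) {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test VMs only
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

Usage with the values from this log would look like runRemote("192.168.72.135:22", "docker", <id_rsa path>, "sudo systemctl restart crio").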
	I0318 13:49:43.401728 1157708 start.go:293] postStartSetup for "old-k8s-version-909137" (driver="kvm2")
	I0318 13:49:43.401739 1157708 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 13:49:43.401759 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:49:43.402073 1157708 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 13:49:43.402116 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:43.404775 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.405164 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:43.405192 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.405335 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:43.405525 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:43.405740 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:43.405884 1157708 sshutil.go:53] new ssh client: &{IP:192.168.72.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/id_rsa Username:docker}
	I0318 13:49:43.493000 1157708 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 13:49:43.497705 1157708 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 13:49:43.497740 1157708 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/addons for local assets ...
	I0318 13:49:43.497818 1157708 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/files for local assets ...
	I0318 13:49:43.497931 1157708 filesync.go:149] local asset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> 11141362.pem in /etc/ssl/certs
	I0318 13:49:43.498058 1157708 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 13:49:43.509185 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:49:43.535401 1157708 start.go:296] duration metric: took 133.657179ms for postStartSetup
	I0318 13:49:43.535454 1157708 fix.go:56] duration metric: took 20.033670705s for fixHost
	I0318 13:49:43.535482 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:43.538464 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.538964 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:43.538998 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.539178 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:43.539386 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:43.539528 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:43.539702 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:43.539899 1157708 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:43.540120 1157708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.135 22 <nil> <nil>}
	I0318 13:49:43.540133 1157708 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 13:49:43.649578 1157708 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710769783.596310102
	
	I0318 13:49:43.649610 1157708 fix.go:216] guest clock: 1710769783.596310102
	I0318 13:49:43.649621 1157708 fix.go:229] Guest: 2024-03-18 13:49:43.596310102 +0000 UTC Remote: 2024-03-18 13:49:43.535459129 +0000 UTC m=+270.592972067 (delta=60.850973ms)
	I0318 13:49:43.649656 1157708 fix.go:200] guest clock delta is within tolerance: 60.850973ms
	I0318 13:49:43.649663 1157708 start.go:83] releasing machines lock for "old-k8s-version-909137", held for 20.147918331s
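The clock check above parses the guest's `date +%s.%N` output and compares it with the host clock. A small sketch of that comparison; the 1s tolerance used here is an assumption for illustration, not necessarily minikube's threshold.

package clockcheck

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta converts the output of `date +%s.%N` on the guest into a
// time.Time and returns the skew relative to the host clock.
func guestClockDelta(dateOutput string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(dateOutput), 64)
	if err != nil {
		return 0, fmt.Errorf("parsing guest clock %q: %w", dateOutput, err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return host.Sub(guest), nil
}

// withinTolerance reports whether the skew is acceptable; 1s is an assumed
// threshold for illustration.
func withinTolerance(delta time.Duration) bool {
	if delta < 0 {
		delta = -delta
	}
	return delta < time.Second
}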
	I0318 13:49:43.649689 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:49:43.650002 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetIP
	I0318 13:49:43.652712 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.653114 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:43.653148 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.653278 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:49:43.653873 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:49:43.654112 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:49:43.654198 1157708 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 13:49:43.654264 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:43.654333 1157708 ssh_runner.go:195] Run: cat /version.json
	I0318 13:49:43.654369 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:43.657281 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.657390 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.657741 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:43.657811 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:43.657830 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.657855 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:43.657918 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.658016 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:43.658065 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:43.658199 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:43.658245 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:43.658326 1157708 sshutil.go:53] new ssh client: &{IP:192.168.72.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/id_rsa Username:docker}
	I0318 13:49:43.658411 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:43.658574 1157708 sshutil.go:53] new ssh client: &{IP:192.168.72.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/id_rsa Username:docker}
	I0318 13:49:43.737787 1157708 ssh_runner.go:195] Run: systemctl --version
	I0318 13:49:43.769157 1157708 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 13:49:43.920376 1157708 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 13:49:43.928165 1157708 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 13:49:43.928253 1157708 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 13:49:43.946102 1157708 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 13:49:43.946133 1157708 start.go:494] detecting cgroup driver to use...
	I0318 13:49:43.946210 1157708 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 13:49:43.963482 1157708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:49:43.978540 1157708 docker.go:217] disabling cri-docker service (if available) ...
	I0318 13:49:43.978613 1157708 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 13:49:43.999525 1157708 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 13:49:44.021242 1157708 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 13:49:44.198165 1157708 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 13:49:44.363408 1157708 docker.go:233] disabling docker service ...
	I0318 13:49:44.363474 1157708 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 13:49:44.383527 1157708 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 13:49:44.398888 1157708 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 13:49:44.547711 1157708 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 13:49:44.662762 1157708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 13:49:44.678786 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:49:44.702931 1157708 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0318 13:49:44.703004 1157708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:49:44.721453 1157708 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 13:49:44.721519 1157708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:49:44.739487 1157708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:49:44.757379 1157708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
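Those sed invocations amount to rewriting two keys in the CRI-O drop-in. A rough Go equivalent of the same edit (file path taken from the log; the regexes and the omission of the conmon_cgroup handling are simplifications for illustration):

package crioconf

import (
	"os"
	"regexp"
)

// setCrioDefaults rewrites pause_image and cgroup_manager in the CRI-O
// drop-in, mirroring the sed commands in the log. Illustrative sketch only.
func setCrioDefaults(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.2"`))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	return os.WriteFile(path, data, 0o644)
}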
	I0318 13:49:44.777508 1157708 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 13:49:44.798788 1157708 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 13:49:44.814280 1157708 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 13:49:44.814383 1157708 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 13:49:44.836507 1157708 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 13:49:44.852614 1157708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:49:44.994352 1157708 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 13:49:45.184815 1157708 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 13:49:45.184907 1157708 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 13:49:45.190649 1157708 start.go:562] Will wait 60s for crictl version
	I0318 13:49:45.190724 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:45.195265 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 13:49:45.242737 1157708 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 13:49:45.242850 1157708 ssh_runner.go:195] Run: crio --version
	I0318 13:49:45.288154 1157708 ssh_runner.go:195] Run: crio --version
	I0318 13:49:45.331441 1157708 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0318 13:49:45.332975 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetIP
	I0318 13:49:45.336274 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:45.336701 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:45.336753 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:45.336985 1157708 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0318 13:49:45.343147 1157708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
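That bash one-liner drops any stale host.minikube.internal entry and appends the current gateway IP via a temp file. A Go rendering of the same idea (entry name and IP from the log; the helper itself is assumed, and the real command runs under sudo on the guest):

package hostsfile

import (
	"os"
	"strings"
)

// ensureHostEntry rewrites the hosts file so exactly one line maps the given
// name, mirroring the grep/echo/cp pipeline in the log above. Sketch only.
func ensureHostEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}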
	I0318 13:49:45.361840 1157708 kubeadm.go:877] updating cluster {Name:old-k8s-version-909137 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-909137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.135 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 13:49:45.361982 1157708 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0318 13:49:45.362040 1157708 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:49:45.419490 1157708 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 13:49:45.419587 1157708 ssh_runner.go:195] Run: which lz4
	I0318 13:49:45.424689 1157708 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 13:49:45.431110 1157708 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 13:49:45.431155 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0318 13:49:47.510385 1157708 crio.go:444] duration metric: took 2.085724633s to copy over tarball
	I0318 13:49:47.510483 1157708 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 13:49:50.947045 1157708 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.436514023s)
	I0318 13:49:50.947084 1157708 crio.go:451] duration metric: took 3.436661543s to extract the tarball
	I0318 13:49:50.947095 1157708 ssh_runner.go:146] rm: /preloaded.tar.lz4
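The "duration metric" lines around the preload are wall-clock timing wrapped around the remote copy and extraction. A sketch of the extraction step with the same timing pattern (command arguments taken from the log; the helper itself is an assumed illustration):

package preload

import (
	"fmt"
	"os/exec"
	"time"
)

// extractPreload unpacks the preloaded image tarball the way the log shows and
// reports how long the extraction took. Illustrative sketch only.
func extractPreload() (time.Duration, error) {
	start := time.Now()
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		return 0, fmt.Errorf("tar failed: %v: %s", err, out)
	}
	return time.Since(start), nil
}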
	I0318 13:49:51.007406 1157708 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:49:51.048060 1157708 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 13:49:51.048091 1157708 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 13:49:51.048181 1157708 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:49:51.048228 1157708 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 13:49:51.048287 1157708 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0318 13:49:51.048346 1157708 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0318 13:49:51.048398 1157708 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 13:49:51.048432 1157708 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0318 13:49:51.048232 1157708 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 13:49:51.048183 1157708 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 13:49:51.049960 1157708 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0318 13:49:51.050268 1157708 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:49:51.050288 1157708 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0318 13:49:51.050355 1157708 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 13:49:51.050594 1157708 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 13:49:51.050627 1157708 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0318 13:49:51.050584 1157708 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 13:49:51.051230 1157708 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 13:49:51.219906 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0318 13:49:51.220734 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 13:49:51.235283 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0318 13:49:51.236445 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0318 13:49:51.246700 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0318 13:49:51.251299 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0318 13:49:51.311054 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0318 13:49:51.311292 1157708 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0318 13:49:51.311336 1157708 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0318 13:49:51.311389 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:51.343594 1157708 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0318 13:49:51.343649 1157708 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 13:49:51.343739 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:51.391608 1157708 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0318 13:49:51.391657 1157708 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 13:49:51.391706 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:51.448987 1157708 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0318 13:49:51.449029 1157708 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0318 13:49:51.449058 1157708 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 13:49:51.449061 1157708 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0318 13:49:51.449088 1157708 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0318 13:49:51.449103 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:51.449035 1157708 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0318 13:49:51.449135 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0318 13:49:51.449178 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:51.449207 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 13:49:51.449245 1157708 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0318 13:49:51.449267 1157708 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 13:49:51.449317 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:51.449210 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:51.449223 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0318 13:49:51.469614 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0318 13:49:51.469613 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0318 13:49:51.562455 1157708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0318 13:49:51.562506 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0318 13:49:51.564170 1157708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0318 13:49:51.564269 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0318 13:49:51.578471 1157708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0318 13:49:51.615689 1157708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0318 13:49:51.615708 1157708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0318 13:49:51.657287 1157708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0318 13:49:51.657361 1157708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0318 13:49:51.956746 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:49:52.106933 1157708 cache_images.go:92] duration metric: took 1.058823514s to LoadCachedImages
	W0318 13:49:52.107046 1157708 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
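The "needs transfer" decisions above come down to asking the runtime for an image ID and falling back to the on-disk cache when it is missing or mismatched. A rough sketch of those two calls using os/exec (the exact commands mirror the log; the control flow around them is assumed):

package imagecache

import (
	"os/exec"
	"strings"
)

// imageInRuntime reports whether CRI-O/podman already has the image by asking
// for its ID, matching the `podman image inspect --format {{.Id}}` calls above.
func imageInRuntime(image string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	return err == nil && strings.TrimSpace(string(out)) != ""
}

// removeFromRuntime mirrors the `crictl rmi` calls issued when the stored hash
// does not match what the runtime reports.
func removeFromRuntime(image string) error {
	return exec.Command("sudo", "/usr/bin/crictl", "rmi", image).Run()
}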
	I0318 13:49:52.107064 1157708 kubeadm.go:928] updating node { 192.168.72.135 8443 v1.20.0 crio true true} ...
	I0318 13:49:52.107259 1157708 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-909137 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.135
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-909137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 13:49:52.107348 1157708 ssh_runner.go:195] Run: crio config
	I0318 13:49:52.163493 1157708 cni.go:84] Creating CNI manager for ""
	I0318 13:49:52.163526 1157708 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:49:52.163546 1157708 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 13:49:52.163572 1157708 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.135 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-909137 NodeName:old-k8s-version-909137 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.135"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.135 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0318 13:49:52.163740 1157708 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.135
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-909137"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.135
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.135"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 13:49:52.163818 1157708 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0318 13:49:52.175668 1157708 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 13:49:52.175740 1157708 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 13:49:52.186745 1157708 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0318 13:49:52.209877 1157708 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 13:49:52.232921 1157708 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0318 13:49:52.256571 1157708 ssh_runner.go:195] Run: grep 192.168.72.135	control-plane.minikube.internal$ /etc/hosts
	I0318 13:49:52.262776 1157708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.135	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:49:52.278435 1157708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:49:52.422705 1157708 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:49:52.443710 1157708 certs.go:68] Setting up /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137 for IP: 192.168.72.135
	I0318 13:49:52.443740 1157708 certs.go:194] generating shared ca certs ...
	I0318 13:49:52.443760 1157708 certs.go:226] acquiring lock for ca certs: {Name:mka24dbce78b8549c3bcfe7842863cce2dcda207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:49:52.443951 1157708 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key
	I0318 13:49:52.444009 1157708 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key
	I0318 13:49:52.444023 1157708 certs.go:256] generating profile certs ...
	I0318 13:49:52.444155 1157708 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/client.key
	I0318 13:49:52.444239 1157708 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/apiserver.key.e9806bd6
	I0318 13:49:52.444303 1157708 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/proxy-client.key
	I0318 13:49:52.444492 1157708 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem (1338 bytes)
	W0318 13:49:52.444532 1157708 certs.go:480] ignoring /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136_empty.pem, impossibly tiny 0 bytes
	I0318 13:49:52.444548 1157708 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 13:49:52.444585 1157708 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem (1078 bytes)
	I0318 13:49:52.444633 1157708 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem (1123 bytes)
	I0318 13:49:52.444672 1157708 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem (1679 bytes)
	I0318 13:49:52.444729 1157708 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:49:52.445363 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 13:49:52.506720 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 13:49:52.550057 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 13:49:52.586845 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 13:49:52.627933 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0318 13:49:52.681479 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 13:49:52.722052 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 13:49:52.755021 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 13:49:52.782181 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 13:49:52.808269 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem --> /usr/share/ca-certificates/1114136.pem (1338 bytes)
	I0318 13:49:52.835041 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /usr/share/ca-certificates/11141362.pem (1708 bytes)
	I0318 13:49:52.863776 1157708 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 13:49:52.883579 1157708 ssh_runner.go:195] Run: openssl version
	I0318 13:49:52.889846 1157708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 13:49:52.902288 1157708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:49:52.908241 1157708 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:49:52.908302 1157708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:49:52.915392 1157708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 13:49:52.928374 1157708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1114136.pem && ln -fs /usr/share/ca-certificates/1114136.pem /etc/ssl/certs/1114136.pem"
	I0318 13:49:52.941444 1157708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1114136.pem
	I0318 13:49:52.946463 1157708 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:26 /usr/share/ca-certificates/1114136.pem
	I0318 13:49:52.946514 1157708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1114136.pem
	I0318 13:49:52.953447 1157708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1114136.pem /etc/ssl/certs/51391683.0"
	I0318 13:49:52.966231 1157708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11141362.pem && ln -fs /usr/share/ca-certificates/11141362.pem /etc/ssl/certs/11141362.pem"
	I0318 13:49:52.977986 1157708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11141362.pem
	I0318 13:49:52.982748 1157708 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:26 /usr/share/ca-certificates/11141362.pem
	I0318 13:49:52.982809 1157708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11141362.pem
	I0318 13:49:52.988715 1157708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11141362.pem /etc/ssl/certs/3ec20f2e.0"
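Each certificate above is installed by hashing its subject with openssl and symlinking it under /etc/ssl/certs/<hash>.0 so OpenSSL-based clients can find it. A sketch of that hash-and-link pairing (paths and commands from the log; the helper name is an assumption):

package catrust

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash computes the OpenSSL subject hash of certPath and creates
// the /etc/ssl/certs/<hash>.0 symlink the log shows being set up with `ln -fs`.
func linkBySubjectHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // replace any stale link, like ln -fs does
	return os.Symlink(certPath, link)
}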
	I0318 13:49:53.000141 1157708 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:49:53.005021 1157708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 13:49:53.011156 1157708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 13:49:53.018329 1157708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 13:49:53.025687 1157708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 13:49:53.032199 1157708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 13:49:53.039048 1157708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
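`openssl x509 -checkend 86400` succeeds only if the certificate is still valid 24 hours from now. An equivalent check in Go (PEM parsing via the standard library; the helper name is an assumption):

package certcheck

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d,
// matching what `openssl x509 -checkend` is used for in the log above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}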
	I0318 13:49:53.045789 1157708 kubeadm.go:391] StartCluster: {Name:old-k8s-version-909137 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-909137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.135 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:49:53.045882 1157708 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 13:49:53.045931 1157708 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:49:53.085682 1157708 cri.go:89] found id: ""
	I0318 13:49:53.085788 1157708 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 13:49:53.098063 1157708 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 13:49:53.098091 1157708 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 13:49:53.098098 1157708 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 13:49:53.098153 1157708 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 13:49:53.109692 1157708 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:49:53.110853 1157708 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-909137" does not appear in /home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 13:49:53.111862 1157708 kubeconfig.go:62] /home/jenkins/minikube-integration/18429-1106816/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-909137" cluster setting kubeconfig missing "old-k8s-version-909137" context setting]
	I0318 13:49:53.113334 1157708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/kubeconfig: {Name:mk9c139f2702214315ee08dd7c5d02f739047458 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:49:53.115135 1157708 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 13:49:53.125910 1157708 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.135
	I0318 13:49:53.125949 1157708 kubeadm.go:1154] stopping kube-system containers ...
	I0318 13:49:53.125965 1157708 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 13:49:53.126029 1157708 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:49:53.172181 1157708 cri.go:89] found id: ""
	I0318 13:49:53.172268 1157708 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 13:49:53.189585 1157708 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:49:53.200744 1157708 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:49:53.200768 1157708 kubeadm.go:156] found existing configuration files:
	
	I0318 13:49:53.200811 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:49:53.211176 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:49:53.211250 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:49:53.221744 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:49:53.231342 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:49:53.231404 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:49:53.242162 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:49:53.252408 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:49:53.252480 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:49:53.262690 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:49:53.272829 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:49:53.272903 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:49:53.283287 1157708 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:49:53.294124 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:53.437482 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:54.297415 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:54.588919 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:54.758204 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:54.863030 1157708 api_server.go:52] waiting for apiserver process to appear ...
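The repeated pgrep lines that follow are a fixed-interval poll for the kube-apiserver process. A sketch of that loop (the 500ms interval is read off the timestamps below; the timeout and helper name are assumptions):

package apiwait

import (
	"errors"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls `pgrep -xnf kube-apiserver.*minikube.*` until
// the process shows up or the timeout elapses, mirroring the log lines below.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil // process found
		}
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("timed out waiting for kube-apiserver process")
}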
	I0318 13:49:54.863140 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:55.363708 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:55.863301 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:56.364064 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:56.863896 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:57.363240 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:57.863621 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:58.363294 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:58.864051 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:59.363586 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:59.863802 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:00.363862 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:00.864277 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:01.363381 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:01.864307 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:02.363278 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:02.863315 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:03.363591 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:03.864049 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:04.363310 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:04.863306 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:05.363706 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:05.863618 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:06.364183 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:06.863776 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:07.363832 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:07.863261 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:08.364243 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:08.863539 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:09.364037 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:09.863621 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:10.363425 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:10.863422 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:11.363353 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:11.863485 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:12.363548 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:12.864070 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:13.364111 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:13.863871 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:14.363958 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:14.863570 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:15.364185 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:15.863974 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:16.364010 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:16.863484 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:17.363832 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:17.864149 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:18.363366 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:18.863782 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:19.363987 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:19.863437 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:20.364050 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:20.863961 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:21.364126 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:21.863264 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:22.363519 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:22.863814 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:23.364019 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:23.864134 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:24.363510 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:24.863263 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:25.364027 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:25.863203 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:26.364219 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:26.863262 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:27.363889 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:27.864113 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:28.364069 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:28.863405 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:29.363996 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:29.863574 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:30.363749 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:30.863564 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:31.363250 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:31.863320 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:32.363894 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:32.864166 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:33.363425 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:33.864021 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:34.363963 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:34.864011 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:35.364122 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:35.863559 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:36.364154 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:36.863814 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:37.364232 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:37.863934 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:38.363994 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:38.863278 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:39.363665 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:39.863948 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:40.364081 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:40.864124 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:41.363964 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:41.863593 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:42.363750 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:42.864002 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:43.364189 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:43.863868 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:44.363454 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:44.863940 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:45.363913 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:45.863288 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:46.363884 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:46.863361 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:47.363383 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:47.864064 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:48.363218 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:48.864086 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:49.363457 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:49.863292 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:50.363308 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:50.863428 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:51.363583 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:51.863562 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:52.363995 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:52.863463 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:53.363919 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:53.863936 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:54.363671 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:54.863567 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:50:54.863709 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:50:54.911905 1157708 cri.go:89] found id: ""
	I0318 13:50:54.911942 1157708 logs.go:276] 0 containers: []
	W0318 13:50:54.911954 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:50:54.911962 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:50:54.912031 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:50:54.962141 1157708 cri.go:89] found id: ""
	I0318 13:50:54.962170 1157708 logs.go:276] 0 containers: []
	W0318 13:50:54.962182 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:50:54.962188 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:50:54.962269 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:50:55.001597 1157708 cri.go:89] found id: ""
	I0318 13:50:55.001639 1157708 logs.go:276] 0 containers: []
	W0318 13:50:55.001652 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:50:55.001660 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:50:55.001725 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:50:55.042660 1157708 cri.go:89] found id: ""
	I0318 13:50:55.042695 1157708 logs.go:276] 0 containers: []
	W0318 13:50:55.042708 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:50:55.042716 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:50:55.042775 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:50:55.082095 1157708 cri.go:89] found id: ""
	I0318 13:50:55.082128 1157708 logs.go:276] 0 containers: []
	W0318 13:50:55.082139 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:50:55.082146 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:50:55.082211 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:50:55.120938 1157708 cri.go:89] found id: ""
	I0318 13:50:55.120969 1157708 logs.go:276] 0 containers: []
	W0318 13:50:55.121000 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:50:55.121008 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:50:55.121081 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:50:55.159247 1157708 cri.go:89] found id: ""
	I0318 13:50:55.159280 1157708 logs.go:276] 0 containers: []
	W0318 13:50:55.159292 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:50:55.159300 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:50:55.159366 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:50:55.200130 1157708 cri.go:89] found id: ""
	I0318 13:50:55.200161 1157708 logs.go:276] 0 containers: []
	W0318 13:50:55.200170 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:50:55.200180 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:50:55.200193 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:50:55.254113 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:50:55.254154 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:50:55.268984 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:50:55.269027 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:50:55.402079 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:50:55.402106 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:50:55.402123 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:50:55.468627 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:50:55.468674 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:50:58.016860 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:58.031684 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:50:58.031747 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:50:58.073389 1157708 cri.go:89] found id: ""
	I0318 13:50:58.073415 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.073427 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:50:58.073434 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:50:58.073497 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:50:58.114439 1157708 cri.go:89] found id: ""
	I0318 13:50:58.114471 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.114483 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:50:58.114490 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:50:58.114553 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:50:58.165440 1157708 cri.go:89] found id: ""
	I0318 13:50:58.165466 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.165476 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:50:58.165484 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:50:58.165569 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:50:58.207083 1157708 cri.go:89] found id: ""
	I0318 13:50:58.207117 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.207129 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:50:58.207137 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:50:58.207227 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:50:58.252945 1157708 cri.go:89] found id: ""
	I0318 13:50:58.252973 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.252985 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:50:58.252993 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:50:58.253055 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:50:58.292437 1157708 cri.go:89] found id: ""
	I0318 13:50:58.292464 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.292474 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:50:58.292480 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:50:58.292530 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:50:58.335359 1157708 cri.go:89] found id: ""
	I0318 13:50:58.335403 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.335415 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:50:58.335423 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:50:58.335511 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:50:58.381434 1157708 cri.go:89] found id: ""
	I0318 13:50:58.381473 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.381484 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:50:58.381494 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:50:58.381511 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:50:58.432270 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:50:58.432319 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:50:58.447658 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:50:58.447686 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:50:58.523163 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:50:58.523186 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:50:58.523207 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:50:58.599544 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:50:58.599586 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:01.141653 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:01.156996 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:01.157070 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:01.192720 1157708 cri.go:89] found id: ""
	I0318 13:51:01.192762 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.192775 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:01.192785 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:01.192866 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:01.232678 1157708 cri.go:89] found id: ""
	I0318 13:51:01.232705 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.232716 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:01.232723 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:01.232795 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:01.270637 1157708 cri.go:89] found id: ""
	I0318 13:51:01.270666 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.270676 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:01.270684 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:01.270746 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:01.308891 1157708 cri.go:89] found id: ""
	I0318 13:51:01.308921 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.308931 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:01.308939 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:01.309003 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:01.349301 1157708 cri.go:89] found id: ""
	I0318 13:51:01.349334 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.349346 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:01.349354 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:01.349420 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:01.394010 1157708 cri.go:89] found id: ""
	I0318 13:51:01.394039 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.394047 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:01.394053 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:01.394103 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:01.432778 1157708 cri.go:89] found id: ""
	I0318 13:51:01.432804 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.432815 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:01.432823 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:01.432886 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:01.471974 1157708 cri.go:89] found id: ""
	I0318 13:51:01.472002 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.472011 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:01.472022 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:01.472040 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:01.524855 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:01.524893 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:01.540939 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:01.540967 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:01.618318 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:01.618350 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:01.618367 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:01.695717 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:01.695755 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:04.241781 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:04.256276 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:04.256373 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:04.297129 1157708 cri.go:89] found id: ""
	I0318 13:51:04.297158 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.297170 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:04.297179 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:04.297247 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:04.341743 1157708 cri.go:89] found id: ""
	I0318 13:51:04.341774 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.341786 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:04.341793 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:04.341858 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:04.384400 1157708 cri.go:89] found id: ""
	I0318 13:51:04.384434 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.384445 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:04.384453 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:04.384510 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:04.425459 1157708 cri.go:89] found id: ""
	I0318 13:51:04.425487 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.425500 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:04.425510 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:04.425563 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:04.463091 1157708 cri.go:89] found id: ""
	I0318 13:51:04.463125 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.463137 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:04.463145 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:04.463210 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:04.503023 1157708 cri.go:89] found id: ""
	I0318 13:51:04.503057 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.503069 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:04.503077 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:04.503141 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:04.542083 1157708 cri.go:89] found id: ""
	I0318 13:51:04.542116 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.542127 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:04.542136 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:04.542207 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:04.583097 1157708 cri.go:89] found id: ""
	I0318 13:51:04.583128 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.583137 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:04.583146 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:04.583161 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:04.650476 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:04.650518 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:04.706073 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:04.706111 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:04.723595 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:04.723628 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:04.800278 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:04.800301 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:04.800316 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:07.388144 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:07.403636 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:07.403711 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:07.443337 1157708 cri.go:89] found id: ""
	I0318 13:51:07.443365 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.443379 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:07.443386 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:07.443442 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:07.482417 1157708 cri.go:89] found id: ""
	I0318 13:51:07.482453 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.482462 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:07.482469 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:07.482521 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:07.518445 1157708 cri.go:89] found id: ""
	I0318 13:51:07.518474 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.518485 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:07.518493 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:07.518563 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:07.555628 1157708 cri.go:89] found id: ""
	I0318 13:51:07.555661 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.555673 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:07.555681 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:07.555760 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:07.593805 1157708 cri.go:89] found id: ""
	I0318 13:51:07.593842 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.593856 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:07.593873 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:07.593936 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:07.638206 1157708 cri.go:89] found id: ""
	I0318 13:51:07.638234 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.638242 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:07.638249 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:07.638313 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:07.679526 1157708 cri.go:89] found id: ""
	I0318 13:51:07.679561 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.679573 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:07.679581 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:07.679635 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:07.724468 1157708 cri.go:89] found id: ""
	I0318 13:51:07.724494 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.724504 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:07.724516 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:07.724533 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:07.766491 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:07.766522 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:07.823782 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:07.823833 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:07.839316 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:07.839342 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:07.924790 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:07.924821 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:07.924841 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:10.513618 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:10.528711 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:10.528790 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:10.571217 1157708 cri.go:89] found id: ""
	I0318 13:51:10.571254 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.571267 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:10.571275 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:10.571335 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:10.608096 1157708 cri.go:89] found id: ""
	I0318 13:51:10.608129 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.608140 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:10.608149 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:10.608217 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:10.649245 1157708 cri.go:89] found id: ""
	I0318 13:51:10.649274 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.649283 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:10.649290 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:10.649365 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:10.693462 1157708 cri.go:89] found id: ""
	I0318 13:51:10.693495 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.693506 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:10.693515 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:10.693589 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:10.740434 1157708 cri.go:89] found id: ""
	I0318 13:51:10.740464 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.740474 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:10.740480 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:10.740543 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:10.781062 1157708 cri.go:89] found id: ""
	I0318 13:51:10.781099 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.781108 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:10.781114 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:10.781167 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:10.828480 1157708 cri.go:89] found id: ""
	I0318 13:51:10.828513 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.828524 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:10.828532 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:10.828605 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:10.868508 1157708 cri.go:89] found id: ""
	I0318 13:51:10.868535 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.868543 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:10.868553 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:10.868565 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:10.923925 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:10.923961 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:10.939254 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:10.939283 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:11.031307 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:11.031334 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:11.031351 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:11.121563 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:11.121618 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:13.681147 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:13.696705 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:13.696812 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:13.740904 1157708 cri.go:89] found id: ""
	I0318 13:51:13.740937 1157708 logs.go:276] 0 containers: []
	W0318 13:51:13.740949 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:13.740957 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:13.741038 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:13.779625 1157708 cri.go:89] found id: ""
	I0318 13:51:13.779659 1157708 logs.go:276] 0 containers: []
	W0318 13:51:13.779672 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:13.779681 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:13.779762 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:13.822183 1157708 cri.go:89] found id: ""
	I0318 13:51:13.822218 1157708 logs.go:276] 0 containers: []
	W0318 13:51:13.822231 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:13.822239 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:13.822302 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:13.873686 1157708 cri.go:89] found id: ""
	I0318 13:51:13.873728 1157708 logs.go:276] 0 containers: []
	W0318 13:51:13.873741 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:13.873749 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:13.873821 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:13.919772 1157708 cri.go:89] found id: ""
	I0318 13:51:13.919802 1157708 logs.go:276] 0 containers: []
	W0318 13:51:13.919811 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:13.919817 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:13.919874 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:13.958809 1157708 cri.go:89] found id: ""
	I0318 13:51:13.958837 1157708 logs.go:276] 0 containers: []
	W0318 13:51:13.958846 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:13.958852 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:13.958928 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:14.000537 1157708 cri.go:89] found id: ""
	I0318 13:51:14.000568 1157708 logs.go:276] 0 containers: []
	W0318 13:51:14.000580 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:14.000588 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:14.000638 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:14.041234 1157708 cri.go:89] found id: ""
	I0318 13:51:14.041265 1157708 logs.go:276] 0 containers: []
	W0318 13:51:14.041275 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:14.041285 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:14.041299 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:14.085435 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:14.085462 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:14.144336 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:14.144374 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:14.159972 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:14.160000 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:14.242027 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:14.242048 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:14.242061 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:16.821805 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:16.840202 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:16.840272 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:16.898088 1157708 cri.go:89] found id: ""
	I0318 13:51:16.898120 1157708 logs.go:276] 0 containers: []
	W0318 13:51:16.898129 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:16.898135 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:16.898203 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:16.953180 1157708 cri.go:89] found id: ""
	I0318 13:51:16.953209 1157708 logs.go:276] 0 containers: []
	W0318 13:51:16.953221 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:16.953229 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:16.953288 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:17.006995 1157708 cri.go:89] found id: ""
	I0318 13:51:17.007048 1157708 logs.go:276] 0 containers: []
	W0318 13:51:17.007062 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:17.007070 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:17.007136 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:17.049756 1157708 cri.go:89] found id: ""
	I0318 13:51:17.049798 1157708 logs.go:276] 0 containers: []
	W0318 13:51:17.049809 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:17.049817 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:17.049885 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:17.092026 1157708 cri.go:89] found id: ""
	I0318 13:51:17.092055 1157708 logs.go:276] 0 containers: []
	W0318 13:51:17.092066 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:17.092074 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:17.092144 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:17.137722 1157708 cri.go:89] found id: ""
	I0318 13:51:17.137756 1157708 logs.go:276] 0 containers: []
	W0318 13:51:17.137769 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:17.137778 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:17.137875 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:17.180778 1157708 cri.go:89] found id: ""
	I0318 13:51:17.180808 1157708 logs.go:276] 0 containers: []
	W0318 13:51:17.180816 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:17.180822 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:17.180885 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:17.227629 1157708 cri.go:89] found id: ""
	I0318 13:51:17.227664 1157708 logs.go:276] 0 containers: []
	W0318 13:51:17.227675 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:17.227688 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:17.227706 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:17.272559 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:17.272588 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:17.333953 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:17.333994 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:17.349765 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:17.349793 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:17.434436 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:17.434465 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:17.434483 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:20.014314 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:20.031106 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:20.031172 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:20.067727 1157708 cri.go:89] found id: ""
	I0318 13:51:20.067753 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.067765 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:20.067773 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:20.067844 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:20.108455 1157708 cri.go:89] found id: ""
	I0318 13:51:20.108482 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.108491 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:20.108497 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:20.108563 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:20.152257 1157708 cri.go:89] found id: ""
	I0318 13:51:20.152285 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.152310 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:20.152317 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:20.152394 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:20.191480 1157708 cri.go:89] found id: ""
	I0318 13:51:20.191509 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.191520 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:20.191529 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:20.191599 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:20.235677 1157708 cri.go:89] found id: ""
	I0318 13:51:20.235705 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.235716 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:20.235723 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:20.235796 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:20.274794 1157708 cri.go:89] found id: ""
	I0318 13:51:20.274822 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.274833 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:20.274842 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:20.274907 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:20.321987 1157708 cri.go:89] found id: ""
	I0318 13:51:20.322019 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.322031 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:20.322040 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:20.322097 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:20.361292 1157708 cri.go:89] found id: ""
	I0318 13:51:20.361319 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.361328 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:20.361338 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:20.361360 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:20.434481 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:20.434509 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:20.434527 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:20.518203 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:20.518244 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:20.560241 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:20.560271 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:20.615489 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:20.615526 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:23.132509 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:23.146447 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:23.146559 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:23.189576 1157708 cri.go:89] found id: ""
	I0318 13:51:23.189613 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.189625 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:23.189634 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:23.189688 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:23.229700 1157708 cri.go:89] found id: ""
	I0318 13:51:23.229731 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.229740 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:23.229747 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:23.229812 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:23.272713 1157708 cri.go:89] found id: ""
	I0318 13:51:23.272747 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.272759 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:23.272768 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:23.272834 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:23.313988 1157708 cri.go:89] found id: ""
	I0318 13:51:23.314014 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.314022 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:23.314028 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:23.314087 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:23.360195 1157708 cri.go:89] found id: ""
	I0318 13:51:23.360230 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.360243 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:23.360251 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:23.360321 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:23.400657 1157708 cri.go:89] found id: ""
	I0318 13:51:23.400685 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.400694 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:23.400707 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:23.400760 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:23.442841 1157708 cri.go:89] found id: ""
	I0318 13:51:23.442873 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.442893 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:23.442900 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:23.442970 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:23.483467 1157708 cri.go:89] found id: ""
	I0318 13:51:23.483504 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.483516 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:23.483528 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:23.483545 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:23.538581 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:23.538616 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:23.555392 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:23.555421 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:23.634919 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:23.634945 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:23.634970 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:23.718098 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:23.718144 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:26.270369 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:26.287165 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:26.287232 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:26.331773 1157708 cri.go:89] found id: ""
	I0318 13:51:26.331807 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.331832 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:26.331850 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:26.331923 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:26.372067 1157708 cri.go:89] found id: ""
	I0318 13:51:26.372095 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.372102 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:26.372109 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:26.372182 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:26.411883 1157708 cri.go:89] found id: ""
	I0318 13:51:26.411910 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.411919 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:26.411924 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:26.411980 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:26.449087 1157708 cri.go:89] found id: ""
	I0318 13:51:26.449122 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.449131 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:26.449137 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:26.449188 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:26.492126 1157708 cri.go:89] found id: ""
	I0318 13:51:26.492162 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.492174 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:26.492182 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:26.492251 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:26.529621 1157708 cri.go:89] found id: ""
	I0318 13:51:26.529656 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.529668 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:26.529677 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:26.529764 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:26.568853 1157708 cri.go:89] found id: ""
	I0318 13:51:26.568888 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.568899 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:26.568907 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:26.568979 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:26.607882 1157708 cri.go:89] found id: ""
	I0318 13:51:26.607917 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.607929 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:26.607942 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:26.607959 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:26.648736 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:26.648768 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:26.704641 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:26.704684 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:26.720681 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:26.720715 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:26.799577 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:26.799608 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:26.799627 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:29.389391 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:29.404122 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:29.404195 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:29.446761 1157708 cri.go:89] found id: ""
	I0318 13:51:29.446787 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.446796 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:29.446803 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:29.446857 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:29.483974 1157708 cri.go:89] found id: ""
	I0318 13:51:29.484007 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.484020 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:29.484028 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:29.484099 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:29.521894 1157708 cri.go:89] found id: ""
	I0318 13:51:29.521922 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.521931 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:29.521937 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:29.521993 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:29.562918 1157708 cri.go:89] found id: ""
	I0318 13:51:29.562948 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.562957 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:29.562963 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:29.563017 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:29.600372 1157708 cri.go:89] found id: ""
	I0318 13:51:29.600412 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.600424 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:29.600432 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:29.600500 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:29.638902 1157708 cri.go:89] found id: ""
	I0318 13:51:29.638933 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.638945 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:29.638953 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:29.639019 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:29.679041 1157708 cri.go:89] found id: ""
	I0318 13:51:29.679071 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.679079 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:29.679085 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:29.679142 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:29.719168 1157708 cri.go:89] found id: ""
	I0318 13:51:29.719201 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.719213 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:29.719224 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:29.719244 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:29.764050 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:29.764077 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:29.822136 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:29.822174 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:29.839485 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:29.839515 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:29.914984 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:29.915006 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:29.915023 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:32.497388 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:32.512151 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:32.512215 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:32.549566 1157708 cri.go:89] found id: ""
	I0318 13:51:32.549602 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.549614 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:32.549623 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:32.549693 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:32.588516 1157708 cri.go:89] found id: ""
	I0318 13:51:32.588546 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.588555 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:32.588562 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:32.588615 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:32.628425 1157708 cri.go:89] found id: ""
	I0318 13:51:32.628453 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.628462 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:32.628470 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:32.628546 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:32.670851 1157708 cri.go:89] found id: ""
	I0318 13:51:32.670874 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.670888 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:32.670895 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:32.670944 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:32.709614 1157708 cri.go:89] found id: ""
	I0318 13:51:32.709642 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.709656 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:32.709666 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:32.709738 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:32.749774 1157708 cri.go:89] found id: ""
	I0318 13:51:32.749808 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.749819 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:32.749828 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:32.749896 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:32.789502 1157708 cri.go:89] found id: ""
	I0318 13:51:32.789525 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.789534 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:32.789540 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:32.789589 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:32.834926 1157708 cri.go:89] found id: ""
	I0318 13:51:32.834948 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.834956 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:32.834965 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:32.834980 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:32.887365 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:32.887404 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:32.903584 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:32.903610 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:32.978924 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:32.978958 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:32.978988 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:33.055386 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:33.055424 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:35.603881 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:35.618083 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:35.618167 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:35.659760 1157708 cri.go:89] found id: ""
	I0318 13:51:35.659802 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.659814 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:35.659820 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:35.659881 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:35.703521 1157708 cri.go:89] found id: ""
	I0318 13:51:35.703570 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.703582 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:35.703589 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:35.703651 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:35.744411 1157708 cri.go:89] found id: ""
	I0318 13:51:35.744444 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.744455 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:35.744463 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:35.744548 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:35.783704 1157708 cri.go:89] found id: ""
	I0318 13:51:35.783735 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.783746 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:35.783754 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:35.783819 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:35.824000 1157708 cri.go:89] found id: ""
	I0318 13:51:35.824031 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.824042 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:35.824049 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:35.824117 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:35.860260 1157708 cri.go:89] found id: ""
	I0318 13:51:35.860289 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.860299 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:35.860308 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:35.860388 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:35.895154 1157708 cri.go:89] found id: ""
	I0318 13:51:35.895189 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.895201 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:35.895209 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:35.895276 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:35.936916 1157708 cri.go:89] found id: ""
	I0318 13:51:35.936942 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.936951 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:35.936961 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:35.936977 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:35.951715 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:35.951745 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:36.027431 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:36.027457 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:36.027474 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:36.113339 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:36.113386 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:36.160132 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:36.160170 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:38.711710 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:38.726104 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:38.726162 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:38.763251 1157708 cri.go:89] found id: ""
	I0318 13:51:38.763281 1157708 logs.go:276] 0 containers: []
	W0318 13:51:38.763291 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:38.763300 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:38.763364 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:38.802521 1157708 cri.go:89] found id: ""
	I0318 13:51:38.802548 1157708 logs.go:276] 0 containers: []
	W0318 13:51:38.802556 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:38.802562 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:38.802616 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:38.843778 1157708 cri.go:89] found id: ""
	I0318 13:51:38.843817 1157708 logs.go:276] 0 containers: []
	W0318 13:51:38.843831 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:38.843839 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:38.843909 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:38.884966 1157708 cri.go:89] found id: ""
	I0318 13:51:38.885003 1157708 logs.go:276] 0 containers: []
	W0318 13:51:38.885015 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:38.885024 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:38.885090 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:38.925653 1157708 cri.go:89] found id: ""
	I0318 13:51:38.925681 1157708 logs.go:276] 0 containers: []
	W0318 13:51:38.925690 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:38.925696 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:38.925757 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:38.964126 1157708 cri.go:89] found id: ""
	I0318 13:51:38.964156 1157708 logs.go:276] 0 containers: []
	W0318 13:51:38.964169 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:38.964177 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:38.964228 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:39.004864 1157708 cri.go:89] found id: ""
	I0318 13:51:39.004898 1157708 logs.go:276] 0 containers: []
	W0318 13:51:39.004910 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:39.004919 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:39.004991 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:39.041555 1157708 cri.go:89] found id: ""
	I0318 13:51:39.041588 1157708 logs.go:276] 0 containers: []
	W0318 13:51:39.041600 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:39.041611 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:39.041626 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:39.092984 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:39.093019 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:39.110492 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:39.110526 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:39.186785 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:39.186848 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:39.186872 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:39.272847 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:39.272891 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:41.829404 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:41.843407 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:41.843479 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:41.883129 1157708 cri.go:89] found id: ""
	I0318 13:51:41.883164 1157708 logs.go:276] 0 containers: []
	W0318 13:51:41.883175 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:41.883184 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:41.883246 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:41.924083 1157708 cri.go:89] found id: ""
	I0318 13:51:41.924123 1157708 logs.go:276] 0 containers: []
	W0318 13:51:41.924136 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:41.924144 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:41.924209 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:41.963029 1157708 cri.go:89] found id: ""
	I0318 13:51:41.963058 1157708 logs.go:276] 0 containers: []
	W0318 13:51:41.963069 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:41.963084 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:41.963155 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:42.003393 1157708 cri.go:89] found id: ""
	I0318 13:51:42.003430 1157708 logs.go:276] 0 containers: []
	W0318 13:51:42.003442 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:42.003450 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:42.003511 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:42.041938 1157708 cri.go:89] found id: ""
	I0318 13:51:42.041968 1157708 logs.go:276] 0 containers: []
	W0318 13:51:42.041977 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:42.041983 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:42.042044 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:42.079685 1157708 cri.go:89] found id: ""
	I0318 13:51:42.079718 1157708 logs.go:276] 0 containers: []
	W0318 13:51:42.079731 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:42.079740 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:42.079805 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:42.118112 1157708 cri.go:89] found id: ""
	I0318 13:51:42.118144 1157708 logs.go:276] 0 containers: []
	W0318 13:51:42.118156 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:42.118164 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:42.118230 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:42.157287 1157708 cri.go:89] found id: ""
	I0318 13:51:42.157319 1157708 logs.go:276] 0 containers: []
	W0318 13:51:42.157331 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:42.157343 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:42.157360 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:42.213006 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:42.213038 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:42.228452 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:42.228481 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:42.302523 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:42.302545 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:42.302558 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:42.387994 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:42.388062 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:44.934501 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:44.949163 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:44.949245 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:44.991885 1157708 cri.go:89] found id: ""
	I0318 13:51:44.991914 1157708 logs.go:276] 0 containers: []
	W0318 13:51:44.991924 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:44.991931 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:44.992008 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:45.029868 1157708 cri.go:89] found id: ""
	I0318 13:51:45.029904 1157708 logs.go:276] 0 containers: []
	W0318 13:51:45.029915 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:45.029922 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:45.030017 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:45.067755 1157708 cri.go:89] found id: ""
	I0318 13:51:45.067785 1157708 logs.go:276] 0 containers: []
	W0318 13:51:45.067794 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:45.067803 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:45.067857 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:45.106296 1157708 cri.go:89] found id: ""
	I0318 13:51:45.106323 1157708 logs.go:276] 0 containers: []
	W0318 13:51:45.106333 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:45.106339 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:45.106405 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:45.145746 1157708 cri.go:89] found id: ""
	I0318 13:51:45.145784 1157708 logs.go:276] 0 containers: []
	W0318 13:51:45.145797 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:45.145805 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:45.145868 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:45.191960 1157708 cri.go:89] found id: ""
	I0318 13:51:45.191998 1157708 logs.go:276] 0 containers: []
	W0318 13:51:45.192010 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:45.192019 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:45.192089 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:45.231436 1157708 cri.go:89] found id: ""
	I0318 13:51:45.231470 1157708 logs.go:276] 0 containers: []
	W0318 13:51:45.231483 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:45.231491 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:45.231559 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:45.274521 1157708 cri.go:89] found id: ""
	I0318 13:51:45.274554 1157708 logs.go:276] 0 containers: []
	W0318 13:51:45.274565 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:45.274577 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:45.274595 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:45.338539 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:45.338580 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:45.353917 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:45.353947 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:45.447734 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:45.447755 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:45.447768 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:45.530098 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:45.530140 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:48.077992 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:48.092203 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:48.092273 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:48.133136 1157708 cri.go:89] found id: ""
	I0318 13:51:48.133172 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.133183 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:48.133191 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:48.133259 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:48.177727 1157708 cri.go:89] found id: ""
	I0318 13:51:48.177756 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.177768 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:48.177775 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:48.177843 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:48.217574 1157708 cri.go:89] found id: ""
	I0318 13:51:48.217600 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.217608 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:48.217614 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:48.217676 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:48.258900 1157708 cri.go:89] found id: ""
	I0318 13:51:48.258933 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.258947 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:48.258955 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:48.259046 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:48.299527 1157708 cri.go:89] found id: ""
	I0318 13:51:48.299562 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.299573 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:48.299581 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:48.299650 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:48.339692 1157708 cri.go:89] found id: ""
	I0318 13:51:48.339723 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.339732 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:48.339740 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:48.339791 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:48.378737 1157708 cri.go:89] found id: ""
	I0318 13:51:48.378764 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.378773 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:48.378779 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:48.378841 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:48.414593 1157708 cri.go:89] found id: ""
	I0318 13:51:48.414621 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.414629 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:48.414639 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:48.414654 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:48.430232 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:48.430264 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:48.513313 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:48.513335 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:48.513353 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:48.594681 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:48.594721 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:48.638681 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:48.638720 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:51.189510 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:51.204296 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:51.204383 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:51.248285 1157708 cri.go:89] found id: ""
	I0318 13:51:51.248311 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.248331 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:51.248340 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:51.248414 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:51.289022 1157708 cri.go:89] found id: ""
	I0318 13:51:51.289055 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.289068 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:51.289077 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:51.289144 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:51.329367 1157708 cri.go:89] found id: ""
	I0318 13:51:51.329405 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.329414 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:51.329420 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:51.329477 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:51.370909 1157708 cri.go:89] found id: ""
	I0318 13:51:51.370948 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.370960 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:51.370970 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:51.371043 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:51.419447 1157708 cri.go:89] found id: ""
	I0318 13:51:51.419486 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.419498 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:51.419506 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:51.419573 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:51.466302 1157708 cri.go:89] found id: ""
	I0318 13:51:51.466336 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.466348 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:51.466356 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:51.466441 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:51.505593 1157708 cri.go:89] found id: ""
	I0318 13:51:51.505631 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.505644 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:51.505652 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:51.505724 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:51.543815 1157708 cri.go:89] found id: ""
	I0318 13:51:51.543843 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.543852 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:51.543863 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:51.543885 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:51.596271 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:51.596305 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:51.612441 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:51.612477 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:51.690591 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:51.690614 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:51.690631 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:51.771781 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:51.771821 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:54.319626 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:54.334041 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:54.334113 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:54.372090 1157708 cri.go:89] found id: ""
	I0318 13:51:54.372120 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.372132 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:54.372139 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:54.372196 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:54.412513 1157708 cri.go:89] found id: ""
	I0318 13:51:54.412567 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.412580 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:54.412588 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:54.412662 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:54.453143 1157708 cri.go:89] found id: ""
	I0318 13:51:54.453176 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.453188 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:54.453196 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:54.453262 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:54.497908 1157708 cri.go:89] found id: ""
	I0318 13:51:54.497940 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.497949 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:54.497957 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:54.498025 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:54.539044 1157708 cri.go:89] found id: ""
	I0318 13:51:54.539072 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.539081 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:54.539086 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:54.539151 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:54.578916 1157708 cri.go:89] found id: ""
	I0318 13:51:54.578944 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.578951 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:54.578958 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:54.579027 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:54.617339 1157708 cri.go:89] found id: ""
	I0318 13:51:54.617366 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.617375 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:54.617380 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:54.617436 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:54.661288 1157708 cri.go:89] found id: ""
	I0318 13:51:54.661309 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.661318 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:54.661328 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:54.661344 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:54.740710 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:54.740751 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:54.789136 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:54.789176 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:54.844585 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:54.844627 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:54.860304 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:54.860351 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:54.945305 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:57.445800 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:57.459294 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:57.459368 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:57.497411 1157708 cri.go:89] found id: ""
	I0318 13:51:57.497441 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.497449 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:57.497456 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:57.497521 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:57.535629 1157708 cri.go:89] found id: ""
	I0318 13:51:57.535663 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.535675 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:57.535684 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:57.535749 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:57.572980 1157708 cri.go:89] found id: ""
	I0318 13:51:57.573008 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.573017 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:57.573023 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:57.573071 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:57.622949 1157708 cri.go:89] found id: ""
	I0318 13:51:57.622984 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.622997 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:57.623005 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:57.623070 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:57.659877 1157708 cri.go:89] found id: ""
	I0318 13:51:57.659910 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.659921 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:57.659928 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:57.659991 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:57.705399 1157708 cri.go:89] found id: ""
	I0318 13:51:57.705481 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.705495 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:57.705504 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:57.705566 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:57.748035 1157708 cri.go:89] found id: ""
	I0318 13:51:57.748062 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.748073 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:57.748084 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:57.748144 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:57.801942 1157708 cri.go:89] found id: ""
	I0318 13:51:57.801976 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.801987 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:57.801999 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:57.802017 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:57.900157 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:57.900204 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:57.946179 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:57.946219 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:58.000369 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:58.000412 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:58.016179 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:58.016211 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:58.101766 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:00.602151 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:00.617466 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:00.617531 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:00.661294 1157708 cri.go:89] found id: ""
	I0318 13:52:00.661328 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.661336 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:00.661342 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:00.661400 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:00.706227 1157708 cri.go:89] found id: ""
	I0318 13:52:00.706257 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.706267 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:00.706275 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:00.706342 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:00.746482 1157708 cri.go:89] found id: ""
	I0318 13:52:00.746515 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.746528 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:00.746536 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:00.746600 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:00.789242 1157708 cri.go:89] found id: ""
	I0318 13:52:00.789272 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.789281 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:00.789287 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:00.789348 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:00.832463 1157708 cri.go:89] found id: ""
	I0318 13:52:00.832503 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.832514 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:00.832522 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:00.832581 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:00.869790 1157708 cri.go:89] found id: ""
	I0318 13:52:00.869819 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.869830 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:00.869839 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:00.869904 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:00.909656 1157708 cri.go:89] found id: ""
	I0318 13:52:00.909685 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.909693 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:00.909700 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:00.909754 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:00.953818 1157708 cri.go:89] found id: ""
	I0318 13:52:00.953856 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.953868 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:00.953882 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:00.953898 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:01.032822 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:01.032848 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:01.032865 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:01.111701 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:01.111747 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:01.168270 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:01.168300 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:01.220376 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:01.220408 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:03.737354 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:03.756282 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:03.756382 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:03.804716 1157708 cri.go:89] found id: ""
	I0318 13:52:03.804757 1157708 logs.go:276] 0 containers: []
	W0318 13:52:03.804768 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:03.804777 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:03.804838 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:03.864559 1157708 cri.go:89] found id: ""
	I0318 13:52:03.864596 1157708 logs.go:276] 0 containers: []
	W0318 13:52:03.864609 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:03.864617 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:03.864687 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:03.918397 1157708 cri.go:89] found id: ""
	I0318 13:52:03.918425 1157708 logs.go:276] 0 containers: []
	W0318 13:52:03.918433 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:03.918439 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:03.918504 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:03.961729 1157708 cri.go:89] found id: ""
	I0318 13:52:03.961762 1157708 logs.go:276] 0 containers: []
	W0318 13:52:03.961773 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:03.961780 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:03.961856 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:04.006261 1157708 cri.go:89] found id: ""
	I0318 13:52:04.006299 1157708 logs.go:276] 0 containers: []
	W0318 13:52:04.006311 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:04.006319 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:04.006404 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:04.050284 1157708 cri.go:89] found id: ""
	I0318 13:52:04.050313 1157708 logs.go:276] 0 containers: []
	W0318 13:52:04.050321 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:04.050327 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:04.050384 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:04.093789 1157708 cri.go:89] found id: ""
	I0318 13:52:04.093827 1157708 logs.go:276] 0 containers: []
	W0318 13:52:04.093839 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:04.093847 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:04.093916 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:04.135047 1157708 cri.go:89] found id: ""
	I0318 13:52:04.135091 1157708 logs.go:276] 0 containers: []
	W0318 13:52:04.135110 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:04.135124 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:04.135142 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:04.192899 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:04.192937 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:04.209080 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:04.209130 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:04.286388 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:04.286413 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:04.286428 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:04.371836 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:04.371877 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:06.923039 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:06.938743 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:06.938826 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:06.984600 1157708 cri.go:89] found id: ""
	I0318 13:52:06.984634 1157708 logs.go:276] 0 containers: []
	W0318 13:52:06.984646 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:06.984655 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:06.984721 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:07.023849 1157708 cri.go:89] found id: ""
	I0318 13:52:07.023891 1157708 logs.go:276] 0 containers: []
	W0318 13:52:07.023914 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:07.023922 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:07.023984 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:07.071972 1157708 cri.go:89] found id: ""
	I0318 13:52:07.072002 1157708 logs.go:276] 0 containers: []
	W0318 13:52:07.072015 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:07.072022 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:07.072087 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:07.109070 1157708 cri.go:89] found id: ""
	I0318 13:52:07.109105 1157708 logs.go:276] 0 containers: []
	W0318 13:52:07.109118 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:07.109126 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:07.109183 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:07.149879 1157708 cri.go:89] found id: ""
	I0318 13:52:07.149910 1157708 logs.go:276] 0 containers: []
	W0318 13:52:07.149918 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:07.149925 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:07.149990 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:07.195946 1157708 cri.go:89] found id: ""
	I0318 13:52:07.195976 1157708 logs.go:276] 0 containers: []
	W0318 13:52:07.195987 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:07.195995 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:07.196062 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:07.238126 1157708 cri.go:89] found id: ""
	I0318 13:52:07.238152 1157708 logs.go:276] 0 containers: []
	W0318 13:52:07.238162 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:07.238168 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:07.238233 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:07.278218 1157708 cri.go:89] found id: ""
	I0318 13:52:07.278255 1157708 logs.go:276] 0 containers: []
	W0318 13:52:07.278268 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:07.278282 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:07.278300 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:07.294926 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:07.294955 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:07.383431 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:07.383455 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:07.383468 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:07.467306 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:07.467348 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:07.515996 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:07.516028 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:10.071945 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:10.088587 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:10.088654 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:10.130528 1157708 cri.go:89] found id: ""
	I0318 13:52:10.130566 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.130579 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:10.130588 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:10.130663 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:10.173113 1157708 cri.go:89] found id: ""
	I0318 13:52:10.173150 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.173168 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:10.173178 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:10.173243 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:10.218941 1157708 cri.go:89] found id: ""
	I0318 13:52:10.218976 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.218987 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:10.218996 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:10.219068 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:10.262331 1157708 cri.go:89] found id: ""
	I0318 13:52:10.262368 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.262381 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:10.262389 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:10.262460 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:10.303329 1157708 cri.go:89] found id: ""
	I0318 13:52:10.303363 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.303378 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:10.303386 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:10.303457 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:10.344458 1157708 cri.go:89] found id: ""
	I0318 13:52:10.344486 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.344497 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:10.344505 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:10.344567 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:10.386753 1157708 cri.go:89] found id: ""
	I0318 13:52:10.386786 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.386797 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:10.386806 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:10.386876 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:10.425922 1157708 cri.go:89] found id: ""
	I0318 13:52:10.425954 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.425965 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:10.425978 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:10.426000 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:10.441134 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:10.441168 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:10.514865 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:10.514899 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:10.514916 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:10.592061 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:10.592105 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:10.642900 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:10.642935 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:13.199176 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:13.215155 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:13.215232 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:13.256107 1157708 cri.go:89] found id: ""
	I0318 13:52:13.256139 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.256151 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:13.256160 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:13.256231 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:13.296562 1157708 cri.go:89] found id: ""
	I0318 13:52:13.296597 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.296608 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:13.296615 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:13.296667 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:13.336633 1157708 cri.go:89] found id: ""
	I0318 13:52:13.336662 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.336672 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:13.336678 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:13.336737 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:13.382597 1157708 cri.go:89] found id: ""
	I0318 13:52:13.382639 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.382654 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:13.382663 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:13.382733 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:13.430257 1157708 cri.go:89] found id: ""
	I0318 13:52:13.430292 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.430304 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:13.430312 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:13.430373 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:13.466854 1157708 cri.go:89] found id: ""
	I0318 13:52:13.466881 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.466889 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:13.466896 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:13.466945 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:13.510297 1157708 cri.go:89] found id: ""
	I0318 13:52:13.510333 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.510344 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:13.510352 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:13.510420 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:13.551476 1157708 cri.go:89] found id: ""
	I0318 13:52:13.551508 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.551517 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:13.551528 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:13.551542 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:13.634561 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:13.634585 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:13.634598 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:13.720088 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:13.720129 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:13.760621 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:13.760659 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:13.817311 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:13.817350 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:16.334094 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:16.349779 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:16.349866 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:16.394131 1157708 cri.go:89] found id: ""
	I0318 13:52:16.394157 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.394167 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:16.394175 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:16.394239 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:16.438185 1157708 cri.go:89] found id: ""
	I0318 13:52:16.438232 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.438245 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:16.438264 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:16.438335 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:16.476872 1157708 cri.go:89] found id: ""
	I0318 13:52:16.476920 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.476932 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:16.476939 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:16.477007 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:16.518226 1157708 cri.go:89] found id: ""
	I0318 13:52:16.518253 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.518262 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:16.518269 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:16.518327 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:16.559119 1157708 cri.go:89] found id: ""
	I0318 13:52:16.559160 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.559174 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:16.559182 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:16.559260 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:16.600050 1157708 cri.go:89] found id: ""
	I0318 13:52:16.600079 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.600088 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:16.600094 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:16.600160 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:16.640621 1157708 cri.go:89] found id: ""
	I0318 13:52:16.640649 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.640660 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:16.640668 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:16.640733 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:16.680541 1157708 cri.go:89] found id: ""
	I0318 13:52:16.680571 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.680580 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:16.680590 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:16.680602 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:16.766378 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:16.766415 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:16.811846 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:16.811883 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:16.871940 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:16.871981 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:16.887494 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:16.887521 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:16.961924 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:19.462316 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:19.478819 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:19.478885 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:19.523280 1157708 cri.go:89] found id: ""
	I0318 13:52:19.523314 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.523334 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:19.523342 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:19.523417 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:19.560675 1157708 cri.go:89] found id: ""
	I0318 13:52:19.560708 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.560717 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:19.560725 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:19.560790 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:19.598739 1157708 cri.go:89] found id: ""
	I0318 13:52:19.598766 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.598773 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:19.598781 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:19.598846 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:19.639928 1157708 cri.go:89] found id: ""
	I0318 13:52:19.639960 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.639969 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:19.639975 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:19.640030 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:19.686084 1157708 cri.go:89] found id: ""
	I0318 13:52:19.686134 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.686153 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:19.686160 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:19.686231 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:19.725449 1157708 cri.go:89] found id: ""
	I0318 13:52:19.725481 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.725491 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:19.725497 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:19.725559 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:19.763855 1157708 cri.go:89] found id: ""
	I0318 13:52:19.763886 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.763897 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:19.763905 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:19.763976 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:19.805783 1157708 cri.go:89] found id: ""
	I0318 13:52:19.805813 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.805824 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:19.805836 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:19.805852 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:19.883873 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:19.883914 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:19.926368 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:19.926406 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:19.981137 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:19.981181 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:19.996242 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:19.996269 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:20.077880 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:22.578045 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:22.594170 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:22.594247 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:22.637241 1157708 cri.go:89] found id: ""
	I0318 13:52:22.637276 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.637289 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:22.637298 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:22.637363 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:22.679877 1157708 cri.go:89] found id: ""
	I0318 13:52:22.679904 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.679912 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:22.679918 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:22.679981 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:22.721865 1157708 cri.go:89] found id: ""
	I0318 13:52:22.721890 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.721903 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:22.721912 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:22.721982 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:22.763208 1157708 cri.go:89] found id: ""
	I0318 13:52:22.763242 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.763255 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:22.763264 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:22.763329 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:22.802038 1157708 cri.go:89] found id: ""
	I0318 13:52:22.802071 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.802081 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:22.802089 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:22.802170 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:22.841206 1157708 cri.go:89] found id: ""
	I0318 13:52:22.841242 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.841254 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:22.841263 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:22.841328 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:22.885159 1157708 cri.go:89] found id: ""
	I0318 13:52:22.885197 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.885209 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:22.885218 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:22.885289 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:22.925346 1157708 cri.go:89] found id: ""
	I0318 13:52:22.925373 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.925382 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:22.925391 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:22.925407 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:23.006158 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:23.006193 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:23.053932 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:23.053961 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:23.107728 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:23.107768 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:23.125708 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:23.125740 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:23.202609 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:25.703096 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:25.718617 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:25.718689 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:25.756504 1157708 cri.go:89] found id: ""
	I0318 13:52:25.756530 1157708 logs.go:276] 0 containers: []
	W0318 13:52:25.756538 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:25.756544 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:25.756608 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:25.795103 1157708 cri.go:89] found id: ""
	I0318 13:52:25.795140 1157708 logs.go:276] 0 containers: []
	W0318 13:52:25.795152 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:25.795160 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:25.795240 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:25.839908 1157708 cri.go:89] found id: ""
	I0318 13:52:25.839945 1157708 logs.go:276] 0 containers: []
	W0318 13:52:25.839957 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:25.839971 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:25.840038 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:25.881677 1157708 cri.go:89] found id: ""
	I0318 13:52:25.881711 1157708 logs.go:276] 0 containers: []
	W0318 13:52:25.881723 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:25.881732 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:25.881802 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:25.923356 1157708 cri.go:89] found id: ""
	I0318 13:52:25.923386 1157708 logs.go:276] 0 containers: []
	W0318 13:52:25.923397 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:25.923410 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:25.923469 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:25.961661 1157708 cri.go:89] found id: ""
	I0318 13:52:25.961693 1157708 logs.go:276] 0 containers: []
	W0318 13:52:25.961705 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:25.961713 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:25.961785 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:26.003198 1157708 cri.go:89] found id: ""
	I0318 13:52:26.003236 1157708 logs.go:276] 0 containers: []
	W0318 13:52:26.003248 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:26.003256 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:26.003319 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:26.041436 1157708 cri.go:89] found id: ""
	I0318 13:52:26.041471 1157708 logs.go:276] 0 containers: []
	W0318 13:52:26.041483 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:26.041496 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:26.041515 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:26.056679 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:26.056716 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:26.143900 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:26.143926 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:26.143946 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:26.226929 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:26.226964 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:26.288519 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:26.288560 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:28.846205 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:28.861117 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:28.861190 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:28.906990 1157708 cri.go:89] found id: ""
	I0318 13:52:28.907022 1157708 logs.go:276] 0 containers: []
	W0318 13:52:28.907030 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:28.907036 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:28.907099 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:28.946271 1157708 cri.go:89] found id: ""
	I0318 13:52:28.946309 1157708 logs.go:276] 0 containers: []
	W0318 13:52:28.946322 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:28.946332 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:28.946403 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:28.990158 1157708 cri.go:89] found id: ""
	I0318 13:52:28.990185 1157708 logs.go:276] 0 containers: []
	W0318 13:52:28.990193 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:28.990199 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:28.990251 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:29.035089 1157708 cri.go:89] found id: ""
	I0318 13:52:29.035123 1157708 logs.go:276] 0 containers: []
	W0318 13:52:29.035134 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:29.035143 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:29.035209 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:29.076991 1157708 cri.go:89] found id: ""
	I0318 13:52:29.077022 1157708 logs.go:276] 0 containers: []
	W0318 13:52:29.077033 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:29.077041 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:29.077104 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:29.117106 1157708 cri.go:89] found id: ""
	I0318 13:52:29.117134 1157708 logs.go:276] 0 containers: []
	W0318 13:52:29.117150 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:29.117157 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:29.117209 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:29.159675 1157708 cri.go:89] found id: ""
	I0318 13:52:29.159704 1157708 logs.go:276] 0 containers: []
	W0318 13:52:29.159714 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:29.159722 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:29.159787 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:29.202130 1157708 cri.go:89] found id: ""
	I0318 13:52:29.202157 1157708 logs.go:276] 0 containers: []
	W0318 13:52:29.202166 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:29.202176 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:29.202189 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:29.258343 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:29.258390 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:29.275314 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:29.275360 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:29.359842 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:29.359989 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:29.360036 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:29.446021 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:29.446072 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:31.990431 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:32.007443 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:32.007508 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:32.051028 1157708 cri.go:89] found id: ""
	I0318 13:52:32.051061 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.051070 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:32.051076 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:32.051144 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:32.092914 1157708 cri.go:89] found id: ""
	I0318 13:52:32.092950 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.092962 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:32.092972 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:32.093045 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:32.154257 1157708 cri.go:89] found id: ""
	I0318 13:52:32.154291 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.154302 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:32.154309 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:32.154375 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:32.200185 1157708 cri.go:89] found id: ""
	I0318 13:52:32.200224 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.200236 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:32.200244 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:32.200309 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:32.248927 1157708 cri.go:89] found id: ""
	I0318 13:52:32.248961 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.248974 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:32.248982 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:32.249051 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:32.289829 1157708 cri.go:89] found id: ""
	I0318 13:52:32.289861 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.289870 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:32.289876 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:32.289934 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:32.334346 1157708 cri.go:89] found id: ""
	I0318 13:52:32.334379 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.334387 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:32.334393 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:32.334457 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:32.378718 1157708 cri.go:89] found id: ""
	I0318 13:52:32.378761 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.378770 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:32.378780 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:32.378795 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:32.434626 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:32.434667 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:32.451366 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:32.451402 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:32.532868 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:32.532907 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:32.532924 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:32.617556 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:32.617597 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:35.165067 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:35.181325 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:35.181404 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:35.220570 1157708 cri.go:89] found id: ""
	I0318 13:52:35.220601 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.220612 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:35.220619 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:35.220684 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:35.263798 1157708 cri.go:89] found id: ""
	I0318 13:52:35.263830 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.263841 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:35.263848 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:35.263915 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:35.309447 1157708 cri.go:89] found id: ""
	I0318 13:52:35.309477 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.309489 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:35.309497 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:35.309567 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:35.353444 1157708 cri.go:89] found id: ""
	I0318 13:52:35.353472 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.353484 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:35.353493 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:35.353556 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:35.394563 1157708 cri.go:89] found id: ""
	I0318 13:52:35.394591 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.394599 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:35.394604 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:35.394662 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:35.433866 1157708 cri.go:89] found id: ""
	I0318 13:52:35.433899 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.433908 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:35.433915 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:35.433970 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:35.482769 1157708 cri.go:89] found id: ""
	I0318 13:52:35.482808 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.482820 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:35.482829 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:35.482899 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:35.521465 1157708 cri.go:89] found id: ""
	I0318 13:52:35.521498 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.521509 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:35.521520 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:35.521534 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:35.577759 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:35.577799 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:35.593052 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:35.593084 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:35.672751 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:35.672773 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:35.672787 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:35.752118 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:35.752171 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:38.296677 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:38.312261 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:38.312365 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:38.350328 1157708 cri.go:89] found id: ""
	I0318 13:52:38.350362 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.350374 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:38.350382 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:38.350457 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:38.389891 1157708 cri.go:89] found id: ""
	I0318 13:52:38.389927 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.389939 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:38.389947 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:38.390005 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:38.430268 1157708 cri.go:89] found id: ""
	I0318 13:52:38.430296 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.430305 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:38.430311 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:38.430365 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:38.470830 1157708 cri.go:89] found id: ""
	I0318 13:52:38.470859 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.470873 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:38.470880 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:38.470945 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:38.510501 1157708 cri.go:89] found id: ""
	I0318 13:52:38.510538 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.510552 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:38.510560 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:38.510618 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:38.594899 1157708 cri.go:89] found id: ""
	I0318 13:52:38.594926 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.594935 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:38.594942 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:38.595021 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:38.649095 1157708 cri.go:89] found id: ""
	I0318 13:52:38.649121 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.649129 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:38.649136 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:38.649192 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:38.695263 1157708 cri.go:89] found id: ""
	I0318 13:52:38.695295 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.695307 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:38.695320 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:38.695336 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:38.780624 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:38.780666 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:38.825294 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:38.825335 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:38.877548 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:38.877596 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:38.893289 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:38.893319 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:38.971752 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:41.472865 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:41.487371 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:41.487484 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:41.524691 1157708 cri.go:89] found id: ""
	I0318 13:52:41.524724 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.524737 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:41.524746 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:41.524812 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:41.564094 1157708 cri.go:89] found id: ""
	I0318 13:52:41.564125 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.564137 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:41.564145 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:41.564210 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:41.600019 1157708 cri.go:89] found id: ""
	I0318 13:52:41.600047 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.600058 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:41.600064 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:41.600142 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:41.638320 1157708 cri.go:89] found id: ""
	I0318 13:52:41.638350 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.638363 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:41.638372 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:41.638438 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:41.680763 1157708 cri.go:89] found id: ""
	I0318 13:52:41.680798 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.680810 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:41.680818 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:41.680894 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:41.720645 1157708 cri.go:89] found id: ""
	I0318 13:52:41.720674 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.720683 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:41.720690 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:41.720741 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:41.759121 1157708 cri.go:89] found id: ""
	I0318 13:52:41.759151 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.759185 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:41.759195 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:41.759264 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:41.797006 1157708 cri.go:89] found id: ""
	I0318 13:52:41.797034 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.797043 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:41.797053 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:41.797070 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:41.853315 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:41.853353 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:41.869920 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:41.869952 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:41.947187 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:41.947219 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:41.947235 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:42.025475 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:42.025515 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:44.574724 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:44.598990 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:44.599068 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:44.649051 1157708 cri.go:89] found id: ""
	I0318 13:52:44.649137 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.649168 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:44.649180 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:44.649254 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:44.686423 1157708 cri.go:89] found id: ""
	I0318 13:52:44.686459 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.686468 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:44.686473 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:44.686536 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:44.726534 1157708 cri.go:89] found id: ""
	I0318 13:52:44.726564 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.726575 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:44.726583 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:44.726653 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:44.771190 1157708 cri.go:89] found id: ""
	I0318 13:52:44.771220 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.771232 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:44.771240 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:44.771311 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:44.811577 1157708 cri.go:89] found id: ""
	I0318 13:52:44.811602 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.811611 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:44.811618 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:44.811677 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:44.850717 1157708 cri.go:89] found id: ""
	I0318 13:52:44.850744 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.850756 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:44.850765 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:44.850824 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:44.890294 1157708 cri.go:89] found id: ""
	I0318 13:52:44.890321 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.890330 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:44.890344 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:44.890401 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:44.930690 1157708 cri.go:89] found id: ""
	I0318 13:52:44.930720 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.930730 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:44.930741 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:44.930757 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:44.946509 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:44.946544 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:45.029748 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:45.029777 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:45.029795 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:45.111348 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:45.111392 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:45.165156 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:45.165193 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:47.720701 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:47.734457 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:47.734520 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:47.771273 1157708 cri.go:89] found id: ""
	I0318 13:52:47.771304 1157708 logs.go:276] 0 containers: []
	W0318 13:52:47.771313 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:47.771319 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:47.771370 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:47.813779 1157708 cri.go:89] found id: ""
	I0318 13:52:47.813806 1157708 logs.go:276] 0 containers: []
	W0318 13:52:47.813816 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:47.813824 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:47.813892 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:47.855547 1157708 cri.go:89] found id: ""
	I0318 13:52:47.855576 1157708 logs.go:276] 0 containers: []
	W0318 13:52:47.855584 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:47.855590 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:47.855640 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:47.892651 1157708 cri.go:89] found id: ""
	I0318 13:52:47.892684 1157708 logs.go:276] 0 containers: []
	W0318 13:52:47.892692 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:47.892697 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:47.892752 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:47.935457 1157708 cri.go:89] found id: ""
	I0318 13:52:47.935488 1157708 logs.go:276] 0 containers: []
	W0318 13:52:47.935498 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:47.935505 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:47.935567 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:47.969335 1157708 cri.go:89] found id: ""
	I0318 13:52:47.969361 1157708 logs.go:276] 0 containers: []
	W0318 13:52:47.969370 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:47.969377 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:47.969441 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:48.007305 1157708 cri.go:89] found id: ""
	I0318 13:52:48.007339 1157708 logs.go:276] 0 containers: []
	W0318 13:52:48.007349 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:48.007355 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:48.007416 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:48.050230 1157708 cri.go:89] found id: ""
	I0318 13:52:48.050264 1157708 logs.go:276] 0 containers: []
	W0318 13:52:48.050276 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:48.050289 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:48.050304 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:48.106946 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:48.106993 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:48.123805 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:48.123837 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:48.201881 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:48.201907 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:48.201920 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:48.281533 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:48.281577 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:50.829561 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:50.847462 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:50.847555 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:50.889731 1157708 cri.go:89] found id: ""
	I0318 13:52:50.889759 1157708 logs.go:276] 0 containers: []
	W0318 13:52:50.889768 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:50.889774 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:50.889831 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:50.928176 1157708 cri.go:89] found id: ""
	I0318 13:52:50.928210 1157708 logs.go:276] 0 containers: []
	W0318 13:52:50.928222 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:50.928231 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:50.928294 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:50.965737 1157708 cri.go:89] found id: ""
	I0318 13:52:50.965772 1157708 logs.go:276] 0 containers: []
	W0318 13:52:50.965786 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:50.965794 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:50.965866 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:51.008038 1157708 cri.go:89] found id: ""
	I0318 13:52:51.008072 1157708 logs.go:276] 0 containers: []
	W0318 13:52:51.008081 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:51.008087 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:51.008159 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:51.050310 1157708 cri.go:89] found id: ""
	I0318 13:52:51.050340 1157708 logs.go:276] 0 containers: []
	W0318 13:52:51.050355 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:51.050363 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:51.050431 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:51.090514 1157708 cri.go:89] found id: ""
	I0318 13:52:51.090541 1157708 logs.go:276] 0 containers: []
	W0318 13:52:51.090550 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:51.090556 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:51.090608 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:51.131278 1157708 cri.go:89] found id: ""
	I0318 13:52:51.131305 1157708 logs.go:276] 0 containers: []
	W0318 13:52:51.131313 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:51.131320 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:51.131381 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:51.173370 1157708 cri.go:89] found id: ""
	I0318 13:52:51.173400 1157708 logs.go:276] 0 containers: []
	W0318 13:52:51.173411 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:51.173437 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:51.173464 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:51.260155 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:51.260204 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:51.309963 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:51.309998 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:51.367838 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:51.367889 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:51.382542 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:51.382570 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:51.459258 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:53.960212 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:53.978939 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:53.979004 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:54.030003 1157708 cri.go:89] found id: ""
	I0318 13:52:54.030038 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.030052 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:54.030060 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:54.030134 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:54.073487 1157708 cri.go:89] found id: ""
	I0318 13:52:54.073523 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.073535 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:54.073543 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:54.073611 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:54.115982 1157708 cri.go:89] found id: ""
	I0318 13:52:54.116010 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.116022 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:54.116029 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:54.116099 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:54.158320 1157708 cri.go:89] found id: ""
	I0318 13:52:54.158348 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.158359 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:54.158366 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:54.158433 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:54.198911 1157708 cri.go:89] found id: ""
	I0318 13:52:54.198939 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.198948 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:54.198955 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:54.199010 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:54.240628 1157708 cri.go:89] found id: ""
	I0318 13:52:54.240659 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.240671 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:54.240679 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:54.240750 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:54.279377 1157708 cri.go:89] found id: ""
	I0318 13:52:54.279409 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.279418 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:54.279424 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:54.279493 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:54.324160 1157708 cri.go:89] found id: ""
	I0318 13:52:54.324192 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.324205 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:54.324218 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:54.324237 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:54.371487 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:54.371527 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:54.423487 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:54.423526 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:54.438773 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:54.438800 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:54.518788 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:54.518810 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:54.518825 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:57.103590 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:57.118866 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:57.118932 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:57.159354 1157708 cri.go:89] found id: ""
	I0318 13:52:57.159383 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.159393 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:57.159399 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:57.159458 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:57.201114 1157708 cri.go:89] found id: ""
	I0318 13:52:57.201148 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.201159 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:57.201167 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:57.201233 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:57.242172 1157708 cri.go:89] found id: ""
	I0318 13:52:57.242207 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.242217 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:57.242224 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:57.242287 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:57.282578 1157708 cri.go:89] found id: ""
	I0318 13:52:57.282617 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.282629 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:57.282637 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:57.282706 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:57.323682 1157708 cri.go:89] found id: ""
	I0318 13:52:57.323707 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.323715 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:57.323721 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:57.323771 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:57.364946 1157708 cri.go:89] found id: ""
	I0318 13:52:57.364980 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.364991 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:57.365003 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:57.365076 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:57.407466 1157708 cri.go:89] found id: ""
	I0318 13:52:57.407495 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.407505 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:57.407511 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:57.407568 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:57.454663 1157708 cri.go:89] found id: ""
	I0318 13:52:57.454692 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.454701 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:57.454710 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:57.454722 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:57.509591 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:57.509633 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:57.525125 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:57.525155 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:57.602819 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:57.602845 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:57.602863 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:57.689001 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:57.689045 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:00.234252 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:00.249526 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:00.249615 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:00.290131 1157708 cri.go:89] found id: ""
	I0318 13:53:00.290160 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.290171 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:00.290178 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:00.290230 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:00.337794 1157708 cri.go:89] found id: ""
	I0318 13:53:00.337828 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.337840 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:00.337848 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:00.337907 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:00.378188 1157708 cri.go:89] found id: ""
	I0318 13:53:00.378224 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.378236 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:00.378244 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:00.378313 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:00.418940 1157708 cri.go:89] found id: ""
	I0318 13:53:00.418972 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.418981 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:00.418987 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:00.419039 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:00.461471 1157708 cri.go:89] found id: ""
	I0318 13:53:00.461502 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.461511 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:00.461518 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:00.461572 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:00.498781 1157708 cri.go:89] found id: ""
	I0318 13:53:00.498812 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.498821 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:00.498827 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:00.498885 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:00.540359 1157708 cri.go:89] found id: ""
	I0318 13:53:00.540395 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.540407 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:00.540414 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:00.540480 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:00.583597 1157708 cri.go:89] found id: ""
	I0318 13:53:00.583628 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.583636 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:00.583648 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:00.583666 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:00.639498 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:00.639534 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:00.655764 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:00.655792 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:00.742351 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:00.742386 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:00.742400 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:00.825250 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:00.825298 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:03.373938 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:03.389723 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:03.389796 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:03.429675 1157708 cri.go:89] found id: ""
	I0318 13:53:03.429710 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.429723 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:03.429732 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:03.429803 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:03.468732 1157708 cri.go:89] found id: ""
	I0318 13:53:03.468768 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.468780 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:03.468788 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:03.468841 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:03.510562 1157708 cri.go:89] found id: ""
	I0318 13:53:03.510589 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.510598 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:03.510604 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:03.510667 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:03.549842 1157708 cri.go:89] found id: ""
	I0318 13:53:03.549896 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.549909 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:03.549918 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:03.549984 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:03.590036 1157708 cri.go:89] found id: ""
	I0318 13:53:03.590076 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.590086 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:03.590093 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:03.590146 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:03.635546 1157708 cri.go:89] found id: ""
	I0318 13:53:03.635573 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.635585 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:03.635593 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:03.635660 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:03.678634 1157708 cri.go:89] found id: ""
	I0318 13:53:03.678663 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.678671 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:03.678677 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:03.678735 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:03.719666 1157708 cri.go:89] found id: ""
	I0318 13:53:03.719698 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.719709 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:03.719721 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:03.719736 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:03.762353 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:03.762388 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:03.817484 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:03.817521 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:03.832820 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:03.832850 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:03.913094 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:03.913115 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:03.913130 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:06.502556 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:06.517682 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:06.517745 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:06.562167 1157708 cri.go:89] found id: ""
	I0318 13:53:06.562202 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.562215 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:06.562223 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:06.562294 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:06.601910 1157708 cri.go:89] found id: ""
	I0318 13:53:06.601945 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.601954 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:06.601962 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:06.602022 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:06.640652 1157708 cri.go:89] found id: ""
	I0318 13:53:06.640683 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.640694 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:06.640702 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:06.640778 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:06.686781 1157708 cri.go:89] found id: ""
	I0318 13:53:06.686809 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.686818 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:06.686824 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:06.686893 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:06.727080 1157708 cri.go:89] found id: ""
	I0318 13:53:06.727107 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.727115 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:06.727121 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:06.727173 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:06.764550 1157708 cri.go:89] found id: ""
	I0318 13:53:06.764575 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.764583 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:06.764589 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:06.764641 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:06.803978 1157708 cri.go:89] found id: ""
	I0318 13:53:06.804009 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.804019 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:06.804027 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:06.804091 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:06.843983 1157708 cri.go:89] found id: ""
	I0318 13:53:06.844016 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.844027 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:06.844040 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:06.844058 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:06.905389 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:06.905424 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:06.956888 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:06.956924 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:06.973551 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:06.973594 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:07.045945 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:07.045973 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:07.045991 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:09.635227 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:09.650166 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:09.650246 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:09.695126 1157708 cri.go:89] found id: ""
	I0318 13:53:09.695153 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.695162 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:09.695168 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:09.695221 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:09.740475 1157708 cri.go:89] found id: ""
	I0318 13:53:09.740507 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.740516 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:09.740522 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:09.740591 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:09.779078 1157708 cri.go:89] found id: ""
	I0318 13:53:09.779108 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.779119 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:09.779128 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:09.779186 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:09.821252 1157708 cri.go:89] found id: ""
	I0318 13:53:09.821285 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.821297 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:09.821306 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:09.821376 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:09.860500 1157708 cri.go:89] found id: ""
	I0318 13:53:09.860537 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.860550 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:09.860558 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:09.860622 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:09.903447 1157708 cri.go:89] found id: ""
	I0318 13:53:09.903475 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.903486 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:09.903494 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:09.903550 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:09.941620 1157708 cri.go:89] found id: ""
	I0318 13:53:09.941648 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.941661 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:09.941679 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:09.941731 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:09.980066 1157708 cri.go:89] found id: ""
	I0318 13:53:09.980101 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.980113 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:09.980125 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:09.980142 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:10.036960 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:10.037000 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:10.051329 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:10.051361 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:10.130896 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:10.130925 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:10.130942 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:10.212205 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:10.212236 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:12.754623 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:12.769956 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:12.770034 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:12.809006 1157708 cri.go:89] found id: ""
	I0318 13:53:12.809032 1157708 logs.go:276] 0 containers: []
	W0318 13:53:12.809043 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:12.809051 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:12.809113 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:12.852354 1157708 cri.go:89] found id: ""
	I0318 13:53:12.852390 1157708 logs.go:276] 0 containers: []
	W0318 13:53:12.852400 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:12.852407 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:12.852476 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:12.891891 1157708 cri.go:89] found id: ""
	I0318 13:53:12.891923 1157708 logs.go:276] 0 containers: []
	W0318 13:53:12.891933 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:12.891940 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:12.891991 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:12.931753 1157708 cri.go:89] found id: ""
	I0318 13:53:12.931785 1157708 logs.go:276] 0 containers: []
	W0318 13:53:12.931795 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:12.931803 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:12.931872 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:12.971622 1157708 cri.go:89] found id: ""
	I0318 13:53:12.971653 1157708 logs.go:276] 0 containers: []
	W0318 13:53:12.971662 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:12.971669 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:12.971731 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:13.009893 1157708 cri.go:89] found id: ""
	I0318 13:53:13.009930 1157708 logs.go:276] 0 containers: []
	W0318 13:53:13.009943 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:13.009952 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:13.010021 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:13.045361 1157708 cri.go:89] found id: ""
	I0318 13:53:13.045396 1157708 logs.go:276] 0 containers: []
	W0318 13:53:13.045404 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:13.045411 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:13.045474 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:13.087659 1157708 cri.go:89] found id: ""
	I0318 13:53:13.087686 1157708 logs.go:276] 0 containers: []
	W0318 13:53:13.087696 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:13.087706 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:13.087721 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:13.129979 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:13.130014 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:13.183802 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:13.183836 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:13.198808 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:13.198840 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:13.272736 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:13.272764 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:13.272783 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:15.870196 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:15.887480 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:15.887551 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:15.923871 1157708 cri.go:89] found id: ""
	I0318 13:53:15.923899 1157708 logs.go:276] 0 containers: []
	W0318 13:53:15.923907 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:15.923913 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:15.923976 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:15.963870 1157708 cri.go:89] found id: ""
	I0318 13:53:15.963906 1157708 logs.go:276] 0 containers: []
	W0318 13:53:15.963917 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:15.963925 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:15.963997 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:16.009781 1157708 cri.go:89] found id: ""
	I0318 13:53:16.009815 1157708 logs.go:276] 0 containers: []
	W0318 13:53:16.009828 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:16.009837 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:16.009905 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:16.047673 1157708 cri.go:89] found id: ""
	I0318 13:53:16.047708 1157708 logs.go:276] 0 containers: []
	W0318 13:53:16.047718 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:16.047727 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:16.047793 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:16.089419 1157708 cri.go:89] found id: ""
	I0318 13:53:16.089447 1157708 logs.go:276] 0 containers: []
	W0318 13:53:16.089455 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:16.089461 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:16.089511 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:16.133563 1157708 cri.go:89] found id: ""
	I0318 13:53:16.133594 1157708 logs.go:276] 0 containers: []
	W0318 13:53:16.133604 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:16.133611 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:16.133685 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:16.174369 1157708 cri.go:89] found id: ""
	I0318 13:53:16.174404 1157708 logs.go:276] 0 containers: []
	W0318 13:53:16.174415 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:16.174423 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:16.174491 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:16.219334 1157708 cri.go:89] found id: ""
	I0318 13:53:16.219360 1157708 logs.go:276] 0 containers: []
	W0318 13:53:16.219367 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:16.219376 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:16.219389 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:16.273468 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:16.273507 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:16.288584 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:16.288612 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:16.366575 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:16.366602 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:16.366620 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:16.451031 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:16.451071 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:18.997536 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:19.014995 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:19.015065 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:19.064686 1157708 cri.go:89] found id: ""
	I0318 13:53:19.064719 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.064731 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:19.064739 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:19.064793 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:19.110598 1157708 cri.go:89] found id: ""
	I0318 13:53:19.110629 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.110640 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:19.110648 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:19.110739 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:19.156628 1157708 cri.go:89] found id: ""
	I0318 13:53:19.156652 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.156660 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:19.156668 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:19.156730 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:19.205993 1157708 cri.go:89] found id: ""
	I0318 13:53:19.206029 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.206042 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:19.206049 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:19.206118 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:19.253902 1157708 cri.go:89] found id: ""
	I0318 13:53:19.253935 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.253952 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:19.253960 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:19.254036 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:19.296550 1157708 cri.go:89] found id: ""
	I0318 13:53:19.296583 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.296594 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:19.296602 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:19.296667 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:19.337316 1157708 cri.go:89] found id: ""
	I0318 13:53:19.337349 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.337360 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:19.337369 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:19.337446 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:19.381503 1157708 cri.go:89] found id: ""
	I0318 13:53:19.381546 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.381565 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:19.381579 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:19.381603 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:19.461665 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:19.461691 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:19.461707 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:19.548291 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:19.548348 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:19.591296 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:19.591335 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:19.648740 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:19.648776 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:22.164970 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:22.180740 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:22.180806 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:22.223787 1157708 cri.go:89] found id: ""
	I0318 13:53:22.223820 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.223833 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:22.223840 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:22.223908 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:22.266751 1157708 cri.go:89] found id: ""
	I0318 13:53:22.266785 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.266797 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:22.266805 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:22.266876 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:22.311669 1157708 cri.go:89] found id: ""
	I0318 13:53:22.311701 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.311712 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:22.311721 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:22.311816 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:22.354687 1157708 cri.go:89] found id: ""
	I0318 13:53:22.354722 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.354733 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:22.354742 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:22.354807 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:22.395741 1157708 cri.go:89] found id: ""
	I0318 13:53:22.395767 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.395776 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:22.395782 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:22.395832 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:22.434506 1157708 cri.go:89] found id: ""
	I0318 13:53:22.434539 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.434550 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:22.434559 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:22.434612 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:22.474583 1157708 cri.go:89] found id: ""
	I0318 13:53:22.474612 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.474621 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:22.474627 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:22.474690 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:22.521898 1157708 cri.go:89] found id: ""
	I0318 13:53:22.521943 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.521955 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:22.521968 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:22.521989 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:22.537679 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:22.537711 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:22.619575 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:22.619605 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:22.619621 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:22.704206 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:22.704265 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:22.753470 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:22.753502 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:25.311578 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:25.329917 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:25.329979 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:25.373784 1157708 cri.go:89] found id: ""
	I0318 13:53:25.373818 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.373826 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:25.373833 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:25.373901 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:25.422490 1157708 cri.go:89] found id: ""
	I0318 13:53:25.422516 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.422526 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:25.422532 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:25.422597 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:25.459523 1157708 cri.go:89] found id: ""
	I0318 13:53:25.459552 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.459560 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:25.459567 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:25.459627 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:25.495647 1157708 cri.go:89] found id: ""
	I0318 13:53:25.495683 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.495695 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:25.495702 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:25.495772 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:25.534582 1157708 cri.go:89] found id: ""
	I0318 13:53:25.534617 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.534626 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:25.534632 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:25.534704 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:25.577526 1157708 cri.go:89] found id: ""
	I0318 13:53:25.577558 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.577566 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:25.577573 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:25.577687 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:25.616403 1157708 cri.go:89] found id: ""
	I0318 13:53:25.616433 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.616445 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:25.616453 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:25.616527 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:25.660444 1157708 cri.go:89] found id: ""
	I0318 13:53:25.660474 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.660482 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:25.660492 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:25.660506 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:25.715595 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:25.715641 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:25.730358 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:25.730390 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:25.803153 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:25.803239 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:25.803261 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:25.885339 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:25.885388 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:28.433506 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:28.449402 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:28.449481 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:28.490972 1157708 cri.go:89] found id: ""
	I0318 13:53:28.491007 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.491019 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:28.491028 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:28.491094 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:28.531406 1157708 cri.go:89] found id: ""
	I0318 13:53:28.531439 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.531451 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:28.531460 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:28.531513 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:28.570299 1157708 cri.go:89] found id: ""
	I0318 13:53:28.570334 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.570345 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:28.570352 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:28.570408 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:28.607950 1157708 cri.go:89] found id: ""
	I0318 13:53:28.607979 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.607987 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:28.607994 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:28.608066 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:28.648710 1157708 cri.go:89] found id: ""
	I0318 13:53:28.648744 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.648755 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:28.648762 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:28.648830 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:28.691071 1157708 cri.go:89] found id: ""
	I0318 13:53:28.691102 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.691114 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:28.691122 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:28.691183 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:28.734399 1157708 cri.go:89] found id: ""
	I0318 13:53:28.734438 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.734452 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:28.734461 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:28.734548 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:28.774859 1157708 cri.go:89] found id: ""
	I0318 13:53:28.774891 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.774902 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:28.774912 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:28.774927 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:28.831420 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:28.831459 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:28.847970 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:28.848008 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:28.926007 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:28.926034 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:28.926051 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:29.007525 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:29.007577 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:31.555401 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:31.570964 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:31.571046 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:31.611400 1157708 cri.go:89] found id: ""
	I0318 13:53:31.611427 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.611438 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:31.611445 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:31.611510 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:31.654572 1157708 cri.go:89] found id: ""
	I0318 13:53:31.654602 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.654614 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:31.654622 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:31.654725 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:31.692649 1157708 cri.go:89] found id: ""
	I0318 13:53:31.692673 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.692681 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:31.692686 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:31.692748 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:31.732208 1157708 cri.go:89] found id: ""
	I0318 13:53:31.732233 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.732244 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:31.732253 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:31.732320 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:31.774132 1157708 cri.go:89] found id: ""
	I0318 13:53:31.774163 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.774172 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:31.774178 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:31.774234 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:31.813558 1157708 cri.go:89] found id: ""
	I0318 13:53:31.813582 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.813590 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:31.813597 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:31.813651 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:31.862024 1157708 cri.go:89] found id: ""
	I0318 13:53:31.862057 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.862070 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:31.862077 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:31.862146 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:31.903941 1157708 cri.go:89] found id: ""
	I0318 13:53:31.903972 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.903982 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:31.903992 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:31.904006 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:31.957327 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:31.957366 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:31.973337 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:31.973380 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:32.053702 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:32.053730 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:32.053744 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:32.134859 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:32.134911 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:34.683335 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:34.700383 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:34.700490 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:34.744387 1157708 cri.go:89] found id: ""
	I0318 13:53:34.744420 1157708 logs.go:276] 0 containers: []
	W0318 13:53:34.744432 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:34.744441 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:34.744509 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:34.788122 1157708 cri.go:89] found id: ""
	I0318 13:53:34.788150 1157708 logs.go:276] 0 containers: []
	W0318 13:53:34.788160 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:34.788166 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:34.788221 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:34.834760 1157708 cri.go:89] found id: ""
	I0318 13:53:34.834795 1157708 logs.go:276] 0 containers: []
	W0318 13:53:34.834808 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:34.834817 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:34.834894 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:34.882028 1157708 cri.go:89] found id: ""
	I0318 13:53:34.882062 1157708 logs.go:276] 0 containers: []
	W0318 13:53:34.882073 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:34.882081 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:34.882150 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:34.933339 1157708 cri.go:89] found id: ""
	I0318 13:53:34.933364 1157708 logs.go:276] 0 containers: []
	W0318 13:53:34.933374 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:34.933384 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:34.933451 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:34.972362 1157708 cri.go:89] found id: ""
	I0318 13:53:34.972395 1157708 logs.go:276] 0 containers: []
	W0318 13:53:34.972407 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:34.972416 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:34.972486 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:35.008949 1157708 cri.go:89] found id: ""
	I0318 13:53:35.008986 1157708 logs.go:276] 0 containers: []
	W0318 13:53:35.008999 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:35.009007 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:35.009080 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:35.054698 1157708 cri.go:89] found id: ""
	I0318 13:53:35.054733 1157708 logs.go:276] 0 containers: []
	W0318 13:53:35.054742 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:35.054756 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:35.054770 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:35.109391 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:35.109450 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:35.126785 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:35.126818 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:35.214303 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:35.214329 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:35.214342 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:35.298705 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:35.298750 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:37.843701 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:37.859330 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:37.859415 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:37.903428 1157708 cri.go:89] found id: ""
	I0318 13:53:37.903466 1157708 logs.go:276] 0 containers: []
	W0318 13:53:37.903479 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:37.903497 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:37.903560 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:37.943687 1157708 cri.go:89] found id: ""
	I0318 13:53:37.943716 1157708 logs.go:276] 0 containers: []
	W0318 13:53:37.943727 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:37.943735 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:37.943804 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:37.986201 1157708 cri.go:89] found id: ""
	I0318 13:53:37.986233 1157708 logs.go:276] 0 containers: []
	W0318 13:53:37.986244 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:37.986252 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:37.986322 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:38.026776 1157708 cri.go:89] found id: ""
	I0318 13:53:38.026813 1157708 logs.go:276] 0 containers: []
	W0318 13:53:38.026825 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:38.026832 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:38.026907 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:38.073057 1157708 cri.go:89] found id: ""
	I0318 13:53:38.073088 1157708 logs.go:276] 0 containers: []
	W0318 13:53:38.073098 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:38.073105 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:38.073172 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:38.110576 1157708 cri.go:89] found id: ""
	I0318 13:53:38.110611 1157708 logs.go:276] 0 containers: []
	W0318 13:53:38.110624 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:38.110632 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:38.110702 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:38.154293 1157708 cri.go:89] found id: ""
	I0318 13:53:38.154319 1157708 logs.go:276] 0 containers: []
	W0318 13:53:38.154327 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:38.154338 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:38.154414 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:38.195407 1157708 cri.go:89] found id: ""
	I0318 13:53:38.195434 1157708 logs.go:276] 0 containers: []
	W0318 13:53:38.195444 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:38.195454 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:38.195469 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:38.254159 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:38.254210 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:38.269143 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:38.269175 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:38.349819 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:38.349845 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:38.349864 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:38.435121 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:38.435164 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
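The cycle above is minikube probing for a running control plane: it looks for a kube-apiserver process, then asks crictl for each expected container by name and finds none. A minimal editor's sketch of that probe, assuming only that crictl is installed on the node (this is not minikube's actual cri.go code):

// Editor's illustration of the per-name container probe repeated in the log.
// Assumes crictl is on PATH and talks to the default CRI-O socket.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers mirrors: sudo crictl ps -a --quiet --name=<name>
func listContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy"} {
		ids, err := listContainers(name)
		if err != nil {
			fmt.Printf("probe %q failed: %v\n", name, err)
			continue
		}
		fmt.Printf("%s: %d containers\n", name, len(ids))
	}
}

An empty ID list for every name is what produces the repeated `found id: ""` and `0 containers: []` lines above.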
	I0318 13:53:40.982438 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:40.998483 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:40.998559 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:41.037470 1157708 cri.go:89] found id: ""
	I0318 13:53:41.037497 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.037506 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:41.037512 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:41.037583 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:41.078428 1157708 cri.go:89] found id: ""
	I0318 13:53:41.078463 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.078473 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:41.078482 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:41.078548 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:41.121342 1157708 cri.go:89] found id: ""
	I0318 13:53:41.121371 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.121382 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:41.121391 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:41.121482 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:41.164124 1157708 cri.go:89] found id: ""
	I0318 13:53:41.164149 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.164159 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:41.164167 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:41.164229 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:41.210294 1157708 cri.go:89] found id: ""
	I0318 13:53:41.210321 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.210329 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:41.210336 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:41.210407 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:41.253934 1157708 cri.go:89] found id: ""
	I0318 13:53:41.253957 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.253967 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:41.253973 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:41.254039 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:41.298817 1157708 cri.go:89] found id: ""
	I0318 13:53:41.298849 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.298861 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:41.298870 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:41.298936 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:41.344109 1157708 cri.go:89] found id: ""
	I0318 13:53:41.344137 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.344146 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:41.344156 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:41.344170 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:41.401026 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:41.401061 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:41.416197 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:41.416229 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:41.495349 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:41.495375 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:41.495393 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:41.578201 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:41.578253 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
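The "describe nodes" failures above all reduce to the same condition: nothing is listening on localhost:8443 inside the guest, so every kubectl call is refused. A small sketch of the equivalent reachability check (the address comes from the log; the timeout value is an assumption):

// Editor's sketch: check whether the API server port is open at all.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
	if err != nil {
		// Matches the log's failure mode: connection refused means no apiserver.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}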
	I0318 13:53:44.126601 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:44.140971 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:44.141048 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:44.184758 1157708 cri.go:89] found id: ""
	I0318 13:53:44.184786 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.184794 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:44.184801 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:44.184851 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:44.230793 1157708 cri.go:89] found id: ""
	I0318 13:53:44.230824 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.230836 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:44.230842 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:44.230916 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:44.269561 1157708 cri.go:89] found id: ""
	I0318 13:53:44.269594 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.269606 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:44.269614 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:44.269680 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:44.310847 1157708 cri.go:89] found id: ""
	I0318 13:53:44.310878 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.310889 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:44.310898 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:44.310970 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:44.350827 1157708 cri.go:89] found id: ""
	I0318 13:53:44.350860 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.350878 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:44.350887 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:44.350956 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:44.389693 1157708 cri.go:89] found id: ""
	I0318 13:53:44.389721 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.389730 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:44.389735 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:44.389804 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:44.429254 1157708 cri.go:89] found id: ""
	I0318 13:53:44.429280 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.429289 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:44.429303 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:44.429354 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:44.468484 1157708 cri.go:89] found id: ""
	I0318 13:53:44.468513 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.468525 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:44.468538 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:44.468555 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:44.525012 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:44.525058 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:44.541638 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:44.541668 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:44.621779 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:44.621801 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:44.621814 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:44.706797 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:44.706884 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:47.253569 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:47.268808 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:47.268888 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:47.313191 1157708 cri.go:89] found id: ""
	I0318 13:53:47.313220 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.313232 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:47.313240 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:47.313307 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:47.357567 1157708 cri.go:89] found id: ""
	I0318 13:53:47.357600 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.357611 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:47.357619 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:47.357688 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:47.392300 1157708 cri.go:89] found id: ""
	I0318 13:53:47.392341 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.392352 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:47.392366 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:47.392437 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:47.432800 1157708 cri.go:89] found id: ""
	I0318 13:53:47.432830 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.432842 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:47.432857 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:47.432921 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:47.469563 1157708 cri.go:89] found id: ""
	I0318 13:53:47.469591 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.469599 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:47.469605 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:47.469668 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:47.508770 1157708 cri.go:89] found id: ""
	I0318 13:53:47.508799 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.508810 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:47.508820 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:47.508880 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:47.549876 1157708 cri.go:89] found id: ""
	I0318 13:53:47.549909 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.549921 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:47.549930 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:47.549997 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:47.591385 1157708 cri.go:89] found id: ""
	I0318 13:53:47.591413 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.591421 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:47.591431 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:47.591446 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:47.646284 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:47.646313 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:47.662609 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:47.662639 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:47.737371 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:47.737398 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:47.737415 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:47.817311 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:47.817342 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:50.363832 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:50.380029 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:50.380109 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:50.427452 1157708 cri.go:89] found id: ""
	I0318 13:53:50.427484 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.427496 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:50.427505 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:50.427579 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:50.466766 1157708 cri.go:89] found id: ""
	I0318 13:53:50.466793 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.466801 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:50.466808 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:50.466894 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:50.506768 1157708 cri.go:89] found id: ""
	I0318 13:53:50.506799 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.506811 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:50.506819 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:50.506882 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:50.545554 1157708 cri.go:89] found id: ""
	I0318 13:53:50.545592 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.545605 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:50.545613 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:50.545685 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:50.583949 1157708 cri.go:89] found id: ""
	I0318 13:53:50.583984 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.583995 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:50.584004 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:50.584083 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:50.624730 1157708 cri.go:89] found id: ""
	I0318 13:53:50.624763 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.624774 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:50.624783 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:50.624853 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:50.664300 1157708 cri.go:89] found id: ""
	I0318 13:53:50.664346 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.664358 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:50.664366 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:50.664420 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:50.702760 1157708 cri.go:89] found id: ""
	I0318 13:53:50.702793 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.702805 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:50.702817 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:50.702833 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:50.757188 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:50.757237 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:50.772151 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:50.772195 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:50.856872 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:50.856898 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:50.856917 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:50.937706 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:50.937749 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
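Each "Gathering logs for ..." step above is a plain shell command run on the node. The commands below are copied from the log lines; the tiny wrapper around them is illustrative only, not minikube's logs.go:

// Editor's sketch of the log-gathering commands seen in this report.
package main

import (
	"fmt"
	"os/exec"
)

func gather(name, command string) {
	out, err := exec.Command("/bin/bash", "-c", command).CombinedOutput()
	fmt.Printf("==> %s (err=%v)\n%s\n", name, err, out)
}

func main() {
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	gather("CRI-O", "sudo journalctl -u crio -n 400")
	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}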
	I0318 13:53:53.481836 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:53.497792 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:53.497856 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:53.535376 1157708 cri.go:89] found id: ""
	I0318 13:53:53.535411 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.535420 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:53.535427 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:53.535486 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:53.575002 1157708 cri.go:89] found id: ""
	I0318 13:53:53.575030 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.575042 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:53.575050 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:53.575119 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:53.615880 1157708 cri.go:89] found id: ""
	I0318 13:53:53.615919 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.615931 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:53.615940 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:53.616007 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:53.681746 1157708 cri.go:89] found id: ""
	I0318 13:53:53.681786 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.681799 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:53.681810 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:53.681887 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:53.725219 1157708 cri.go:89] found id: ""
	I0318 13:53:53.725241 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.725250 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:53.725256 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:53.725317 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:53.766969 1157708 cri.go:89] found id: ""
	I0318 13:53:53.767006 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.767018 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:53.767026 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:53.767091 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:53.802103 1157708 cri.go:89] found id: ""
	I0318 13:53:53.802134 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.802145 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:53.802157 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:53.802210 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:53.843054 1157708 cri.go:89] found id: ""
	I0318 13:53:53.843085 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.843093 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:53.843103 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:53.843117 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:53.899794 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:53.899836 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:53.915559 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:53.915592 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:53.996410 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:53.996438 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:53.996456 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:54.085588 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:54.085628 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
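These probe-and-gather cycles repeat on a fixed cadence until a deadline expires, which is where the roughly four-minute duration reported just below comes from. A hedged sketch of that outer wait loop (interval and timeout here are placeholders, not minikube's values):

// Editor's sketch of a poll-until-deadline loop; the probe below always fails,
// so it simply demonstrates the timeout path seen in this log.
package main

import (
	"errors"
	"fmt"
	"time"
)

func waitFor(probe func() error, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if err := probe(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for the condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	err := waitFor(func() error { return errors.New("apiserver not up yet") },
		3*time.Second, 10*time.Second)
	fmt.Println(err)
}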
	I0318 13:53:56.632201 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:56.648183 1157708 kubeadm.go:591] duration metric: took 4m3.550073086s to restartPrimaryControlPlane
	W0318 13:53:56.648381 1157708 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 13:53:56.648422 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 13:53:59.666187 1157708 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.017736279s)
	I0318 13:53:59.666270 1157708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
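Having given up on restarting the existing control plane, minikube resets it with kubeadm and confirms the kubelet is no longer active. An editor's sketch of those two steps, reusing the binary path and CRI socket shown in the log (the PATH handling is simplified and the rest is an assumption, not minikube's implementation):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Roughly: sudo env PATH=... kubeadm reset --cri-socket /var/run/crio/crio.sock --force
	reset := exec.Command("sudo", "env", "PATH=/var/lib/minikube/binaries/v1.20.0:/usr/sbin:/usr/bin:/sbin:/bin",
		"kubeadm", "reset", "--cri-socket", "/var/run/crio/crio.sock", "--force")
	out, err := reset.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("kubeadm reset failed:", err)
		return
	}
	// Rough equivalent of the is-active check in the log: a non-zero exit
	// simply means the kubelet unit is not active.
	if err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet is not active (expected after a reset)")
	} else {
		fmt.Println("kubelet is still active")
	}
}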
	I0318 13:53:59.682887 1157708 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:53:59.694626 1157708 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:53:59.706577 1157708 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:53:59.706599 1157708 kubeadm.go:156] found existing configuration files:
	
	I0318 13:53:59.706648 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:53:59.718311 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:53:59.718371 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:53:59.729298 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:53:59.741351 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:53:59.741401 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:53:59.753652 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:53:59.765642 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:53:59.765695 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:53:59.778055 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:53:59.789994 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:53:59.790042 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
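The four grep/rm pairs above implement a simple stale-config check: a kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, otherwise it is deleted so kubeadm can regenerate it. A rough equivalent, with file names and endpoint taken from the log (this is an illustration, not minikube's kubeadm.go):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Mirrors "sudo rm -f <file>" in the log; already-missing files are fine.
			_ = os.Remove(f)
			fmt.Println("removed (or absent):", f)
			continue
		}
		fmt.Println("kept:", f)
	}
}

In this run all four files are already absent, so every grep fails with status 2 and the rm calls are effectively no-ops.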
	I0318 13:53:59.801292 1157708 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 13:53:59.879414 1157708 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 13:53:59.879516 1157708 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 13:54:00.046477 1157708 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 13:54:00.046660 1157708 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 13:54:00.046819 1157708 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 13:54:00.257070 1157708 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 13:54:00.259191 1157708 out.go:204]   - Generating certificates and keys ...
	I0318 13:54:00.259333 1157708 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 13:54:00.259434 1157708 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 13:54:00.259549 1157708 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 13:54:00.259658 1157708 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 13:54:00.259782 1157708 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 13:54:00.259857 1157708 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 13:54:00.259949 1157708 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 13:54:00.260033 1157708 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 13:54:00.260136 1157708 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 13:54:00.260244 1157708 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 13:54:00.260299 1157708 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 13:54:00.260394 1157708 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 13:54:00.423400 1157708 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 13:54:00.543983 1157708 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 13:54:00.796108 1157708 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 13:54:00.901121 1157708 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 13:54:00.918891 1157708 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 13:54:00.920502 1157708 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 13:54:00.920642 1157708 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 13:54:01.094176 1157708 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 13:54:01.096397 1157708 out.go:204]   - Booting up control plane ...
	I0318 13:54:01.096539 1157708 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 13:54:01.107816 1157708 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 13:54:01.108753 1157708 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 13:54:01.109641 1157708 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 13:54:01.111913 1157708 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 13:54:41.111826 1157708 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 13:54:41.111977 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:54:41.112236 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:54:46.112502 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:54:46.112797 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:54:56.112956 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:54:56.113210 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:55:16.113672 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:55:16.113963 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:55:56.115397 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:55:56.115674 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:55:56.115714 1157708 kubeadm.go:309] 
	I0318 13:55:56.115782 1157708 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 13:55:56.115840 1157708 kubeadm.go:309] 		timed out waiting for the condition
	I0318 13:55:56.115849 1157708 kubeadm.go:309] 
	I0318 13:55:56.115908 1157708 kubeadm.go:309] 	This error is likely caused by:
	I0318 13:55:56.115979 1157708 kubeadm.go:309] 		- The kubelet is not running
	I0318 13:55:56.116102 1157708 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 13:55:56.116112 1157708 kubeadm.go:309] 
	I0318 13:55:56.116242 1157708 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 13:55:56.116289 1157708 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 13:55:56.116349 1157708 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 13:55:56.116370 1157708 kubeadm.go:309] 
	I0318 13:55:56.116506 1157708 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 13:55:56.116645 1157708 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 13:55:56.116665 1157708 kubeadm.go:309] 
	I0318 13:55:56.116804 1157708 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 13:55:56.116897 1157708 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 13:55:56.117005 1157708 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 13:55:56.117094 1157708 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 13:55:56.117110 1157708 kubeadm.go:309] 
	I0318 13:55:56.117680 1157708 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 13:55:56.117813 1157708 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 13:55:56.117934 1157708 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
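The kubelet-check messages above come from kubeadm repeatedly polling the kubelet's local health endpoint and getting connection refused. A minimal sketch of that probe (the URL is the one quoted in the log; the retry cadence is illustrative, not kubeadm's):

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	for i := 0; i < 5; i++ {
		resp, err := client.Get("http://localhost:10248/healthz")
		if err == nil {
			resp.Body.Close()
			fmt.Println("kubelet healthy:", resp.Status)
			return
		}
		// e.g. dial tcp 127.0.0.1:10248: connect: connection refused, as in the log
		fmt.Println("kubelet not ready:", err)
		time.Sleep(5 * time.Second)
	}
	fmt.Println("giving up: kubelet never became healthy")
}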
	W0318 13:55:56.118052 1157708 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0318 13:55:56.118124 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 13:55:57.920938 1157708 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.802776126s)
	I0318 13:55:57.921031 1157708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:55:57.939226 1157708 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:55:57.952304 1157708 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:55:57.952342 1157708 kubeadm.go:156] found existing configuration files:
	
	I0318 13:55:57.952404 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:55:57.964632 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:55:57.964695 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:55:57.977306 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:55:57.989728 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:55:57.989790 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:55:58.001661 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:55:58.013078 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:55:58.013160 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:55:58.024891 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:55:58.036171 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:55:58.036225 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:55:58.048156 1157708 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 13:55:58.128356 1157708 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 13:55:58.128445 1157708 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 13:55:58.297704 1157708 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 13:55:58.297897 1157708 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 13:55:58.298048 1157708 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 13:55:58.515521 1157708 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 13:55:58.517569 1157708 out.go:204]   - Generating certificates and keys ...
	I0318 13:55:58.517679 1157708 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 13:55:58.517760 1157708 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 13:55:58.517830 1157708 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 13:55:58.517908 1157708 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 13:55:58.517980 1157708 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 13:55:58.518047 1157708 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 13:55:58.518280 1157708 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 13:55:58.519078 1157708 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 13:55:58.520081 1157708 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 13:55:58.521268 1157708 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 13:55:58.521861 1157708 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 13:55:58.521936 1157708 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 13:55:58.762418 1157708 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 13:55:58.999746 1157708 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 13:55:59.214448 1157708 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 13:55:59.402662 1157708 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 13:55:59.421555 1157708 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 13:55:59.423151 1157708 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 13:55:59.423233 1157708 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 13:55:59.560412 1157708 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 13:55:59.563125 1157708 out.go:204]   - Booting up control plane ...
	I0318 13:55:59.563274 1157708 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 13:55:59.571364 1157708 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 13:55:59.572936 1157708 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 13:55:59.573987 1157708 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 13:55:59.586689 1157708 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 13:56:39.588627 1157708 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 13:56:39.588942 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:56:39.589128 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:56:44.589564 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:56:44.589852 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:56:54.590311 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:56:54.590619 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:57:14.591571 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:57:14.591866 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:57:54.594170 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:57:54.594433 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:57:54.594448 1157708 kubeadm.go:309] 
	I0318 13:57:54.594490 1157708 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 13:57:54.594540 1157708 kubeadm.go:309] 		timed out waiting for the condition
	I0318 13:57:54.594549 1157708 kubeadm.go:309] 
	I0318 13:57:54.594594 1157708 kubeadm.go:309] 	This error is likely caused by:
	I0318 13:57:54.594641 1157708 kubeadm.go:309] 		- The kubelet is not running
	I0318 13:57:54.594800 1157708 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 13:57:54.594811 1157708 kubeadm.go:309] 
	I0318 13:57:54.594950 1157708 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 13:57:54.595000 1157708 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 13:57:54.595046 1157708 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 13:57:54.595056 1157708 kubeadm.go:309] 
	I0318 13:57:54.595163 1157708 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 13:57:54.595297 1157708 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 13:57:54.595312 1157708 kubeadm.go:309] 
	I0318 13:57:54.595471 1157708 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 13:57:54.595605 1157708 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 13:57:54.595716 1157708 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 13:57:54.595812 1157708 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 13:57:54.595827 1157708 kubeadm.go:309] 
	I0318 13:57:54.596636 1157708 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 13:57:54.596805 1157708 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 13:57:54.596972 1157708 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0318 13:57:54.597014 1157708 kubeadm.go:393] duration metric: took 8m1.551231902s to StartCluster
	I0318 13:57:54.597076 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:57:54.597174 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:57:54.649451 1157708 cri.go:89] found id: ""
	I0318 13:57:54.649484 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.649496 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:57:54.649506 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:57:54.649577 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:57:54.692278 1157708 cri.go:89] found id: ""
	I0318 13:57:54.692317 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.692339 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:57:54.692349 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:57:54.692427 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:57:54.731034 1157708 cri.go:89] found id: ""
	I0318 13:57:54.731062 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.731071 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:57:54.731077 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:57:54.731135 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:57:54.769883 1157708 cri.go:89] found id: ""
	I0318 13:57:54.769913 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.769923 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:57:54.769931 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:57:54.769996 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:57:54.808620 1157708 cri.go:89] found id: ""
	I0318 13:57:54.808648 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.808656 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:57:54.808661 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:57:54.808715 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:57:54.849207 1157708 cri.go:89] found id: ""
	I0318 13:57:54.849245 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.849256 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:57:54.849264 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:57:54.849334 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:57:54.918479 1157708 cri.go:89] found id: ""
	I0318 13:57:54.918508 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.918520 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:57:54.918528 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:57:54.918597 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:57:54.958828 1157708 cri.go:89] found id: ""
	I0318 13:57:54.958861 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.958871 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:57:54.958887 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:57:54.958906 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:57:55.078045 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:57:55.078092 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:57:55.123043 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:57:55.123077 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:57:55.180480 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:57:55.180518 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:57:55.197264 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:57:55.197316 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:57:55.291264 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0318 13:57:55.291325 1157708 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0318 13:57:55.291395 1157708 out.go:239] * 
	* 
	W0318 13:57:55.291477 1157708 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 13:57:55.291502 1157708 out.go:239] * 
	* 
	W0318 13:57:55.292511 1157708 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:57:55.295566 1157708 out.go:177] 
	W0318 13:57:55.296840 1157708 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 13:57:55.296903 1157708 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0318 13:57:55.296941 1157708 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0318 13:57:55.298417 1157708 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-909137 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
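	(Editor's note: the kubeadm output captured above reports the kubelet never passed its http://localhost:10248/healthz check and suggests a cgroup-driver mismatch. A minimal troubleshooting sketch, assuming shell access to the VM via `minikube ssh`; the profile name and flags are taken from the failed run above, while the use of `ssh --` to run the diagnostics is illustrative and not part of the test itself:
	
		# Inspect kubelet state and its journal inside the VM (commands recommended by the kubeadm output)
		out/minikube-linux-amd64 -p old-k8s-version-909137 ssh -- sudo systemctl status kubelet
		out/minikube-linux-amd64 -p old-k8s-version-909137 ssh -- sudo journalctl -xeu kubelet
		# List any control-plane containers CRI-O managed to start
		out/minikube-linux-amd64 -p old-k8s-version-909137 ssh -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a
		# Retry with the cgroup driver the suggestion line names, then capture full logs for the issue tracker
		out/minikube-linux-amd64 start -p old-k8s-version-909137 --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd
		out/minikube-linux-amd64 -p old-k8s-version-909137 logs --file=logs.txt
	
	The post-mortem collected by the test harness follows.)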
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-909137 -n old-k8s-version-909137
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-909137 -n old-k8s-version-909137: exit status 2 (259.335023ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-909137 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-909137 logs -n 25: (1.555329908s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-909137                              | old-k8s-version-909137       | jenkins | v1.32.0 | 18 Mar 24 13:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-599578                           | kubernetes-upgrade-599578    | jenkins | v1.32.0 | 18 Mar 24 13:39 UTC | 18 Mar 24 13:39 UTC |
	| start   | -p no-preload-537236                                   | no-preload-537236            | jenkins | v1.32.0 | 18 Mar 24 13:39 UTC | 18 Mar 24 13:41 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p cert-expiration-537883                              | cert-expiration-537883       | jenkins | v1.32.0 | 18 Mar 24 13:40 UTC | 18 Mar 24 13:41 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p pause-760389                                        | pause-760389                 | jenkins | v1.32.0 | 18 Mar 24 13:40 UTC | 18 Mar 24 13:40 UTC |
	| start   | -p embed-certs-173036                                  | embed-certs-173036           | jenkins | v1.32.0 | 18 Mar 24 13:40 UTC | 18 Mar 24 13:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-537883                              | cert-expiration-537883       | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	| delete  | -p                                                     | disable-driver-mounts-173866 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	|         | disable-driver-mounts-173866                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-569210 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:42 UTC |
	|         | default-k8s-diff-port-569210                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-173036            | embed-certs-173036           | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-173036                                  | embed-certs-173036           | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-537236             | no-preload-537236            | jenkins | v1.32.0 | 18 Mar 24 13:42 UTC | 18 Mar 24 13:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-537236                                   | no-preload-537236            | jenkins | v1.32.0 | 18 Mar 24 13:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-569210  | default-k8s-diff-port-569210 | jenkins | v1.32.0 | 18 Mar 24 13:43 UTC | 18 Mar 24 13:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-569210 | jenkins | v1.32.0 | 18 Mar 24 13:43 UTC |                     |
	|         | default-k8s-diff-port-569210                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-909137        | old-k8s-version-909137       | jenkins | v1.32.0 | 18 Mar 24 13:43 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-173036                 | embed-certs-173036           | jenkins | v1.32.0 | 18 Mar 24 13:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-173036                                  | embed-certs-173036           | jenkins | v1.32.0 | 18 Mar 24 13:44 UTC | 18 Mar 24 13:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-537236                  | no-preload-537236            | jenkins | v1.32.0 | 18 Mar 24 13:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-537236                                   | no-preload-537236            | jenkins | v1.32.0 | 18 Mar 24 13:44 UTC | 18 Mar 24 13:55 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-909137                              | old-k8s-version-909137       | jenkins | v1.32.0 | 18 Mar 24 13:45 UTC | 18 Mar 24 13:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-909137             | old-k8s-version-909137       | jenkins | v1.32.0 | 18 Mar 24 13:45 UTC | 18 Mar 24 13:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-909137                              | old-k8s-version-909137       | jenkins | v1.32.0 | 18 Mar 24 13:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-569210       | default-k8s-diff-port-569210 | jenkins | v1.32.0 | 18 Mar 24 13:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-569210 | jenkins | v1.32.0 | 18 Mar 24 13:45 UTC | 18 Mar 24 13:55 UTC |
	|         | default-k8s-diff-port-569210                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 13:45:41
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 13:45:41.667747 1157887 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:45:41.667937 1157887 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:45:41.667952 1157887 out.go:304] Setting ErrFile to fd 2...
	I0318 13:45:41.667958 1157887 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:45:41.668616 1157887 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
	I0318 13:45:41.669251 1157887 out.go:298] Setting JSON to false
	I0318 13:45:41.670283 1157887 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":19689,"bootTime":1710749853,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 13:45:41.670349 1157887 start.go:139] virtualization: kvm guest
	I0318 13:45:41.672702 1157887 out.go:177] * [default-k8s-diff-port-569210] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 13:45:41.674325 1157887 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 13:45:41.674336 1157887 notify.go:220] Checking for updates...
	I0318 13:45:41.675874 1157887 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:45:41.677543 1157887 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 13:45:41.679053 1157887 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18429-1106816/.minikube
	I0318 13:45:41.680344 1157887 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 13:45:41.681702 1157887 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:45:41.683304 1157887 config.go:182] Loaded profile config "default-k8s-diff-port-569210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:45:41.683743 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:45:41.683792 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:45:41.698719 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44147
	I0318 13:45:41.699154 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:45:41.699657 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:45:41.699676 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:45:41.699995 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:45:41.700168 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:45:41.700488 1157887 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:45:41.700763 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:45:41.700803 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:45:41.715824 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44459
	I0318 13:45:41.716270 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:45:41.716688 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:45:41.716708 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:45:41.717004 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:45:41.717185 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:45:41.747564 1157887 out.go:177] * Using the kvm2 driver based on existing profile
	I0318 13:45:41.748930 1157887 start.go:297] selected driver: kvm2
	I0318 13:45:41.748944 1157887 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-569210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-569210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.3 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:45:41.749059 1157887 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:45:41.749725 1157887 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:45:41.749819 1157887 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18429-1106816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 13:45:41.764225 1157887 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 13:45:41.764607 1157887 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:45:41.764679 1157887 cni.go:84] Creating CNI manager for ""
	I0318 13:45:41.764692 1157887 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:45:41.764727 1157887 start.go:340] cluster config:
	{Name:default-k8s-diff-port-569210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-569210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.3 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:45:41.764824 1157887 iso.go:125] acquiring lock: {Name:mke5f9989ad60de6f54f25c411af7da9f3932a4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:45:41.766561 1157887 out.go:177] * Starting "default-k8s-diff-port-569210" primary control-plane node in "default-k8s-diff-port-569210" cluster
	I0318 13:45:40.044635 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:45:41.767747 1157887 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 13:45:41.767779 1157887 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0318 13:45:41.767799 1157887 cache.go:56] Caching tarball of preloaded images
	I0318 13:45:41.767876 1157887 preload.go:173] Found /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 13:45:41.767887 1157887 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0318 13:45:41.767986 1157887 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/config.json ...
	I0318 13:45:41.768151 1157887 start.go:360] acquireMachinesLock for default-k8s-diff-port-569210: {Name:mk0b1a2e71faf079d0c16c4e1393bdff17be3dfd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:45:46.124607 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:45:49.196561 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:45:55.276657 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:45:58.348606 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:04.428632 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:07.500592 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:13.584558 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:16.652578 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:22.732573 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:25.804745 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:31.884579 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:34.956708 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:41.036614 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:44.108576 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:50.188610 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:53.260646 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:59.340724 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:02.412698 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:08.492603 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:11.564634 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:17.644618 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:20.716642 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:26.796585 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:29.868690 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:35.948613 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:39.020607 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:45.104563 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:48.172547 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:54.252608 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:57.324659 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:03.404600 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:06.476647 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:12.556609 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:15.628640 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:21.708597 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:24.780572 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:30.860662 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:33.932528 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:40.012616 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:43.084569 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:49.164622 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:52.236652 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:58.316619 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:49:01.321139 1157416 start.go:364] duration metric: took 4m21.279664055s to acquireMachinesLock for "no-preload-537236"
	I0318 13:49:01.321252 1157416 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:49:01.321260 1157416 fix.go:54] fixHost starting: 
	I0318 13:49:01.321627 1157416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:49:01.321658 1157416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:49:01.337337 1157416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39431
	I0318 13:49:01.337793 1157416 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:49:01.338235 1157416 main.go:141] libmachine: Using API Version  1
	I0318 13:49:01.338262 1157416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:49:01.338703 1157416 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:49:01.338892 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:49:01.339025 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetState
	I0318 13:49:01.340630 1157416 fix.go:112] recreateIfNeeded on no-preload-537236: state=Stopped err=<nil>
	I0318 13:49:01.340653 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	W0318 13:49:01.340785 1157416 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:49:01.342565 1157416 out.go:177] * Restarting existing kvm2 VM for "no-preload-537236" ...
	I0318 13:49:01.318340 1157263 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:49:01.318378 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetMachineName
	I0318 13:49:01.318795 1157263 buildroot.go:166] provisioning hostname "embed-certs-173036"
	I0318 13:49:01.318829 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetMachineName
	I0318 13:49:01.319041 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:49:01.321007 1157263 machine.go:97] duration metric: took 4m37.382603693s to provisionDockerMachine
	I0318 13:49:01.321051 1157263 fix.go:56] duration metric: took 4m37.403420427s for fixHost
	I0318 13:49:01.321064 1157263 start.go:83] releasing machines lock for "embed-certs-173036", held for 4m37.403446357s
	W0318 13:49:01.321088 1157263 start.go:713] error starting host: provision: host is not running
	W0318 13:49:01.321225 1157263 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0318 13:49:01.321242 1157263 start.go:728] Will try again in 5 seconds ...
	I0318 13:49:01.343844 1157416 main.go:141] libmachine: (no-preload-537236) Calling .Start
	I0318 13:49:01.344003 1157416 main.go:141] libmachine: (no-preload-537236) Ensuring networks are active...
	I0318 13:49:01.344698 1157416 main.go:141] libmachine: (no-preload-537236) Ensuring network default is active
	I0318 13:49:01.345062 1157416 main.go:141] libmachine: (no-preload-537236) Ensuring network mk-no-preload-537236 is active
	I0318 13:49:01.345378 1157416 main.go:141] libmachine: (no-preload-537236) Getting domain xml...
	I0318 13:49:01.346073 1157416 main.go:141] libmachine: (no-preload-537236) Creating domain...
	I0318 13:49:02.522163 1157416 main.go:141] libmachine: (no-preload-537236) Waiting to get IP...
	I0318 13:49:02.522935 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:02.523347 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:02.523420 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:02.523327 1158392 retry.go:31] will retry after 276.248352ms: waiting for machine to come up
	I0318 13:49:02.800962 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:02.801439 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:02.801472 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:02.801381 1158392 retry.go:31] will retry after 318.94167ms: waiting for machine to come up
	I0318 13:49:03.121895 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:03.122276 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:03.122298 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:03.122254 1158392 retry.go:31] will retry after 353.742872ms: waiting for machine to come up
	I0318 13:49:03.477885 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:03.478401 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:03.478439 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:03.478360 1158392 retry.go:31] will retry after 481.537084ms: waiting for machine to come up
	I0318 13:49:03.960991 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:03.961432 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:03.961505 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:03.961416 1158392 retry.go:31] will retry after 647.244695ms: waiting for machine to come up
	I0318 13:49:04.610150 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:04.610563 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:04.610604 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:04.610512 1158392 retry.go:31] will retry after 577.22264ms: waiting for machine to come up
	I0318 13:49:06.321404 1157263 start.go:360] acquireMachinesLock for embed-certs-173036: {Name:mk0b1a2e71faf079d0c16c4e1393bdff17be3dfd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:49:05.189300 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:05.189688 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:05.189722 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:05.189635 1158392 retry.go:31] will retry after 1.064347528s: waiting for machine to come up
	I0318 13:49:06.255734 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:06.256071 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:06.256103 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:06.256016 1158392 retry.go:31] will retry after 1.359025709s: waiting for machine to come up
	I0318 13:49:07.616847 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:07.617313 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:07.617338 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:07.617265 1158392 retry.go:31] will retry after 1.844112s: waiting for machine to come up
	I0318 13:49:09.464239 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:09.464761 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:09.464788 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:09.464703 1158392 retry.go:31] will retry after 1.984375986s: waiting for machine to come up
	I0318 13:49:11.450609 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:11.451100 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:11.451153 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:11.451037 1158392 retry.go:31] will retry after 1.944733714s: waiting for machine to come up
	I0318 13:49:13.397815 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:13.398238 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:13.398265 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:13.398190 1158392 retry.go:31] will retry after 2.44494826s: waiting for machine to come up
	I0318 13:49:15.845711 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:15.846169 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:15.846212 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:15.846128 1158392 retry.go:31] will retry after 2.760857339s: waiting for machine to come up
	I0318 13:49:18.609516 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:18.609917 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:18.609942 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:18.609872 1158392 retry.go:31] will retry after 3.501792324s: waiting for machine to come up
	I0318 13:49:23.501689 1157708 start.go:364] duration metric: took 4m10.403284517s to acquireMachinesLock for "old-k8s-version-909137"
	I0318 13:49:23.501769 1157708 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:49:23.501783 1157708 fix.go:54] fixHost starting: 
	I0318 13:49:23.502238 1157708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:49:23.502279 1157708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:49:23.520223 1157708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41799
	I0318 13:49:23.520696 1157708 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:49:23.521273 1157708 main.go:141] libmachine: Using API Version  1
	I0318 13:49:23.521304 1157708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:49:23.521693 1157708 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:49:23.521934 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:49:23.522089 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetState
	I0318 13:49:23.523696 1157708 fix.go:112] recreateIfNeeded on old-k8s-version-909137: state=Stopped err=<nil>
	I0318 13:49:23.523738 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	W0318 13:49:23.523894 1157708 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:49:23.526253 1157708 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-909137" ...
	I0318 13:49:22.113291 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.113733 1157416 main.go:141] libmachine: (no-preload-537236) Found IP for machine: 192.168.39.7
	I0318 13:49:22.113753 1157416 main.go:141] libmachine: (no-preload-537236) Reserving static IP address...
	I0318 13:49:22.113787 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has current primary IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.114159 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "no-preload-537236", mac: "52:54:00:21:a8:12", ip: "192.168.39.7"} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.114179 1157416 main.go:141] libmachine: (no-preload-537236) DBG | skip adding static IP to network mk-no-preload-537236 - found existing host DHCP lease matching {name: "no-preload-537236", mac: "52:54:00:21:a8:12", ip: "192.168.39.7"}
	I0318 13:49:22.114192 1157416 main.go:141] libmachine: (no-preload-537236) Reserved static IP address: 192.168.39.7
	I0318 13:49:22.114201 1157416 main.go:141] libmachine: (no-preload-537236) Waiting for SSH to be available...
	I0318 13:49:22.114208 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Getting to WaitForSSH function...
	I0318 13:49:22.116603 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.116944 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.116971 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.117082 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Using SSH client type: external
	I0318 13:49:22.117153 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Using SSH private key: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa (-rw-------)
	I0318 13:49:22.117192 1157416 main.go:141] libmachine: (no-preload-537236) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.7 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 13:49:22.117212 1157416 main.go:141] libmachine: (no-preload-537236) DBG | About to run SSH command:
	I0318 13:49:22.117236 1157416 main.go:141] libmachine: (no-preload-537236) DBG | exit 0
	I0318 13:49:22.240543 1157416 main.go:141] libmachine: (no-preload-537236) DBG | SSH cmd err, output: <nil>: 
	I0318 13:49:22.240913 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetConfigRaw
	I0318 13:49:22.241611 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetIP
	I0318 13:49:22.244016 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.244273 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.244302 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.244506 1157416 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236/config.json ...
	I0318 13:49:22.244729 1157416 machine.go:94] provisionDockerMachine start ...
	I0318 13:49:22.244750 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:49:22.244947 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:22.246869 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.247160 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.247198 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.247246 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:22.247401 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.247546 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.247722 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:22.247893 1157416 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:22.248160 1157416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0318 13:49:22.248174 1157416 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 13:49:22.353134 1157416 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 13:49:22.353164 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetMachineName
	I0318 13:49:22.353435 1157416 buildroot.go:166] provisioning hostname "no-preload-537236"
	I0318 13:49:22.353463 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetMachineName
	I0318 13:49:22.353636 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:22.356058 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.356463 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.356491 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.356645 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:22.356846 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.356965 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.357068 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:22.357201 1157416 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:22.357415 1157416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0318 13:49:22.357434 1157416 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-537236 && echo "no-preload-537236" | sudo tee /etc/hostname
	I0318 13:49:22.477651 1157416 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-537236
	
	I0318 13:49:22.477692 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:22.480537 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.480876 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.480905 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.481135 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:22.481342 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.481520 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.481676 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:22.481887 1157416 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:22.482066 1157416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0318 13:49:22.482082 1157416 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-537236' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-537236/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-537236' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 13:49:22.599489 1157416 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:49:22.599566 1157416 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18429-1106816/.minikube CaCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18429-1106816/.minikube}
	I0318 13:49:22.599596 1157416 buildroot.go:174] setting up certificates
	I0318 13:49:22.599609 1157416 provision.go:84] configureAuth start
	I0318 13:49:22.599624 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetMachineName
	I0318 13:49:22.599981 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetIP
	I0318 13:49:22.602425 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.602800 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.602831 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.602986 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:22.605036 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.605331 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.605356 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.605500 1157416 provision.go:143] copyHostCerts
	I0318 13:49:22.605589 1157416 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem, removing ...
	I0318 13:49:22.605600 1157416 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 13:49:22.605665 1157416 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem (1078 bytes)
	I0318 13:49:22.605786 1157416 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem, removing ...
	I0318 13:49:22.605795 1157416 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 13:49:22.605820 1157416 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem (1123 bytes)
	I0318 13:49:22.605895 1157416 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem, removing ...
	I0318 13:49:22.605904 1157416 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 13:49:22.605927 1157416 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem (1679 bytes)
	I0318 13:49:22.606003 1157416 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem org=jenkins.no-preload-537236 san=[127.0.0.1 192.168.39.7 localhost minikube no-preload-537236]
	I0318 13:49:22.810156 1157416 provision.go:177] copyRemoteCerts
	I0318 13:49:22.810249 1157416 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 13:49:22.810283 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:22.813018 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.813343 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.813376 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.813557 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:22.813743 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.813890 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:22.814080 1157416 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa Username:docker}
	I0318 13:49:22.898886 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 13:49:22.926296 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0318 13:49:22.953260 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 13:49:22.981248 1157416 provision.go:87] duration metric: took 381.624842ms to configureAuth
	I0318 13:49:22.981281 1157416 buildroot.go:189] setting minikube options for container-runtime
	I0318 13:49:22.981459 1157416 config.go:182] Loaded profile config "no-preload-537236": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 13:49:22.981573 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:22.984446 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.984848 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.984885 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.985061 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:22.985269 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.985405 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.985595 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:22.985728 1157416 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:22.985911 1157416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0318 13:49:22.985925 1157416 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 13:49:23.259439 1157416 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 13:49:23.259470 1157416 machine.go:97] duration metric: took 1.014725867s to provisionDockerMachine
	I0318 13:49:23.259483 1157416 start.go:293] postStartSetup for "no-preload-537236" (driver="kvm2")
	I0318 13:49:23.259518 1157416 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 13:49:23.259553 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:49:23.259937 1157416 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 13:49:23.259976 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:23.262875 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.263196 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:23.263228 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.263403 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:23.263684 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:23.263861 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:23.264029 1157416 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa Username:docker}
	I0318 13:49:23.348815 1157416 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 13:49:23.353550 1157416 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 13:49:23.353582 1157416 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/addons for local assets ...
	I0318 13:49:23.353659 1157416 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/files for local assets ...
	I0318 13:49:23.353759 1157416 filesync.go:149] local asset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> 11141362.pem in /etc/ssl/certs
	I0318 13:49:23.353885 1157416 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 13:49:23.364831 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:49:23.391345 1157416 start.go:296] duration metric: took 131.846395ms for postStartSetup
	I0318 13:49:23.391396 1157416 fix.go:56] duration metric: took 22.070135111s for fixHost
	I0318 13:49:23.391423 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:23.394229 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.394543 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:23.394583 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.394685 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:23.394937 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:23.395111 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:23.395266 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:23.395433 1157416 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:23.395619 1157416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0318 13:49:23.395631 1157416 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 13:49:23.501504 1157416 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710769763.449975975
	
	I0318 13:49:23.501532 1157416 fix.go:216] guest clock: 1710769763.449975975
	I0318 13:49:23.501542 1157416 fix.go:229] Guest: 2024-03-18 13:49:23.449975975 +0000 UTC Remote: 2024-03-18 13:49:23.39140181 +0000 UTC m=+283.498114537 (delta=58.574165ms)
	I0318 13:49:23.501564 1157416 fix.go:200] guest clock delta is within tolerance: 58.574165ms
	I0318 13:49:23.501584 1157416 start.go:83] releasing machines lock for "no-preload-537236", held for 22.180386627s
	I0318 13:49:23.501612 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:49:23.501900 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetIP
	I0318 13:49:23.504693 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.505130 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:23.505159 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.505331 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:49:23.505889 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:49:23.506092 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:49:23.506198 1157416 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 13:49:23.506252 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:23.506317 1157416 ssh_runner.go:195] Run: cat /version.json
	I0318 13:49:23.506351 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:23.509104 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.509414 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.509446 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:23.509465 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.509625 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:23.509819 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:23.509839 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.509853 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:23.510043 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:23.510103 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:23.510207 1157416 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa Username:docker}
	I0318 13:49:23.510261 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:23.510394 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:23.510541 1157416 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa Username:docker}
	I0318 13:49:23.616831 1157416 ssh_runner.go:195] Run: systemctl --version
	I0318 13:49:23.624184 1157416 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 13:49:23.779709 1157416 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 13:49:23.786535 1157416 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 13:49:23.786594 1157416 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 13:49:23.805716 1157416 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 13:49:23.805743 1157416 start.go:494] detecting cgroup driver to use...
	I0318 13:49:23.805850 1157416 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 13:49:23.825572 1157416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:49:23.842762 1157416 docker.go:217] disabling cri-docker service (if available) ...
	I0318 13:49:23.842817 1157416 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 13:49:23.859385 1157416 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 13:49:23.876416 1157416 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 13:49:24.005995 1157416 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 13:49:24.193107 1157416 docker.go:233] disabling docker service ...
	I0318 13:49:24.193173 1157416 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 13:49:24.212825 1157416 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 13:49:24.230448 1157416 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 13:49:24.385445 1157416 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 13:49:24.548640 1157416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 13:49:24.564678 1157416 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:49:24.592528 1157416 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 13:49:24.592601 1157416 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:49:24.604303 1157416 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 13:49:24.604394 1157416 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:49:24.616123 1157416 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:49:24.627956 1157416 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:49:24.639194 1157416 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 13:49:24.650789 1157416 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 13:49:24.661390 1157416 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 13:49:24.661443 1157416 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 13:49:24.677180 1157416 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 13:49:24.687973 1157416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:49:24.827386 1157416 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 13:49:24.978805 1157416 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 13:49:24.978898 1157416 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 13:49:24.985647 1157416 start.go:562] Will wait 60s for crictl version
	I0318 13:49:24.985735 1157416 ssh_runner.go:195] Run: which crictl
	I0318 13:49:24.990325 1157416 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 13:49:25.038948 1157416 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 13:49:25.039020 1157416 ssh_runner.go:195] Run: crio --version
	I0318 13:49:25.068855 1157416 ssh_runner.go:195] Run: crio --version
	I0318 13:49:25.107104 1157416 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
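The steps logged between 13:49:24.56 and 13:49:24.83 configure CRI-O on the guest over SSH: write /etc/crictl.yaml to point crictl at the CRI-O socket, pin the pause image, switch the cgroup manager to cgroupfs, then reload systemd and restart crio. A minimal sketch assembling those same shell commands; the run helper here is a stand-in that only prints, not minikube's real ssh_runner:

package main

import "fmt"

// run is a stand-in for minikube's ssh_runner: it only prints the command.
func run(cmd string) error {
	fmt.Println("ssh:", cmd)
	return nil
}

func configureCRIO() error {
	cmds := []string{
		// Point crictl at the CRI-O socket.
		`sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock\n" | sudo tee /etc/crictl.yaml`,
		// Pin the pause image and use the cgroupfs cgroup manager.
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		// Pick up the new configuration.
		`sudo systemctl daemon-reload`,
		`sudo systemctl restart crio`,
	}
	for _, c := range cmds {
		if err := run(c); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	if err := configureCRIO(); err != nil {
		fmt.Println("error:", err)
	}
}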
	I0318 13:49:23.527811 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .Start
	I0318 13:49:23.528000 1157708 main.go:141] libmachine: (old-k8s-version-909137) Ensuring networks are active...
	I0318 13:49:23.528714 1157708 main.go:141] libmachine: (old-k8s-version-909137) Ensuring network default is active
	I0318 13:49:23.529036 1157708 main.go:141] libmachine: (old-k8s-version-909137) Ensuring network mk-old-k8s-version-909137 is active
	I0318 13:49:23.529491 1157708 main.go:141] libmachine: (old-k8s-version-909137) Getting domain xml...
	I0318 13:49:23.530324 1157708 main.go:141] libmachine: (old-k8s-version-909137) Creating domain...
	I0318 13:49:24.765648 1157708 main.go:141] libmachine: (old-k8s-version-909137) Waiting to get IP...
	I0318 13:49:24.766664 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:24.767122 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:24.767182 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:24.767081 1158507 retry.go:31] will retry after 250.785143ms: waiting for machine to come up
	I0318 13:49:25.019755 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:25.020238 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:25.020273 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:25.020185 1158507 retry.go:31] will retry after 346.894257ms: waiting for machine to come up
	I0318 13:49:25.368815 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:25.369335 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:25.369372 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:25.369268 1158507 retry.go:31] will retry after 367.316359ms: waiting for machine to come up
	I0318 13:49:25.737835 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:25.738404 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:25.738438 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:25.738337 1158507 retry.go:31] will retry after 479.291041ms: waiting for machine to come up
	I0318 13:49:26.219103 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:26.219568 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:26.219599 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:26.219523 1158507 retry.go:31] will retry after 552.309382ms: waiting for machine to come up
	I0318 13:49:26.773363 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:26.773905 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:26.773935 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:26.773857 1158507 retry.go:31] will retry after 703.087388ms: waiting for machine to come up
	I0318 13:49:27.478730 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:27.479330 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:27.479363 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:27.479270 1158507 retry.go:31] will retry after 1.136606935s: waiting for machine to come up
	I0318 13:49:25.108504 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetIP
	I0318 13:49:25.111416 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:25.111795 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:25.111827 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:25.112035 1157416 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0318 13:49:25.116688 1157416 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:49:25.131526 1157416 kubeadm.go:877] updating cluster {Name:no-preload-537236 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.0-rc.2 ClusterName:no-preload-537236 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 13:49:25.131663 1157416 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0318 13:49:25.131698 1157416 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:49:25.176340 1157416 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0318 13:49:25.176378 1157416 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 13:49:25.176474 1157416 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:49:25.176487 1157416 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 13:49:25.176524 1157416 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 13:49:25.176537 1157416 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 13:49:25.176592 1157416 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0318 13:49:25.176619 1157416 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 13:49:25.176773 1157416 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0318 13:49:25.176789 1157416 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 13:49:25.178479 1157416 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 13:49:25.178485 1157416 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 13:49:25.178486 1157416 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 13:49:25.178488 1157416 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 13:49:25.178480 1157416 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0318 13:49:25.178479 1157416 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:49:25.178540 1157416 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0318 13:49:25.178911 1157416 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 13:49:25.334172 1157416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 13:49:25.334873 1157416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0318 13:49:25.338330 1157416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 13:49:25.338825 1157416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0318 13:49:25.340192 1157416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 13:49:25.350053 1157416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0318 13:49:25.356621 1157416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 13:49:25.472528 1157416 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0318 13:49:25.472571 1157416 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 13:49:25.472627 1157416 ssh_runner.go:195] Run: which crictl
	I0318 13:49:25.630923 1157416 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0318 13:49:25.630996 1157416 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 13:49:25.631001 1157416 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0318 13:49:25.631042 1157416 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 13:49:25.630933 1157416 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0318 13:49:25.631089 1157416 ssh_runner.go:195] Run: which crictl
	I0318 13:49:25.631102 1157416 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0318 13:49:25.631134 1157416 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0318 13:49:25.631107 1157416 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 13:49:25.631169 1157416 ssh_runner.go:195] Run: which crictl
	I0318 13:49:25.631183 1157416 ssh_runner.go:195] Run: which crictl
	I0318 13:49:25.631052 1157416 ssh_runner.go:195] Run: which crictl
	I0318 13:49:25.631199 1157416 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0318 13:49:25.631220 1157416 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 13:49:25.631233 1157416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 13:49:25.631264 1157416 ssh_runner.go:195] Run: which crictl
	I0318 13:49:25.642598 1157416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 13:49:25.708001 1157416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 13:49:25.708026 1157416 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0318 13:49:25.708068 1157416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 13:49:25.708003 1157416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0318 13:49:25.708129 1157416 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 13:49:25.708162 1157416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0318 13:49:25.708225 1157416 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0318 13:49:25.708286 1157416 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 13:49:25.790492 1157416 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0318 13:49:25.790623 1157416 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 13:49:25.804436 1157416 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0318 13:49:25.804465 1157416 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 13:49:25.804503 1157416 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0318 13:49:25.804532 1157416 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 13:49:25.804583 1157416 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I0318 13:49:25.804657 1157416 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0318 13:49:25.804684 1157416 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0318 13:49:25.804720 1157416 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0318 13:49:25.804768 1157416 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0318 13:49:25.804801 1157416 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0318 13:49:25.807681 1157416 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0318 13:49:26.162719 1157416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:49:27.887846 1157416 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.083277557s)
	I0318 13:49:27.887882 1157416 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0: (2.083274384s)
	I0318 13:49:27.887894 1157416 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0318 13:49:27.887916 1157416 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0318 13:49:27.887927 1157416 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 13:49:27.887944 1157416 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (2.083121634s)
	I0318 13:49:27.887971 1157416 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0318 13:49:27.887971 1157416 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.083181595s)
	I0318 13:49:27.887990 1157416 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0318 13:49:27.888003 1157416 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.725256044s)
	I0318 13:49:27.888008 1157416 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 13:49:27.888040 1157416 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0318 13:49:27.888080 1157416 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:49:27.888114 1157416 ssh_runner.go:195] Run: which crictl
	I0318 13:49:27.893415 1157416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:49:28.617273 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:28.617711 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:28.617740 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:28.617665 1158507 retry.go:31] will retry after 947.818334ms: waiting for machine to come up
	I0318 13:49:29.566814 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:29.567157 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:29.567177 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:29.567121 1158507 retry.go:31] will retry after 1.328243934s: waiting for machine to come up
	I0318 13:49:30.897514 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:30.898041 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:30.898068 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:30.897988 1158507 retry.go:31] will retry after 2.213855703s: waiting for machine to come up
	I0318 13:49:30.272393 1157416 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.384351202s)
	I0318 13:49:30.272442 1157416 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0318 13:49:30.272459 1157416 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.379011748s)
	I0318 13:49:30.272477 1157416 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 13:49:30.272508 1157416 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0318 13:49:30.272589 1157416 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 13:49:30.272623 1157416 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0318 13:49:32.857821 1157416 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.585192694s)
	I0318 13:49:32.857907 1157416 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.585263486s)
	I0318 13:49:32.857990 1157416 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0318 13:49:32.857918 1157416 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0318 13:49:32.858038 1157416 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0318 13:49:32.858097 1157416 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0318 13:49:33.113781 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:33.114303 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:33.114332 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:33.114245 1158507 retry.go:31] will retry after 2.075415123s: waiting for machine to come up
	I0318 13:49:35.191096 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:35.191631 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:35.191665 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:35.191582 1158507 retry.go:31] will retry after 3.520577528s: waiting for machine to come up
	I0318 13:49:36.677356 1157416 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.8192286s)
	I0318 13:49:36.677398 1157416 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0318 13:49:36.677423 1157416 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0318 13:49:36.677464 1157416 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0318 13:49:38.844843 1157416 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.167353366s)
	I0318 13:49:38.844895 1157416 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0318 13:49:38.844933 1157416 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0318 13:49:38.845020 1157416 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0318 13:49:38.713777 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:38.714129 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:38.714242 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:38.714143 1158507 retry.go:31] will retry after 3.46520277s: waiting for machine to come up
	I0318 13:49:42.181399 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.181856 1157708 main.go:141] libmachine: (old-k8s-version-909137) Found IP for machine: 192.168.72.135
	I0318 13:49:42.181888 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has current primary IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.181897 1157708 main.go:141] libmachine: (old-k8s-version-909137) Reserving static IP address...
	I0318 13:49:42.182344 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "old-k8s-version-909137", mac: "52:54:00:58:c0:cb", ip: "192.168.72.135"} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.182387 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | skip adding static IP to network mk-old-k8s-version-909137 - found existing host DHCP lease matching {name: "old-k8s-version-909137", mac: "52:54:00:58:c0:cb", ip: "192.168.72.135"}
	I0318 13:49:42.182424 1157708 main.go:141] libmachine: (old-k8s-version-909137) Reserved static IP address: 192.168.72.135
	I0318 13:49:42.182453 1157708 main.go:141] libmachine: (old-k8s-version-909137) Waiting for SSH to be available...
	I0318 13:49:42.182470 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | Getting to WaitForSSH function...
	I0318 13:49:42.184589 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.184958 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.184999 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.185061 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | Using SSH client type: external
	I0318 13:49:42.185120 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | Using SSH private key: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/id_rsa (-rw-------)
	I0318 13:49:42.185162 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.135 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 13:49:42.185189 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | About to run SSH command:
	I0318 13:49:42.185204 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | exit 0
	I0318 13:49:42.312570 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | SSH cmd err, output: <nil>: 
	I0318 13:49:42.313005 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetConfigRaw
	I0318 13:49:42.313693 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetIP
	I0318 13:49:42.316497 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.316931 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.316965 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.317239 1157708 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/config.json ...
	I0318 13:49:42.317442 1157708 machine.go:94] provisionDockerMachine start ...
	I0318 13:49:42.317462 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:49:42.317688 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:42.320076 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.320444 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.320485 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.320655 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:42.320818 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:42.320980 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:42.321093 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:42.321257 1157708 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:42.321510 1157708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.135 22 <nil> <nil>}
	I0318 13:49:42.321528 1157708 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 13:49:42.433138 1157708 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 13:49:42.433186 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetMachineName
	I0318 13:49:42.433524 1157708 buildroot.go:166] provisioning hostname "old-k8s-version-909137"
	I0318 13:49:42.433558 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetMachineName
	I0318 13:49:42.433808 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:42.436869 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.437230 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.437264 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.437506 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:42.437739 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:42.437915 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:42.438092 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:42.438285 1157708 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:42.438513 1157708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.135 22 <nil> <nil>}
	I0318 13:49:42.438534 1157708 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-909137 && echo "old-k8s-version-909137" | sudo tee /etc/hostname
	I0318 13:49:42.560410 1157708 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-909137
	
	I0318 13:49:42.560439 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:42.563304 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.563637 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.563673 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.563837 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:42.564053 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:42.564236 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:42.564377 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:42.564581 1157708 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:42.564802 1157708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.135 22 <nil> <nil>}
	I0318 13:49:42.564820 1157708 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-909137' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-909137/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-909137' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 13:49:42.687138 1157708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:49:42.687173 1157708 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18429-1106816/.minikube CaCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18429-1106816/.minikube}
	I0318 13:49:42.687199 1157708 buildroot.go:174] setting up certificates
	I0318 13:49:42.687211 1157708 provision.go:84] configureAuth start
	I0318 13:49:42.687223 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetMachineName
	I0318 13:49:42.687600 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetIP
	I0318 13:49:42.690738 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.691148 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.691179 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.691316 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:42.693730 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.694070 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.694092 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.694255 1157708 provision.go:143] copyHostCerts
	I0318 13:49:42.694336 1157708 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem, removing ...
	I0318 13:49:42.694350 1157708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 13:49:42.694422 1157708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem (1078 bytes)
	I0318 13:49:42.694597 1157708 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem, removing ...
	I0318 13:49:42.694614 1157708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 13:49:42.694652 1157708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem (1123 bytes)
	I0318 13:49:42.694747 1157708 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem, removing ...
	I0318 13:49:42.694756 1157708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 13:49:42.694775 1157708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem (1679 bytes)
	I0318 13:49:42.694823 1157708 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-909137 san=[127.0.0.1 192.168.72.135 localhost minikube old-k8s-version-909137]
	I0318 13:49:42.920182 1157708 provision.go:177] copyRemoteCerts
	I0318 13:49:42.920255 1157708 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 13:49:42.920295 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:42.923074 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.923374 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.923408 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.923533 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:42.923755 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:42.923957 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:42.924095 1157708 sshutil.go:53] new ssh client: &{IP:192.168.72.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/id_rsa Username:docker}
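The provision.go step above ("generating server cert ... san=[127.0.0.1 192.168.72.135 localhost minikube old-k8s-version-909137]") issues a machine server certificate signed by the profile CA with those IP and DNS SANs. A self-contained sketch of issuing such a certificate with Go's crypto/x509; the in-memory CA, key sizes, validity window, and omitted error handling are illustrative, not minikube's actual provisioning code:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Illustrative CA generated in memory; minikube reuses ca.pem/ca-key.pem on disk.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs listed in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "old-k8s-version-909137"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.135")},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-909137"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, srvKey)
	fmt.Printf("issued server cert, %d bytes DER, SANs: %v %v\n",
		len(srvDER), srvTmpl.DNSNames, srvTmpl.IPAddresses)
}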
	I0318 13:49:43.649771 1157887 start.go:364] duration metric: took 4m1.881584436s to acquireMachinesLock for "default-k8s-diff-port-569210"
	I0318 13:49:43.649850 1157887 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:49:43.649868 1157887 fix.go:54] fixHost starting: 
	I0318 13:49:43.650335 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:49:43.650378 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:49:43.668606 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36261
	I0318 13:49:43.669107 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:49:43.669721 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:49:43.669755 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:49:43.670092 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:49:43.670269 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:49:43.670427 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetState
	I0318 13:49:43.671973 1157887 fix.go:112] recreateIfNeeded on default-k8s-diff-port-569210: state=Stopped err=<nil>
	I0318 13:49:43.672021 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	W0318 13:49:43.672150 1157887 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:49:43.673832 1157887 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-569210" ...
	I0318 13:49:40.621208 1157416 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (1.776156882s)
	I0318 13:49:40.621252 1157416 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0318 13:49:40.621281 1157416 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0318 13:49:40.621322 1157416 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0318 13:49:41.582256 1157416 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0318 13:49:41.582316 1157416 cache_images.go:123] Successfully loaded all cached images
	I0318 13:49:41.582324 1157416 cache_images.go:92] duration metric: took 16.405930257s to LoadCachedImages
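The 16.4s LoadCachedImages sequence above repeats one pattern per image: ask the runtime whether the image already exists (podman image inspect), stat the cached tarball on the VM to decide whether it still needs to be copied, then podman load it. A schematic outline of that per-image flow; the runner type is a stand-in that only prints commands and the boolean inputs replace the inspect/stat results, so this is an illustration rather than minikube's cache_images.go:

package main

import "fmt"

// runner stands in for minikube's ssh_runner; Run pretends every command succeeds.
type runner struct{}

func (runner) Run(cmd string) error {
	fmt.Println("ssh:", cmd)
	return nil
}

// loadCachedImage mirrors the per-image flow in the log: skip if the runtime
// already has the image, ensure the tarball is on the VM, then load it.
func loadCachedImage(r runner, image, tarball string, existsInRuntime, tarballOnVM bool) error {
	if existsInRuntime {
		return nil // nothing to transfer or load
	}
	if !tarballOnVM {
		// In minikube this is a copy of the cached tarball onto the VM; here we just note it.
		if err := r.Run("copy " + tarball + " (cache -> VM)"); err != nil {
			return err
		}
	} else {
		fmt.Println("copy: skipping", tarball, "(exists)")
	}
	return r.Run("sudo podman load -i " + tarball)
}

func main() {
	_ = loadCachedImage(runner{},
		"registry.k8s.io/etcd:3.5.10-0",
		"/var/lib/minikube/images/etcd_3.5.10-0",
		false, true)
}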
	I0318 13:49:41.582341 1157416 kubeadm.go:928] updating node { 192.168.39.7 8443 v1.29.0-rc.2 crio true true} ...
	I0318 13:49:41.582550 1157416 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-537236 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-537236 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 13:49:41.582663 1157416 ssh_runner.go:195] Run: crio config
	I0318 13:49:41.635043 1157416 cni.go:84] Creating CNI manager for ""
	I0318 13:49:41.635074 1157416 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:49:41.635093 1157416 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 13:49:41.635128 1157416 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.7 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-537236 NodeName:no-preload-537236 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 13:49:41.635322 1157416 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.7
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-537236"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.7
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.7"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 13:49:41.635446 1157416 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0318 13:49:41.647072 1157416 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 13:49:41.647148 1157416 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 13:49:41.657448 1157416 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0318 13:49:41.675819 1157416 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0318 13:49:41.693989 1157416 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0318 13:49:41.714954 1157416 ssh_runner.go:195] Run: grep 192.168.39.7	control-plane.minikube.internal$ /etc/hosts
	I0318 13:49:41.719161 1157416 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.7	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
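Note: the bash one-liner above updates /etc/hosts idempotently: it strips any existing control-plane.minikube.internal line, appends the current mapping, and copies the temp file back over /etc/hosts. A minimal Go sketch of the same upsert (assumes the caller can already write /etc/hosts; the function name is illustrative):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry drops any line that already maps name and appends a
// fresh "ip<TAB>name" entry, mirroring the grep/echo/cp pipeline in the log.
func upsertHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop any stale mapping for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHostsEntry("/etc/hosts", "192.168.39.7", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}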
	I0318 13:49:41.732228 1157416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:49:41.871286 1157416 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:49:41.892827 1157416 certs.go:68] Setting up /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236 for IP: 192.168.39.7
	I0318 13:49:41.892850 1157416 certs.go:194] generating shared ca certs ...
	I0318 13:49:41.892868 1157416 certs.go:226] acquiring lock for ca certs: {Name:mka24dbce78b8549c3bcfe7842863cce2dcda207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:49:41.893054 1157416 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key
	I0318 13:49:41.893110 1157416 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key
	I0318 13:49:41.893125 1157416 certs.go:256] generating profile certs ...
	I0318 13:49:41.893246 1157416 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236/client.key
	I0318 13:49:41.893317 1157416 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236/apiserver.key.844e83a6
	I0318 13:49:41.893366 1157416 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236/proxy-client.key
	I0318 13:49:41.893482 1157416 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem (1338 bytes)
	W0318 13:49:41.893518 1157416 certs.go:480] ignoring /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136_empty.pem, impossibly tiny 0 bytes
	I0318 13:49:41.893528 1157416 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 13:49:41.893552 1157416 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem (1078 bytes)
	I0318 13:49:41.893573 1157416 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem (1123 bytes)
	I0318 13:49:41.893594 1157416 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem (1679 bytes)
	I0318 13:49:41.893628 1157416 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:49:41.894503 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 13:49:41.942278 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 13:49:41.978436 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 13:49:42.007161 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 13:49:42.036410 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0318 13:49:42.073179 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 13:49:42.098201 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 13:49:42.131599 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 13:49:42.159159 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem --> /usr/share/ca-certificates/1114136.pem (1338 bytes)
	I0318 13:49:42.186290 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /usr/share/ca-certificates/11141362.pem (1708 bytes)
	I0318 13:49:42.214362 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 13:49:42.241240 1157416 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 13:49:42.260511 1157416 ssh_runner.go:195] Run: openssl version
	I0318 13:49:42.267047 1157416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1114136.pem && ln -fs /usr/share/ca-certificates/1114136.pem /etc/ssl/certs/1114136.pem"
	I0318 13:49:42.278582 1157416 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1114136.pem
	I0318 13:49:42.283566 1157416 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:26 /usr/share/ca-certificates/1114136.pem
	I0318 13:49:42.283609 1157416 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1114136.pem
	I0318 13:49:42.289658 1157416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1114136.pem /etc/ssl/certs/51391683.0"
	I0318 13:49:42.300954 1157416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11141362.pem && ln -fs /usr/share/ca-certificates/11141362.pem /etc/ssl/certs/11141362.pem"
	I0318 13:49:42.312828 1157416 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11141362.pem
	I0318 13:49:42.319182 1157416 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:26 /usr/share/ca-certificates/11141362.pem
	I0318 13:49:42.319251 1157416 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11141362.pem
	I0318 13:49:42.325767 1157416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11141362.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 13:49:42.337544 1157416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 13:49:42.349053 1157416 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:49:42.354197 1157416 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:49:42.354249 1157416 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:49:42.361200 1157416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 13:49:42.374825 1157416 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:49:42.380098 1157416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 13:49:42.387161 1157416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 13:49:42.393702 1157416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 13:49:42.400193 1157416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 13:49:42.406243 1157416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 13:49:42.412423 1157416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
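Note: each "openssl x509 -checkend 86400" above asks whether the certificate expires within the next 24 hours; a failure here would force the certs to be regenerated instead of reused. A rough Go equivalent using crypto/x509 (the path in main is one of the files checked above; the helper itself is illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path
// expires within d, mirroring `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}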
	I0318 13:49:42.418599 1157416 kubeadm.go:391] StartCluster: {Name:no-preload-537236 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.0-rc.2 ClusterName:no-preload-537236 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:49:42.418747 1157416 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 13:49:42.418785 1157416 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:49:42.468980 1157416 cri.go:89] found id: ""
	I0318 13:49:42.469088 1157416 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 13:49:42.481101 1157416 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 13:49:42.481130 1157416 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 13:49:42.481137 1157416 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 13:49:42.481190 1157416 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 13:49:42.493014 1157416 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:49:42.494041 1157416 kubeconfig.go:125] found "no-preload-537236" server: "https://192.168.39.7:8443"
	I0318 13:49:42.496519 1157416 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 13:49:42.507415 1157416 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.7
	I0318 13:49:42.507448 1157416 kubeadm.go:1154] stopping kube-system containers ...
	I0318 13:49:42.507460 1157416 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 13:49:42.507513 1157416 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:49:42.554791 1157416 cri.go:89] found id: ""
	I0318 13:49:42.554859 1157416 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 13:49:42.574054 1157416 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:49:42.584928 1157416 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:49:42.584955 1157416 kubeadm.go:156] found existing configuration files:
	
	I0318 13:49:42.585009 1157416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:49:42.594987 1157416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:49:42.595045 1157416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:49:42.605058 1157416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:49:42.614968 1157416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:49:42.615042 1157416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:49:42.625169 1157416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:49:42.634838 1157416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:49:42.634905 1157416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:49:42.644785 1157416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:49:42.654196 1157416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:49:42.654254 1157416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:49:42.663757 1157416 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:49:42.673956 1157416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:42.792913 1157416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:43.799012 1157416 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.006050828s)
	I0318 13:49:43.799075 1157416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:44.061808 1157416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:44.189349 1157416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:44.329800 1157416 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:49:44.329897 1157416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:44.829990 1157416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:43.007024 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 13:49:43.033952 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0318 13:49:43.060218 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 13:49:43.086087 1157708 provision.go:87] duration metric: took 398.861833ms to configureAuth
	I0318 13:49:43.086116 1157708 buildroot.go:189] setting minikube options for container-runtime
	I0318 13:49:43.086326 1157708 config.go:182] Loaded profile config "old-k8s-version-909137": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0318 13:49:43.086442 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:43.089200 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.089534 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:43.089562 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.089758 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:43.089965 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:43.090134 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:43.090286 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:43.090501 1157708 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:43.090718 1157708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.135 22 <nil> <nil>}
	I0318 13:49:43.090744 1157708 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 13:49:43.401681 1157708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 13:49:43.401715 1157708 machine.go:97] duration metric: took 1.084258164s to provisionDockerMachine
	I0318 13:49:43.401728 1157708 start.go:293] postStartSetup for "old-k8s-version-909137" (driver="kvm2")
	I0318 13:49:43.401739 1157708 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 13:49:43.401759 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:49:43.402073 1157708 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 13:49:43.402116 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:43.404775 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.405164 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:43.405192 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.405335 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:43.405525 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:43.405740 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:43.405884 1157708 sshutil.go:53] new ssh client: &{IP:192.168.72.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/id_rsa Username:docker}
	I0318 13:49:43.493000 1157708 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 13:49:43.497705 1157708 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 13:49:43.497740 1157708 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/addons for local assets ...
	I0318 13:49:43.497818 1157708 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/files for local assets ...
	I0318 13:49:43.497931 1157708 filesync.go:149] local asset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> 11141362.pem in /etc/ssl/certs
	I0318 13:49:43.498058 1157708 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 13:49:43.509185 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:49:43.535401 1157708 start.go:296] duration metric: took 133.657179ms for postStartSetup
	I0318 13:49:43.535454 1157708 fix.go:56] duration metric: took 20.033670705s for fixHost
	I0318 13:49:43.535482 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:43.538464 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.538964 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:43.538998 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.539178 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:43.539386 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:43.539528 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:43.539702 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:43.539899 1157708 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:43.540120 1157708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.135 22 <nil> <nil>}
	I0318 13:49:43.540133 1157708 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 13:49:43.649578 1157708 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710769783.596310102
	
	I0318 13:49:43.649610 1157708 fix.go:216] guest clock: 1710769783.596310102
	I0318 13:49:43.649621 1157708 fix.go:229] Guest: 2024-03-18 13:49:43.596310102 +0000 UTC Remote: 2024-03-18 13:49:43.535459129 +0000 UTC m=+270.592972067 (delta=60.850973ms)
	I0318 13:49:43.649656 1157708 fix.go:200] guest clock delta is within tolerance: 60.850973ms
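Note: the "date +%s.%N" probe reads the guest's wall clock; minikube compares it against the host time recorded just before the SSH round-trip and only adjusts the guest clock when the skew exceeds its tolerance (here the 60.85ms delta passes). A toy Go version of that comparison; the tolerance value below is an assumption, not minikube's actual constant:

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK compares a guest timestamp against the local clock and
// reports whether the absolute skew is within tol.
func clockDeltaOK(guest time.Time, tol time.Duration) (time.Duration, bool) {
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tol
}

func main() {
	guest := time.Unix(1710769783, 596310102) // parsed from `date +%s.%N` output above
	delta, ok := clockDeltaOK(guest, 2*time.Second) // assumed tolerance
	fmt.Printf("delta=%v within tolerance: %v\n", delta, ok)
}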
	I0318 13:49:43.649663 1157708 start.go:83] releasing machines lock for "old-k8s-version-909137", held for 20.147918331s
	I0318 13:49:43.649689 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:49:43.650002 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetIP
	I0318 13:49:43.652712 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.653114 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:43.653148 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.653278 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:49:43.653873 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:49:43.654112 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:49:43.654198 1157708 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 13:49:43.654264 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:43.654333 1157708 ssh_runner.go:195] Run: cat /version.json
	I0318 13:49:43.654369 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:43.657281 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.657390 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.657741 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:43.657811 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:43.657830 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.657855 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:43.657918 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.658016 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:43.658065 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:43.658199 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:43.658245 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:43.658326 1157708 sshutil.go:53] new ssh client: &{IP:192.168.72.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/id_rsa Username:docker}
	I0318 13:49:43.658411 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:43.658574 1157708 sshutil.go:53] new ssh client: &{IP:192.168.72.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/id_rsa Username:docker}
	I0318 13:49:43.737787 1157708 ssh_runner.go:195] Run: systemctl --version
	I0318 13:49:43.769157 1157708 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 13:49:43.920376 1157708 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 13:49:43.928165 1157708 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 13:49:43.928253 1157708 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 13:49:43.946102 1157708 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 13:49:43.946133 1157708 start.go:494] detecting cgroup driver to use...
	I0318 13:49:43.946210 1157708 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 13:49:43.963482 1157708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:49:43.978540 1157708 docker.go:217] disabling cri-docker service (if available) ...
	I0318 13:49:43.978613 1157708 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 13:49:43.999525 1157708 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 13:49:44.021242 1157708 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 13:49:44.198165 1157708 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 13:49:44.363408 1157708 docker.go:233] disabling docker service ...
	I0318 13:49:44.363474 1157708 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 13:49:44.383527 1157708 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 13:49:44.398888 1157708 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 13:49:44.547711 1157708 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 13:49:44.662762 1157708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 13:49:44.678786 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:49:44.702931 1157708 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0318 13:49:44.703004 1157708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:49:44.721453 1157708 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 13:49:44.721519 1157708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:49:44.739487 1157708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:49:44.757379 1157708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
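Note: the sed invocations above edit the CRI-O drop-in to pin the pause image, force the cgroupfs cgroup manager, and reset conmon_cgroup to "pod". A small Go sketch of the first two rewrites using regexp (the drop-in path is taken from the log; everything else is illustrative, and sed is what minikube actually runs):

package main

import (
	"os"
	"regexp"
)

// rewriteCrioConf applies the same line-level substitutions the log performs
// with sed: pin the pause image and force the cgroupfs cgroup manager.
func rewriteCrioConf(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.2"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	return os.WriteFile(path, out, 0644)
}

func main() {
	_ = rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf")
}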
	I0318 13:49:44.777508 1157708 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 13:49:44.798788 1157708 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 13:49:44.814280 1157708 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 13:49:44.814383 1157708 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 13:49:44.836507 1157708 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 13:49:44.852614 1157708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:49:44.994352 1157708 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 13:49:45.184815 1157708 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 13:49:45.184907 1157708 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 13:49:45.190649 1157708 start.go:562] Will wait 60s for crictl version
	I0318 13:49:45.190724 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:45.195265 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 13:49:45.242737 1157708 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 13:49:45.242850 1157708 ssh_runner.go:195] Run: crio --version
	I0318 13:49:45.288154 1157708 ssh_runner.go:195] Run: crio --version
	I0318 13:49:45.331441 1157708 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0318 13:49:43.675531 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .Start
	I0318 13:49:43.675763 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Ensuring networks are active...
	I0318 13:49:43.676642 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Ensuring network default is active
	I0318 13:49:43.677014 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Ensuring network mk-default-k8s-diff-port-569210 is active
	I0318 13:49:43.677510 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Getting domain xml...
	I0318 13:49:43.678319 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Creating domain...
	I0318 13:49:45.002977 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting to get IP...
	I0318 13:49:45.003870 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:45.004406 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:45.004499 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:45.004392 1158648 retry.go:31] will retry after 294.950888ms: waiting for machine to come up
	I0318 13:49:45.301264 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:45.301835 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:45.301863 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:45.301747 1158648 retry.go:31] will retry after 291.810051ms: waiting for machine to come up
	I0318 13:49:45.595571 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:45.596720 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:45.596832 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:45.596786 1158648 retry.go:31] will retry after 390.232445ms: waiting for machine to come up
	I0318 13:49:45.988661 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:45.989506 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:45.989534 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:45.989393 1158648 retry.go:31] will retry after 487.148784ms: waiting for machine to come up
	I0318 13:49:46.477982 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:46.478667 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:46.478701 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:46.478600 1158648 retry.go:31] will retry after 474.795485ms: waiting for machine to come up
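Note: each "will retry after ..." line comes from a retry helper that re-reads the libvirt DHCP leases for the VM's MAC with a jittered, growing delay until an IP shows up. A simplified Go sketch of that pattern (the lookup closure, delay bounds, and timeout are placeholders, not minikube's exact values):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil keeps calling fn with a jittered, growing delay until it
// succeeds or the deadline passes, roughly like the retry.go lines above.
func retryUntil(timeout time.Duration, fn func() (string, error)) (string, error) {
	deadline := time.Now().Add(timeout)
	wait := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := fn(); err == nil {
			return ip, nil
		}
		sleep := wait + time.Duration(rand.Int63n(int64(wait)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		wait += 100 * time.Millisecond
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	ip, err := retryUntil(2*time.Second, func() (string, error) {
		return "", errors.New("no DHCP lease yet") // placeholder lookup
	})
	fmt.Println(ip, err)
}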
	I0318 13:49:45.332975 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetIP
	I0318 13:49:45.336274 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:45.336701 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:45.336753 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:45.336985 1157708 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0318 13:49:45.343147 1157708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:49:45.361840 1157708 kubeadm.go:877] updating cluster {Name:old-k8s-version-909137 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-909137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.135 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 13:49:45.361982 1157708 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0318 13:49:45.362040 1157708 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:49:45.419490 1157708 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 13:49:45.419587 1157708 ssh_runner.go:195] Run: which lz4
	I0318 13:49:45.424689 1157708 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 13:49:45.431110 1157708 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 13:49:45.431155 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0318 13:49:47.510385 1157708 crio.go:444] duration metric: took 2.085724633s to copy over tarball
	I0318 13:49:47.510483 1157708 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 13:49:45.330925 1157416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:45.364854 1157416 api_server.go:72] duration metric: took 1.035057096s to wait for apiserver process to appear ...
	I0318 13:49:45.364883 1157416 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:49:45.364927 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:49:45.365577 1157416 api_server.go:269] stopped: https://192.168.39.7:8443/healthz: Get "https://192.168.39.7:8443/healthz": dial tcp 192.168.39.7:8443: connect: connection refused
	I0318 13:49:45.865126 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:49:49.135799 1157416 api_server.go:279] https://192.168.39.7:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 13:49:49.135840 1157416 api_server.go:103] status: https://192.168.39.7:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 13:49:49.135862 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:49:49.154112 1157416 api_server.go:279] https://192.168.39.7:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 13:49:49.154142 1157416 api_server.go:103] status: https://192.168.39.7:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 13:49:49.365566 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:49:49.375812 1157416 api_server.go:279] https://192.168.39.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:49:49.375862 1157416 api_server.go:103] status: https://192.168.39.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:49:49.865027 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:49:49.873132 1157416 api_server.go:279] https://192.168.39.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:49:49.873176 1157416 api_server.go:103] status: https://192.168.39.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
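Note: the /healthz probes above progress from connection refused, to 403 for the anonymous user, to 500 while the rbac/bootstrap-roles and priority-class post-start hooks finish; the loop keeps polling until a plain 200 comes back. A bare-bones Go poller in the same spirit (TLS verification is skipped here purely to keep the sketch short; minikube itself talks to the apiserver with the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the attempts run out, printing the body on failure like the log does.
func waitForHealthz(url string, attempts int) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < attempts; i++ {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		} else {
			fmt.Println("healthz not reachable yet:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	_ = waitForHealthz("https://192.168.39.7:8443/healthz", 20)
}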
	I0318 13:49:50.365178 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:49:50.371461 1157416 api_server.go:279] https://192.168.39.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:49:50.371506 1157416 api_server.go:103] status: https://192.168.39.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:49:50.865038 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:49:50.870329 1157416 api_server.go:279] https://192.168.39.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:49:50.870383 1157416 api_server.go:103] status: https://192.168.39.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:49:51.365030 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:49:51.370284 1157416 api_server.go:279] https://192.168.39.7:8443/healthz returned 200:
	ok
	I0318 13:49:51.379599 1157416 api_server.go:141] control plane version: v1.29.0-rc.2
	I0318 13:49:51.379633 1157416 api_server.go:131] duration metric: took 6.014741397s to wait for apiserver health ...
	I0318 13:49:51.379645 1157416 cni.go:84] Creating CNI manager for ""
	I0318 13:49:51.379654 1157416 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:49:51.582399 1157416 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 13:49:46.955128 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:46.955620 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:46.955649 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:46.955579 1158648 retry.go:31] will retry after 817.278037ms: waiting for machine to come up
	I0318 13:49:47.774954 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:47.775449 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:47.775480 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:47.775391 1158648 retry.go:31] will retry after 1.032655883s: waiting for machine to come up
	I0318 13:49:48.810156 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:48.810699 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:48.810730 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:48.810644 1158648 retry.go:31] will retry after 1.1441145s: waiting for machine to come up
	I0318 13:49:49.956702 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:49.957179 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:49.957214 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:49.957105 1158648 retry.go:31] will retry after 1.428592019s: waiting for machine to come up
	I0318 13:49:51.387025 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:51.387627 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:51.387660 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:51.387555 1158648 retry.go:31] will retry after 2.266795202s: waiting for machine to come up
	I0318 13:49:50.947045 1157708 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.436514023s)
	I0318 13:49:50.947084 1157708 crio.go:451] duration metric: took 3.436661543s to extract the tarball
	I0318 13:49:50.947095 1157708 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 13:49:51.007406 1157708 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:49:51.048060 1157708 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 13:49:51.048091 1157708 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 13:49:51.048181 1157708 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:49:51.048228 1157708 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 13:49:51.048287 1157708 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0318 13:49:51.048346 1157708 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0318 13:49:51.048398 1157708 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 13:49:51.048432 1157708 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0318 13:49:51.048232 1157708 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 13:49:51.048183 1157708 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 13:49:51.049960 1157708 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0318 13:49:51.050268 1157708 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:49:51.050288 1157708 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0318 13:49:51.050355 1157708 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 13:49:51.050594 1157708 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 13:49:51.050627 1157708 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0318 13:49:51.050584 1157708 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 13:49:51.051230 1157708 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 13:49:51.219906 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0318 13:49:51.220734 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 13:49:51.235283 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0318 13:49:51.236445 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0318 13:49:51.246700 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0318 13:49:51.251299 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0318 13:49:51.311054 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0318 13:49:51.311292 1157708 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0318 13:49:51.311336 1157708 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0318 13:49:51.311389 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:51.343594 1157708 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0318 13:49:51.343649 1157708 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 13:49:51.343739 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:51.391608 1157708 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0318 13:49:51.391657 1157708 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 13:49:51.391706 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:51.448987 1157708 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0318 13:49:51.449029 1157708 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0318 13:49:51.449058 1157708 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 13:49:51.449061 1157708 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0318 13:49:51.449088 1157708 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0318 13:49:51.449103 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:51.449035 1157708 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0318 13:49:51.449135 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0318 13:49:51.449178 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:51.449207 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 13:49:51.449245 1157708 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0318 13:49:51.449267 1157708 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 13:49:51.449317 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:51.449210 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:51.449223 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0318 13:49:51.469614 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0318 13:49:51.469613 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0318 13:49:51.562455 1157708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0318 13:49:51.562506 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0318 13:49:51.564170 1157708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0318 13:49:51.564269 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0318 13:49:51.578471 1157708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0318 13:49:51.615689 1157708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0318 13:49:51.615708 1157708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0318 13:49:51.657287 1157708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0318 13:49:51.657361 1157708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0318 13:49:51.956746 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:49:52.106933 1157708 cache_images.go:92] duration metric: took 1.058823514s to LoadCachedImages
	W0318 13:49:52.107046 1157708 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0318 13:49:52.107064 1157708 kubeadm.go:928] updating node { 192.168.72.135 8443 v1.20.0 crio true true} ...
	I0318 13:49:52.107259 1157708 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-909137 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.135
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-909137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 13:49:52.107348 1157708 ssh_runner.go:195] Run: crio config
	I0318 13:49:52.163493 1157708 cni.go:84] Creating CNI manager for ""
	I0318 13:49:52.163526 1157708 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:49:52.163546 1157708 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 13:49:52.163572 1157708 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.135 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-909137 NodeName:old-k8s-version-909137 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.135"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.135 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0318 13:49:52.163740 1157708 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.135
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-909137"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.135
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.135"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 13:49:52.163818 1157708 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0318 13:49:52.175668 1157708 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 13:49:52.175740 1157708 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 13:49:52.186745 1157708 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0318 13:49:52.209877 1157708 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 13:49:52.232921 1157708 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0318 13:49:52.256571 1157708 ssh_runner.go:195] Run: grep 192.168.72.135	control-plane.minikube.internal$ /etc/hosts
	I0318 13:49:52.262776 1157708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.135	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:49:52.278435 1157708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:49:52.422705 1157708 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:49:52.443710 1157708 certs.go:68] Setting up /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137 for IP: 192.168.72.135
	I0318 13:49:52.443740 1157708 certs.go:194] generating shared ca certs ...
	I0318 13:49:52.443760 1157708 certs.go:226] acquiring lock for ca certs: {Name:mka24dbce78b8549c3bcfe7842863cce2dcda207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:49:52.443951 1157708 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key
	I0318 13:49:52.444009 1157708 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key
	I0318 13:49:52.444023 1157708 certs.go:256] generating profile certs ...
	I0318 13:49:52.444155 1157708 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/client.key
	I0318 13:49:52.444239 1157708 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/apiserver.key.e9806bd6
	I0318 13:49:52.444303 1157708 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/proxy-client.key
	I0318 13:49:52.444492 1157708 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem (1338 bytes)
	W0318 13:49:52.444532 1157708 certs.go:480] ignoring /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136_empty.pem, impossibly tiny 0 bytes
	I0318 13:49:52.444548 1157708 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 13:49:52.444585 1157708 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem (1078 bytes)
	I0318 13:49:52.444633 1157708 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem (1123 bytes)
	I0318 13:49:52.444672 1157708 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem (1679 bytes)
	I0318 13:49:52.444729 1157708 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:49:52.445363 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 13:49:52.506720 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 13:49:52.550057 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 13:49:52.586845 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 13:49:52.627933 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0318 13:49:52.681479 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 13:49:52.722052 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 13:49:52.755021 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 13:49:52.782181 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 13:49:52.808269 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem --> /usr/share/ca-certificates/1114136.pem (1338 bytes)
	I0318 13:49:52.835041 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /usr/share/ca-certificates/11141362.pem (1708 bytes)
	I0318 13:49:52.863776 1157708 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 13:49:52.883579 1157708 ssh_runner.go:195] Run: openssl version
	I0318 13:49:52.889846 1157708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 13:49:52.902288 1157708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:49:52.908241 1157708 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:49:52.908302 1157708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:49:52.915392 1157708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 13:49:52.928374 1157708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1114136.pem && ln -fs /usr/share/ca-certificates/1114136.pem /etc/ssl/certs/1114136.pem"
	I0318 13:49:52.941444 1157708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1114136.pem
	I0318 13:49:52.946463 1157708 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:26 /usr/share/ca-certificates/1114136.pem
	I0318 13:49:52.946514 1157708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1114136.pem
	I0318 13:49:52.953447 1157708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1114136.pem /etc/ssl/certs/51391683.0"
	I0318 13:49:52.966231 1157708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11141362.pem && ln -fs /usr/share/ca-certificates/11141362.pem /etc/ssl/certs/11141362.pem"
	I0318 13:49:52.977986 1157708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11141362.pem
	I0318 13:49:52.982748 1157708 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:26 /usr/share/ca-certificates/11141362.pem
	I0318 13:49:52.982809 1157708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11141362.pem
	I0318 13:49:52.988715 1157708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11141362.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 13:49:51.626774 1157416 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 13:49:51.642685 1157416 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 13:49:51.669902 1157416 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 13:49:51.759474 1157416 system_pods.go:59] 8 kube-system pods found
	I0318 13:49:51.759519 1157416 system_pods.go:61] "coredns-76f75df574-kxzfm" [d0aad76d-f135-4d4a-a2f5-117707b4b2f4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 13:49:51.759530 1157416 system_pods.go:61] "etcd-no-preload-537236" [d02ad01c-1b16-4b97-be18-237b1cbfe3aa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 13:49:51.759539 1157416 system_pods.go:61] "kube-apiserver-no-preload-537236" [00b05050-229b-47f4-9af2-12be1711200a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 13:49:51.759548 1157416 system_pods.go:61] "kube-controller-manager-no-preload-537236" [3e7b86df-4111-4bd9-8925-a22cf12e10ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 13:49:51.759552 1157416 system_pods.go:61] "kube-proxy-5dspp" [adee19a0-eeb6-438f-a55d-30f1e1d87ef6] Running
	I0318 13:49:51.759557 1157416 system_pods.go:61] "kube-scheduler-no-preload-537236" [17628d51-80f5-4985-8ddb-151cab8f8c5d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 13:49:51.759562 1157416 system_pods.go:61] "metrics-server-57f55c9bc5-hhh5m" [282de489-beee-47a9-bd29-5da43cf70146] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:49:51.759565 1157416 system_pods.go:61] "storage-provisioner" [97d3de68-0863-4bba-9cb1-2ce98d791935] Running
	I0318 13:49:51.759578 1157416 system_pods.go:74] duration metric: took 89.654007ms to wait for pod list to return data ...
	I0318 13:49:51.759591 1157416 node_conditions.go:102] verifying NodePressure condition ...
	I0318 13:49:51.764164 1157416 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:49:51.764191 1157416 node_conditions.go:123] node cpu capacity is 2
	I0318 13:49:51.764204 1157416 node_conditions.go:105] duration metric: took 4.607295ms to run NodePressure ...
	I0318 13:49:51.764227 1157416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:52.645812 1157416 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 13:49:52.653573 1157416 kubeadm.go:733] kubelet initialised
	I0318 13:49:52.653602 1157416 kubeadm.go:734] duration metric: took 7.75557ms waiting for restarted kubelet to initialise ...
	I0318 13:49:52.653614 1157416 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:49:52.662179 1157416 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-kxzfm" in "kube-system" namespace to be "Ready" ...
	I0318 13:49:54.678656 1157416 pod_ready.go:102] pod "coredns-76f75df574-kxzfm" in "kube-system" namespace has status "Ready":"False"
	I0318 13:49:53.656476 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:53.656913 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:53.656943 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:53.656870 1158648 retry.go:31] will retry after 2.341702781s: waiting for machine to come up
	I0318 13:49:56.001662 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:56.002163 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:56.002188 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:56.002106 1158648 retry.go:31] will retry after 2.885262489s: waiting for machine to come up
	I0318 13:49:53.000141 1157708 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:49:53.005021 1157708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 13:49:53.011156 1157708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 13:49:53.018329 1157708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 13:49:53.025687 1157708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 13:49:53.032199 1157708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 13:49:53.039048 1157708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 13:49:53.045789 1157708 kubeadm.go:391] StartCluster: {Name:old-k8s-version-909137 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-909137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.135 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:49:53.045882 1157708 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 13:49:53.045931 1157708 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:49:53.085682 1157708 cri.go:89] found id: ""
	I0318 13:49:53.085788 1157708 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 13:49:53.098063 1157708 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 13:49:53.098091 1157708 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 13:49:53.098098 1157708 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 13:49:53.098153 1157708 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 13:49:53.109692 1157708 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:49:53.110853 1157708 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-909137" does not appear in /home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 13:49:53.111862 1157708 kubeconfig.go:62] /home/jenkins/minikube-integration/18429-1106816/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-909137" cluster setting kubeconfig missing "old-k8s-version-909137" context setting]
	I0318 13:49:53.113334 1157708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/kubeconfig: {Name:mk9c139f2702214315ee08dd7c5d02f739047458 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:49:53.115135 1157708 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 13:49:53.125910 1157708 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.135
	I0318 13:49:53.125949 1157708 kubeadm.go:1154] stopping kube-system containers ...
	I0318 13:49:53.125965 1157708 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 13:49:53.126029 1157708 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:49:53.172181 1157708 cri.go:89] found id: ""
	I0318 13:49:53.172268 1157708 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 13:49:53.189585 1157708 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:49:53.200744 1157708 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:49:53.200768 1157708 kubeadm.go:156] found existing configuration files:
	
	I0318 13:49:53.200811 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:49:53.211176 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:49:53.211250 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:49:53.221744 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:49:53.231342 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:49:53.231404 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:49:53.242162 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:49:53.252408 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:49:53.252480 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:49:53.262690 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:49:53.272829 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:49:53.272903 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:49:53.283287 1157708 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:49:53.294124 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:53.437482 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:54.297415 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:54.588919 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:54.758204 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:54.863030 1157708 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:49:54.863140 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:55.363708 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:55.863301 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:56.364064 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:56.863896 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:57.363240 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:57.863621 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:57.212652 1157416 pod_ready.go:102] pod "coredns-76f75df574-kxzfm" in "kube-system" namespace has status "Ready":"False"
	I0318 13:49:57.669562 1157416 pod_ready.go:92] pod "coredns-76f75df574-kxzfm" in "kube-system" namespace has status "Ready":"True"
	I0318 13:49:57.669584 1157416 pod_ready.go:81] duration metric: took 5.007366512s for pod "coredns-76f75df574-kxzfm" in "kube-system" namespace to be "Ready" ...
	I0318 13:49:57.669597 1157416 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:49:58.176528 1157416 pod_ready.go:92] pod "etcd-no-preload-537236" in "kube-system" namespace has status "Ready":"True"
	I0318 13:49:58.176557 1157416 pod_ready.go:81] duration metric: took 506.95201ms for pod "etcd-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:49:58.176570 1157416 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:49:58.888400 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:58.888706 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:58.888742 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:58.888681 1158648 retry.go:31] will retry after 4.094701536s: waiting for machine to come up
	I0318 13:49:58.363294 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:58.864051 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:59.363586 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:59.863802 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:00.363862 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:00.864277 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:01.363381 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:01.864307 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:02.363278 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:02.863315 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:04.309987 1157263 start.go:364] duration metric: took 57.988518292s to acquireMachinesLock for "embed-certs-173036"
	I0318 13:50:04.310046 1157263 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:50:04.310062 1157263 fix.go:54] fixHost starting: 
	I0318 13:50:04.310469 1157263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:50:04.310506 1157263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:50:04.330585 1157263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41957
	I0318 13:50:04.331049 1157263 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:50:04.331648 1157263 main.go:141] libmachine: Using API Version  1
	I0318 13:50:04.331684 1157263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:50:04.332066 1157263 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:50:04.332316 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:50:04.332513 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetState
	I0318 13:50:04.334091 1157263 fix.go:112] recreateIfNeeded on embed-certs-173036: state=Stopped err=<nil>
	I0318 13:50:04.334117 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	W0318 13:50:04.334299 1157263 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:50:04.336146 1157263 out.go:177] * Restarting existing kvm2 VM for "embed-certs-173036" ...
	I0318 13:50:00.184168 1157416 pod_ready.go:102] pod "kube-apiserver-no-preload-537236" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:01.183846 1157416 pod_ready.go:92] pod "kube-apiserver-no-preload-537236" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:01.183872 1157416 pod_ready.go:81] duration metric: took 3.007292631s for pod "kube-apiserver-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:01.183884 1157416 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:03.206725 1157416 pod_ready.go:102] pod "kube-controller-manager-no-preload-537236" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:04.691357 1157416 pod_ready.go:92] pod "kube-controller-manager-no-preload-537236" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:04.691391 1157416 pod_ready.go:81] duration metric: took 3.507497259s for pod "kube-controller-manager-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:04.691410 1157416 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5dspp" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:04.696593 1157416 pod_ready.go:92] pod "kube-proxy-5dspp" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:04.696618 1157416 pod_ready.go:81] duration metric: took 5.198628ms for pod "kube-proxy-5dspp" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:04.696627 1157416 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:04.700977 1157416 pod_ready.go:92] pod "kube-scheduler-no-preload-537236" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:04.700995 1157416 pod_ready.go:81] duration metric: took 4.36095ms for pod "kube-scheduler-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:04.701006 1157416 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:02.985340 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:02.985804 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has current primary IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:02.985818 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Found IP for machine: 192.168.61.3
	I0318 13:50:02.985828 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Reserving static IP address...
	I0318 13:50:02.986233 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-569210", mac: "52:54:00:4d:48:26", ip: "192.168.61.3"} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:02.986292 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | skip adding static IP to network mk-default-k8s-diff-port-569210 - found existing host DHCP lease matching {name: "default-k8s-diff-port-569210", mac: "52:54:00:4d:48:26", ip: "192.168.61.3"}
	I0318 13:50:02.986307 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Reserved static IP address: 192.168.61.3
	I0318 13:50:02.986321 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for SSH to be available...
	I0318 13:50:02.986337 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | Getting to WaitForSSH function...
	I0318 13:50:02.988609 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:02.988962 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:02.988995 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:02.989209 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | Using SSH client type: external
	I0318 13:50:02.989235 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | Using SSH private key: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa (-rw-------)
	I0318 13:50:02.989272 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 13:50:02.989293 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | About to run SSH command:
	I0318 13:50:02.989306 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | exit 0
	I0318 13:50:03.112557 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | SSH cmd err, output: <nil>: 
	I0318 13:50:03.112907 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetConfigRaw
	I0318 13:50:03.113605 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetIP
	I0318 13:50:03.116140 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.116569 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:03.116599 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.116858 1157887 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/config.json ...
	I0318 13:50:03.117065 1157887 machine.go:94] provisionDockerMachine start ...
	I0318 13:50:03.117091 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:50:03.117296 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:03.119506 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.119861 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:03.119891 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.120015 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:03.120212 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.120429 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.120608 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:03.120798 1157887 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:03.120995 1157887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0318 13:50:03.121010 1157887 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 13:50:03.221645 1157887 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 13:50:03.221693 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetMachineName
	I0318 13:50:03.221990 1157887 buildroot.go:166] provisioning hostname "default-k8s-diff-port-569210"
	I0318 13:50:03.222027 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetMachineName
	I0318 13:50:03.222257 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:03.225134 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.225543 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:03.225568 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.225714 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:03.226022 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.226225 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.226400 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:03.226595 1157887 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:03.226870 1157887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0318 13:50:03.226893 1157887 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-569210 && echo "default-k8s-diff-port-569210" | sudo tee /etc/hostname
	I0318 13:50:03.350362 1157887 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-569210
	
	I0318 13:50:03.350398 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:03.353307 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.353700 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:03.353737 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.353911 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:03.354111 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.354283 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.354413 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:03.354600 1157887 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:03.354805 1157887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0318 13:50:03.354824 1157887 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-569210' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-569210/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-569210' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 13:50:03.471084 1157887 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:50:03.471120 1157887 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18429-1106816/.minikube CaCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18429-1106816/.minikube}
	I0318 13:50:03.471159 1157887 buildroot.go:174] setting up certificates
	I0318 13:50:03.471229 1157887 provision.go:84] configureAuth start
	I0318 13:50:03.471247 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetMachineName
	I0318 13:50:03.471576 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetIP
	I0318 13:50:03.474528 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.474918 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:03.474957 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.475210 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:03.477624 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.477910 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:03.477936 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.478118 1157887 provision.go:143] copyHostCerts
	I0318 13:50:03.478196 1157887 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem, removing ...
	I0318 13:50:03.478213 1157887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 13:50:03.478281 1157887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem (1078 bytes)
	I0318 13:50:03.478424 1157887 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem, removing ...
	I0318 13:50:03.478437 1157887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 13:50:03.478466 1157887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem (1123 bytes)
	I0318 13:50:03.478537 1157887 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem, removing ...
	I0318 13:50:03.478548 1157887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 13:50:03.478571 1157887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem (1679 bytes)
	I0318 13:50:03.478640 1157887 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-569210 san=[127.0.0.1 192.168.61.3 default-k8s-diff-port-569210 localhost minikube]
	I0318 13:50:03.600956 1157887 provision.go:177] copyRemoteCerts
	I0318 13:50:03.601028 1157887 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 13:50:03.601058 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:03.603986 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.604437 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:03.604468 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.604659 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:03.604922 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.605086 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:03.605260 1157887 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa Username:docker}
	I0318 13:50:03.688256 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0318 13:50:03.716748 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 13:50:03.744848 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 13:50:03.771601 1157887 provision.go:87] duration metric: took 300.358039ms to configureAuth
	I0318 13:50:03.771631 1157887 buildroot.go:189] setting minikube options for container-runtime
	I0318 13:50:03.771893 1157887 config.go:182] Loaded profile config "default-k8s-diff-port-569210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:50:03.771992 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:03.774410 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.774725 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:03.774760 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.774926 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:03.775099 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.775292 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.775456 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:03.775642 1157887 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:03.775872 1157887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0318 13:50:03.775901 1157887 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 13:50:04.068202 1157887 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 13:50:04.068242 1157887 machine.go:97] duration metric: took 951.160051ms to provisionDockerMachine
	I0318 13:50:04.068259 1157887 start.go:293] postStartSetup for "default-k8s-diff-port-569210" (driver="kvm2")
	I0318 13:50:04.068277 1157887 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 13:50:04.068303 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:50:04.068677 1157887 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 13:50:04.068712 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:04.071619 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.071974 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:04.072002 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.072148 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:04.072354 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:04.072519 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:04.072639 1157887 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa Username:docker}
	I0318 13:50:04.157469 1157887 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 13:50:04.162629 1157887 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 13:50:04.162655 1157887 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/addons for local assets ...
	I0318 13:50:04.162719 1157887 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/files for local assets ...
	I0318 13:50:04.162810 1157887 filesync.go:149] local asset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> 11141362.pem in /etc/ssl/certs
	I0318 13:50:04.162911 1157887 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 13:50:04.173898 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:50:04.204771 1157887 start.go:296] duration metric: took 136.495479ms for postStartSetup
	I0318 13:50:04.204814 1157887 fix.go:56] duration metric: took 20.554947186s for fixHost
	I0318 13:50:04.204839 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:04.207619 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.207923 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:04.207951 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.208088 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:04.208296 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:04.208509 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:04.208657 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:04.208801 1157887 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:04.208975 1157887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0318 13:50:04.208988 1157887 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 13:50:04.309828 1157887 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710769804.283357411
	
	I0318 13:50:04.309861 1157887 fix.go:216] guest clock: 1710769804.283357411
	I0318 13:50:04.309871 1157887 fix.go:229] Guest: 2024-03-18 13:50:04.283357411 +0000 UTC Remote: 2024-03-18 13:50:04.204818975 +0000 UTC m=+262.583280441 (delta=78.538436ms)
	I0318 13:50:04.309898 1157887 fix.go:200] guest clock delta is within tolerance: 78.538436ms
	I0318 13:50:04.309904 1157887 start.go:83] releasing machines lock for "default-k8s-diff-port-569210", held for 20.660081187s
	I0318 13:50:04.309933 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:50:04.310247 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetIP
	I0318 13:50:04.313302 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.313747 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:04.313777 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.313956 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:50:04.314591 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:50:04.314792 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:50:04.314878 1157887 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 13:50:04.314934 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:04.315014 1157887 ssh_runner.go:195] Run: cat /version.json
	I0318 13:50:04.315059 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:04.318021 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.318056 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.318438 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:04.318474 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.318500 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:04.318518 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.318661 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:04.318763 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:04.318879 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:04.318962 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:04.319052 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:04.319110 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:04.319229 1157887 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa Username:docker}
	I0318 13:50:04.319286 1157887 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa Username:docker}
	I0318 13:50:04.426710 1157887 ssh_runner.go:195] Run: systemctl --version
	I0318 13:50:04.433482 1157887 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 13:50:04.590331 1157887 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 13:50:04.598896 1157887 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 13:50:04.598974 1157887 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 13:50:04.617060 1157887 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 13:50:04.617095 1157887 start.go:494] detecting cgroup driver to use...
	I0318 13:50:04.617190 1157887 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 13:50:04.633902 1157887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:50:04.648705 1157887 docker.go:217] disabling cri-docker service (if available) ...
	I0318 13:50:04.648759 1157887 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 13:50:04.665516 1157887 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 13:50:04.681326 1157887 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 13:50:04.798310 1157887 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 13:50:04.972066 1157887 docker.go:233] disabling docker service ...
	I0318 13:50:04.972133 1157887 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 13:50:04.995498 1157887 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 13:50:05.014901 1157887 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 13:50:05.158158 1157887 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 13:50:05.309791 1157887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 13:50:05.324965 1157887 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:50:05.346489 1157887 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 13:50:05.346595 1157887 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:50:05.358753 1157887 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 13:50:05.358823 1157887 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:50:05.374416 1157887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:50:05.394228 1157887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:50:05.406975 1157887 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 13:50:05.420201 1157887 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 13:50:05.432405 1157887 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 13:50:05.432479 1157887 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 13:50:05.449386 1157887 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 13:50:05.461081 1157887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:50:05.607102 1157887 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 13:50:05.776152 1157887 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 13:50:05.776267 1157887 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 13:50:05.782168 1157887 start.go:562] Will wait 60s for crictl version
	I0318 13:50:05.782247 1157887 ssh_runner.go:195] Run: which crictl
	I0318 13:50:05.787932 1157887 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 13:50:05.831304 1157887 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 13:50:05.831399 1157887 ssh_runner.go:195] Run: crio --version
	I0318 13:50:05.865410 1157887 ssh_runner.go:195] Run: crio --version
	I0318 13:50:05.908406 1157887 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 13:50:05.909651 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetIP
	I0318 13:50:05.912855 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:05.913213 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:05.913256 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:05.913470 1157887 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0318 13:50:05.918362 1157887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:50:05.933755 1157887 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-569210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-569210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.3 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 13:50:05.933926 1157887 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 13:50:05.934002 1157887 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:50:05.978920 1157887 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0318 13:50:05.978998 1157887 ssh_runner.go:195] Run: which lz4
	I0318 13:50:05.983751 1157887 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 13:50:05.988862 1157887 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 13:50:05.988895 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0318 13:50:03.363591 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:03.864049 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:04.363310 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:04.863306 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:05.363706 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:05.863618 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:06.364183 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:06.863776 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:07.363832 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:07.863261 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:04.337631 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .Start
	I0318 13:50:04.337838 1157263 main.go:141] libmachine: (embed-certs-173036) Ensuring networks are active...
	I0318 13:50:04.338615 1157263 main.go:141] libmachine: (embed-certs-173036) Ensuring network default is active
	I0318 13:50:04.338978 1157263 main.go:141] libmachine: (embed-certs-173036) Ensuring network mk-embed-certs-173036 is active
	I0318 13:50:04.339444 1157263 main.go:141] libmachine: (embed-certs-173036) Getting domain xml...
	I0318 13:50:04.340295 1157263 main.go:141] libmachine: (embed-certs-173036) Creating domain...
	I0318 13:50:05.616437 1157263 main.go:141] libmachine: (embed-certs-173036) Waiting to get IP...
	I0318 13:50:05.617646 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:05.618096 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:05.618168 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:05.618075 1158806 retry.go:31] will retry after 234.69885ms: waiting for machine to come up
	I0318 13:50:05.854749 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:05.855365 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:05.855401 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:05.855310 1158806 retry.go:31] will retry after 324.015594ms: waiting for machine to come up
	I0318 13:50:06.181178 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:06.182089 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:06.182123 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:06.182038 1158806 retry.go:31] will retry after 456.172304ms: waiting for machine to come up
	I0318 13:50:06.639827 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:06.640288 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:06.640349 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:06.640244 1158806 retry.go:31] will retry after 561.082549ms: waiting for machine to come up
	I0318 13:50:07.203208 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:07.203798 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:07.203825 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:07.203696 1158806 retry.go:31] will retry after 633.905437ms: waiting for machine to come up
	I0318 13:50:07.839205 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:07.839760 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:07.839792 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:07.839698 1158806 retry.go:31] will retry after 629.254629ms: waiting for machine to come up
	I0318 13:50:08.470625 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:08.471073 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:08.471105 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:08.471021 1158806 retry.go:31] will retry after 771.526268ms: waiting for machine to come up
	I0318 13:50:06.709604 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:09.208197 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:08.056220 1157887 crio.go:444] duration metric: took 2.072501191s to copy over tarball
	I0318 13:50:08.056361 1157887 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 13:50:10.763501 1157887 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.707101479s)
	I0318 13:50:10.763560 1157887 crio.go:451] duration metric: took 2.707303654s to extract the tarball
	I0318 13:50:10.763570 1157887 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 13:50:10.808643 1157887 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:50:10.860178 1157887 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 13:50:10.860218 1157887 cache_images.go:84] Images are preloaded, skipping loading
	I0318 13:50:10.860229 1157887 kubeadm.go:928] updating node { 192.168.61.3 8444 v1.28.4 crio true true} ...
	I0318 13:50:10.860381 1157887 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-569210 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-569210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 13:50:10.860455 1157887 ssh_runner.go:195] Run: crio config
	I0318 13:50:10.918077 1157887 cni.go:84] Creating CNI manager for ""
	I0318 13:50:10.918109 1157887 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:50:10.918124 1157887 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 13:50:10.918154 1157887 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.3 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-569210 NodeName:default-k8s-diff-port-569210 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 13:50:10.918362 1157887 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.3
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-569210"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 13:50:10.918457 1157887 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 13:50:10.930573 1157887 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 13:50:10.930639 1157887 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 13:50:10.941181 1157887 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I0318 13:50:10.960048 1157887 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 13:50:10.980367 1157887 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0318 13:50:11.001607 1157887 ssh_runner.go:195] Run: grep 192.168.61.3	control-plane.minikube.internal$ /etc/hosts
	I0318 13:50:11.006363 1157887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:50:11.020871 1157887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:50:11.164152 1157887 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:50:11.185025 1157887 certs.go:68] Setting up /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210 for IP: 192.168.61.3
	I0318 13:50:11.185060 1157887 certs.go:194] generating shared ca certs ...
	I0318 13:50:11.185096 1157887 certs.go:226] acquiring lock for ca certs: {Name:mka24dbce78b8549c3bcfe7842863cce2dcda207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:50:11.185277 1157887 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key
	I0318 13:50:11.185342 1157887 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key
	I0318 13:50:11.185356 1157887 certs.go:256] generating profile certs ...
	I0318 13:50:11.185464 1157887 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/client.key
	I0318 13:50:11.185541 1157887 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/apiserver.key.e15332a5
	I0318 13:50:11.185590 1157887 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/proxy-client.key
	I0318 13:50:11.185757 1157887 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem (1338 bytes)
	W0318 13:50:11.185799 1157887 certs.go:480] ignoring /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136_empty.pem, impossibly tiny 0 bytes
	I0318 13:50:11.185812 1157887 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 13:50:11.185841 1157887 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem (1078 bytes)
	I0318 13:50:11.185899 1157887 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem (1123 bytes)
	I0318 13:50:11.185945 1157887 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem (1679 bytes)
	I0318 13:50:11.185999 1157887 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:50:11.186853 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 13:50:11.221967 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 13:50:11.250180 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 13:50:11.287449 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 13:50:11.323521 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0318 13:50:11.360286 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 13:50:11.396947 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 13:50:11.426116 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 13:50:11.455183 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /usr/share/ca-certificates/11141362.pem (1708 bytes)
	I0318 13:50:11.483479 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 13:50:11.512975 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem --> /usr/share/ca-certificates/1114136.pem (1338 bytes)
	I0318 13:50:11.548393 1157887 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 13:50:11.569155 1157887 ssh_runner.go:195] Run: openssl version
	I0318 13:50:11.576084 1157887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1114136.pem && ln -fs /usr/share/ca-certificates/1114136.pem /etc/ssl/certs/1114136.pem"
	I0318 13:50:11.589110 1157887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1114136.pem
	I0318 13:50:11.594640 1157887 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:26 /usr/share/ca-certificates/1114136.pem
	I0318 13:50:11.594736 1157887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1114136.pem
	I0318 13:50:11.601473 1157887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1114136.pem /etc/ssl/certs/51391683.0"
	I0318 13:50:11.615874 1157887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11141362.pem && ln -fs /usr/share/ca-certificates/11141362.pem /etc/ssl/certs/11141362.pem"
	I0318 13:50:11.630380 1157887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11141362.pem
	I0318 13:50:11.635808 1157887 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:26 /usr/share/ca-certificates/11141362.pem
	I0318 13:50:11.635886 1157887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11141362.pem
	I0318 13:50:11.644465 1157887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11141362.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 13:50:11.661509 1157887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 13:50:08.364243 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:08.863539 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:09.364037 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:09.863621 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:10.363425 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:10.863422 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:11.363353 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:11.863485 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:12.363548 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:12.864070 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:09.243731 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:09.244146 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:09.244180 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:09.244104 1158806 retry.go:31] will retry after 1.160252016s: waiting for machine to come up
	I0318 13:50:10.405805 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:10.406270 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:10.406310 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:10.406201 1158806 retry.go:31] will retry after 1.625913099s: waiting for machine to come up
	I0318 13:50:12.033202 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:12.033674 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:12.033712 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:12.033589 1158806 retry.go:31] will retry after 1.835793865s: waiting for machine to come up
	I0318 13:50:11.211241 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:13.710211 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:11.675340 1157887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:50:11.938009 1157887 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:50:11.938089 1157887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:50:11.944766 1157887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
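	(The symlink names above, e.g. /etc/ssl/certs/b5213941.0 pointing at minikubeCA.pem, follow OpenSSL's subject-hash convention: the link is named after the output of `openssl x509 -hash -noout` plus a ".0" suffix, which is how the system trust store looks certificates up. A minimal Go sketch of that one step, shelling out to openssl the same way the logged commands do; the function name and error handling are illustrative, not minikube's actual code.)

```go
// Hypothetical sketch: install a CA certificate under /etc/ssl/certs using
// OpenSSL's subject-hash link name, mirroring the "openssl x509 -hash" and
// "ln -fs" commands in the log above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCACert(pemPath string) error {
	// "openssl x509 -hash -noout -in cert.pem" prints the subject hash,
	// e.g. "b5213941"; the trust store expects a symlink named "<hash>.0".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")

	// Replace any stale link, then point <hash>.0 at the certificate.
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```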
	I0318 13:50:11.957959 1157887 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:50:11.963524 1157887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 13:50:11.971678 1157887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 13:50:11.978601 1157887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 13:50:11.985403 1157887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 13:50:11.992159 1157887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 13:50:11.998620 1157887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
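	(The "-checkend 86400" runs above ask openssl whether each control-plane certificate expires within the next 86400 seconds, i.e. 24 hours; a non-zero exit would trigger regeneration. A rough native-Go equivalent, assuming a PEM-encoded certificate on disk; the path shown is taken from the log and the helper name is made up.)

```go
// Hypothetical equivalent of "openssl x509 -noout -checkend 86400": parse a
// certificate and report whether it expires within the given window.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresSoon(path string, within time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when "now + window" is past the certificate's NotAfter.
	return time.Now().Add(within).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresSoon("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
```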
	I0318 13:50:12.005209 1157887 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-569210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.28.4 ClusterName:default-k8s-diff-port-569210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.3 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2
6280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:50:12.005300 1157887 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 13:50:12.005350 1157887 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:50:12.074518 1157887 cri.go:89] found id: ""
	I0318 13:50:12.074603 1157887 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 13:50:12.099031 1157887 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 13:50:12.099062 1157887 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 13:50:12.099070 1157887 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 13:50:12.099147 1157887 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 13:50:12.111133 1157887 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:50:12.112779 1157887 kubeconfig.go:125] found "default-k8s-diff-port-569210" server: "https://192.168.61.3:8444"
	I0318 13:50:12.116521 1157887 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 13:50:12.134902 1157887 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.3
	I0318 13:50:12.134964 1157887 kubeadm.go:1154] stopping kube-system containers ...
	I0318 13:50:12.135005 1157887 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 13:50:12.135086 1157887 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:50:12.190100 1157887 cri.go:89] found id: ""
	I0318 13:50:12.190182 1157887 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 13:50:12.211556 1157887 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:50:12.223095 1157887 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:50:12.223120 1157887 kubeadm.go:156] found existing configuration files:
	
	I0318 13:50:12.223173 1157887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0318 13:50:12.235709 1157887 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:50:12.235780 1157887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:50:12.248896 1157887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0318 13:50:12.260212 1157887 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:50:12.260285 1157887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:50:12.271424 1157887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0318 13:50:12.283083 1157887 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:50:12.283143 1157887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:50:12.294877 1157887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0318 13:50:12.305629 1157887 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:50:12.305692 1157887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:50:12.317395 1157887 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:50:12.328943 1157887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:12.471901 1157887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:13.400723 1157887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:13.601149 1157887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:13.677768 1157887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
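	(The restart path above rebuilds the control plane piecewise with individual `kubeadm init phase` subcommands (certs, kubeconfig, kubelet-start, control-plane, etcd) rather than a full `kubeadm init`. A rough sketch of driving those phases from Go in the same order as the logged commands; the binary and config paths are copied from the log, everything else is illustrative rather than minikube's real runner.)

```go
// Hypothetical driver for the "kubeadm init phase ..." sequence seen in the
// log. Phases run in order and the first failure aborts the restart.
package main

import (
	"fmt"
	"os/exec"
)

func runKubeadmPhases() error {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, phase := range phases {
		args := append(phase, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("/var/lib/minikube/binaries/v1.28.4/kubeadm", args...)
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("kubeadm %v failed: %v\n%s", phase, err, out)
		}
	}
	return nil
}

func main() {
	if err := runKubeadmPhases(); err != nil {
		fmt.Println(err)
	}
}
```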
	I0318 13:50:13.796413 1157887 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:50:13.796558 1157887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:14.297639 1157887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:14.797236 1157887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:14.885767 1157887 api_server.go:72] duration metric: took 1.089353166s to wait for apiserver process to appear ...
	I0318 13:50:14.885801 1157887 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:50:14.885827 1157887 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0318 13:50:14.886464 1157887 api_server.go:269] stopped: https://192.168.61.3:8444/healthz: Get "https://192.168.61.3:8444/healthz": dial tcp 192.168.61.3:8444: connect: connection refused
	I0318 13:50:15.386913 1157887 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0318 13:50:13.364111 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:13.863871 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:14.363958 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:14.863570 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:15.364185 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:15.863974 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:16.364010 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:16.863484 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:17.363832 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:17.864149 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:13.871003 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:13.871443 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:13.871475 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:13.871398 1158806 retry.go:31] will retry after 2.53403994s: waiting for machine to come up
	I0318 13:50:16.407271 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:16.407728 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:16.407775 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:16.407708 1158806 retry.go:31] will retry after 2.371916928s: waiting for machine to come up
	I0318 13:50:18.781468 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:18.781866 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:18.781898 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:18.781809 1158806 retry.go:31] will retry after 3.250042198s: waiting for machine to come up
	I0318 13:50:17.204788 1157887 api_server.go:279] https://192.168.61.3:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 13:50:17.204828 1157887 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 13:50:17.204848 1157887 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0318 13:50:17.235957 1157887 api_server.go:279] https://192.168.61.3:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 13:50:17.236000 1157887 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 13:50:17.386349 1157887 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0318 13:50:17.393185 1157887 api_server.go:279] https://192.168.61.3:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:50:17.393220 1157887 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:50:17.886583 1157887 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0318 13:50:17.892087 1157887 api_server.go:279] https://192.168.61.3:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:50:17.892122 1157887 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:50:18.386820 1157887 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0318 13:50:18.406609 1157887 api_server.go:279] https://192.168.61.3:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:50:18.406658 1157887 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:50:18.886458 1157887 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0318 13:50:18.896097 1157887 api_server.go:279] https://192.168.61.3:8444/healthz returned 200:
	ok
	I0318 13:50:18.905565 1157887 api_server.go:141] control plane version: v1.28.4
	I0318 13:50:18.905603 1157887 api_server.go:131] duration metric: took 4.019792975s to wait for apiserver health ...
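	(The healthz wait above simply polls https://192.168.61.3:8444/healthz, treating 403 and 500 responses as "not ready yet" — the 500s show the rbac/bootstrap-roles post-start hook still pending — until a plain 200 "ok" comes back. A minimal sketch of such a polling loop; the endpoint and timeout are taken from the log, TLS verification is skipped purely for illustration, and this is not minikube's actual client.)

```go
// Minimal sketch: poll the apiserver's /healthz endpoint until it returns
// 200 or the deadline passes, retrying through early 403/500 responses.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: ok
			}
			// 403 (anonymous user) or 500 (post-start hooks pending) while the
			// apiserver finishes bootstrapping is expected; keep retrying.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.3:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```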
	I0318 13:50:18.905615 1157887 cni.go:84] Creating CNI manager for ""
	I0318 13:50:18.905624 1157887 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:50:18.907258 1157887 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 13:50:15.711910 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:18.209648 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:18.909133 1157887 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 13:50:18.944457 1157887 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 13:50:18.973831 1157887 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 13:50:18.984400 1157887 system_pods.go:59] 8 kube-system pods found
	I0318 13:50:18.984436 1157887 system_pods.go:61] "coredns-5dd5756b68-hwsz5" [0a91f20c-3d3b-415c-b709-7898c606d830] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 13:50:18.984447 1157887 system_pods.go:61] "etcd-default-k8s-diff-port-569210" [64925324-9666-49ab-b849-ad9b7ce54891] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 13:50:18.984456 1157887 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-569210" [8409a63f-fbac-4bf9-b54b-5ac267a58206] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 13:50:18.984465 1157887 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-569210" [a2d7b983-c4aa-4c32-9391-babe90b0f102] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 13:50:18.984470 1157887 system_pods.go:61] "kube-proxy-v59ks" [39a4e73c-319d-4093-8781-ca7a1a48e005] Running
	I0318 13:50:18.984477 1157887 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-569210" [f24baa89-e33d-42ca-8f83-17c76a4cedcb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 13:50:18.984488 1157887 system_pods.go:61] "metrics-server-57f55c9bc5-2sb4m" [f3e533a7-9666-4b79-b9a9-26222422f242] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:50:18.984496 1157887 system_pods.go:61] "storage-provisioner" [864d0bb2-cbca-41ae-b9ec-89aced62dd08] Running
	I0318 13:50:18.984505 1157887 system_pods.go:74] duration metric: took 10.646849ms to wait for pod list to return data ...
	I0318 13:50:18.984519 1157887 node_conditions.go:102] verifying NodePressure condition ...
	I0318 13:50:18.989173 1157887 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:50:18.989201 1157887 node_conditions.go:123] node cpu capacity is 2
	I0318 13:50:18.989213 1157887 node_conditions.go:105] duration metric: took 4.685756ms to run NodePressure ...
	I0318 13:50:18.989231 1157887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:19.229166 1157887 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 13:50:19.237757 1157887 kubeadm.go:733] kubelet initialised
	I0318 13:50:19.237787 1157887 kubeadm.go:734] duration metric: took 8.591388ms waiting for restarted kubelet to initialise ...
	I0318 13:50:19.237797 1157887 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:50:19.243530 1157887 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-hwsz5" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:19.253925 1157887 pod_ready.go:97] node "default-k8s-diff-port-569210" hosting pod "coredns-5dd5756b68-hwsz5" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-569210" has status "Ready":"False"
	I0318 13:50:19.253957 1157887 pod_ready.go:81] duration metric: took 10.403116ms for pod "coredns-5dd5756b68-hwsz5" in "kube-system" namespace to be "Ready" ...
	E0318 13:50:19.253969 1157887 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-569210" hosting pod "coredns-5dd5756b68-hwsz5" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-569210" has status "Ready":"False"
	I0318 13:50:19.253978 1157887 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:19.265167 1157887 pod_ready.go:97] node "default-k8s-diff-port-569210" hosting pod "etcd-default-k8s-diff-port-569210" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-569210" has status "Ready":"False"
	I0318 13:50:19.265189 1157887 pod_ready.go:81] duration metric: took 11.202545ms for pod "etcd-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	E0318 13:50:19.265200 1157887 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-569210" hosting pod "etcd-default-k8s-diff-port-569210" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-569210" has status "Ready":"False"
	I0318 13:50:19.265206 1157887 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:19.273558 1157887 pod_ready.go:97] node "default-k8s-diff-port-569210" hosting pod "kube-apiserver-default-k8s-diff-port-569210" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-569210" has status "Ready":"False"
	I0318 13:50:19.273589 1157887 pod_ready.go:81] duration metric: took 8.37478ms for pod "kube-apiserver-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	E0318 13:50:19.273603 1157887 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-569210" hosting pod "kube-apiserver-default-k8s-diff-port-569210" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-569210" has status "Ready":"False"
	I0318 13:50:19.273615 1157887 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:21.280970 1157887 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"False"
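	(The pod_ready lines above repeatedly fetch each system-critical pod and check its PodReady condition, skipping pods whose node is not yet "Ready". A minimal client-go sketch of that wait, assuming a kubeconfig on disk; the kubeconfig path and pod name are placeholders taken from the log for illustration, not values from minikube's code.)

```go
// Minimal sketch: poll a kube-system pod until its PodReady condition is
// True or the timeout elapses.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForPodReady(cs *kubernetes.Clientset, namespace, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second) // the log shows checks every couple of seconds
	}
	return fmt.Errorf("pod %s/%s not Ready after %s", namespace, name, timeout)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	err = waitForPodReady(cs, "kube-system", "kube-controller-manager-default-k8s-diff-port-569210", 4*time.Minute)
	if err != nil {
		fmt.Println(err)
	}
}
```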
	I0318 13:50:18.363366 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:18.863782 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:19.363987 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:19.863437 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:20.364050 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:20.863961 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:21.364126 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:21.863264 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:22.363519 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:22.863814 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:22.033540 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:22.034056 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:22.034084 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:22.034001 1158806 retry.go:31] will retry after 5.297432528s: waiting for machine to come up
	I0318 13:50:20.708189 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:22.708573 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:24.708632 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:23.281625 1157887 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:25.780754 1157887 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:23.364019 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:23.864134 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:24.363510 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:24.863263 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:25.364027 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:25.863203 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:26.364219 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:26.863262 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:27.363889 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:27.864113 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:27.335390 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.335875 1157263 main.go:141] libmachine: (embed-certs-173036) Found IP for machine: 192.168.50.191
	I0318 13:50:27.335908 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has current primary IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.335918 1157263 main.go:141] libmachine: (embed-certs-173036) Reserving static IP address...
	I0318 13:50:27.336311 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "embed-certs-173036", mac: "52:54:00:e1:4f:b1", ip: "192.168.50.191"} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:27.336360 1157263 main.go:141] libmachine: (embed-certs-173036) Reserved static IP address: 192.168.50.191
	I0318 13:50:27.336380 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | skip adding static IP to network mk-embed-certs-173036 - found existing host DHCP lease matching {name: "embed-certs-173036", mac: "52:54:00:e1:4f:b1", ip: "192.168.50.191"}
	I0318 13:50:27.336394 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | Getting to WaitForSSH function...
	I0318 13:50:27.336406 1157263 main.go:141] libmachine: (embed-certs-173036) Waiting for SSH to be available...
	I0318 13:50:27.338627 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.338948 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:27.338983 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.339087 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | Using SSH client type: external
	I0318 13:50:27.339177 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | Using SSH private key: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa (-rw-------)
	I0318 13:50:27.339212 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.191 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 13:50:27.339227 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | About to run SSH command:
	I0318 13:50:27.339244 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | exit 0
	I0318 13:50:27.468468 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | SSH cmd err, output: <nil>: 
	I0318 13:50:27.468936 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetConfigRaw
	I0318 13:50:27.469699 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetIP
	I0318 13:50:27.472098 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.472422 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:27.472446 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.472714 1157263 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036/config.json ...
	I0318 13:50:27.472955 1157263 machine.go:94] provisionDockerMachine start ...
	I0318 13:50:27.472982 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:50:27.473196 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:27.475516 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.475808 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:27.475831 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.476041 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:27.476252 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:27.476414 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:27.476537 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:27.476719 1157263 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:27.476899 1157263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.191 22 <nil> <nil>}
	I0318 13:50:27.476909 1157263 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 13:50:27.589501 1157263 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 13:50:27.589532 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetMachineName
	I0318 13:50:27.589828 1157263 buildroot.go:166] provisioning hostname "embed-certs-173036"
	I0318 13:50:27.589862 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetMachineName
	I0318 13:50:27.590068 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:27.592650 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.593005 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:27.593035 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.593186 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:27.593375 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:27.593546 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:27.593713 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:27.593883 1157263 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:27.594058 1157263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.191 22 <nil> <nil>}
	I0318 13:50:27.594073 1157263 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-173036 && echo "embed-certs-173036" | sudo tee /etc/hostname
	I0318 13:50:27.730406 1157263 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-173036
	
	I0318 13:50:27.730437 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:27.733420 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.733857 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:27.733890 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.734058 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:27.734271 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:27.734475 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:27.734609 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:27.734764 1157263 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:27.734943 1157263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.191 22 <nil> <nil>}
	I0318 13:50:27.734960 1157263 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-173036' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-173036/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-173036' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 13:50:27.860625 1157263 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:50:27.860679 1157263 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18429-1106816/.minikube CaCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18429-1106816/.minikube}
	I0318 13:50:27.860777 1157263 buildroot.go:174] setting up certificates
	I0318 13:50:27.860790 1157263 provision.go:84] configureAuth start
	I0318 13:50:27.860810 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetMachineName
	I0318 13:50:27.861112 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetIP
	I0318 13:50:27.864215 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.864667 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:27.864703 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.864956 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:27.867381 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.867690 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:27.867730 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.867893 1157263 provision.go:143] copyHostCerts
	I0318 13:50:27.867963 1157263 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem, removing ...
	I0318 13:50:27.867977 1157263 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 13:50:27.868048 1157263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem (1078 bytes)
	I0318 13:50:27.868183 1157263 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem, removing ...
	I0318 13:50:27.868198 1157263 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 13:50:27.868231 1157263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem (1123 bytes)
	I0318 13:50:27.868307 1157263 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem, removing ...
	I0318 13:50:27.868318 1157263 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 13:50:27.868372 1157263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem (1679 bytes)
	I0318 13:50:27.868451 1157263 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem org=jenkins.embed-certs-173036 san=[127.0.0.1 192.168.50.191 embed-certs-173036 localhost minikube]
	I0318 13:50:28.001671 1157263 provision.go:177] copyRemoteCerts
	I0318 13:50:28.001742 1157263 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 13:50:28.001773 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:28.004389 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.004746 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:28.004777 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.005021 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:28.005214 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:28.005393 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:28.005558 1157263 sshutil.go:53] new ssh client: &{IP:192.168.50.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa Username:docker}
	I0318 13:50:28.095871 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0318 13:50:28.127356 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 13:50:28.157301 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 13:50:28.186185 1157263 provision.go:87] duration metric: took 325.374328ms to configureAuth
	I0318 13:50:28.186217 1157263 buildroot.go:189] setting minikube options for container-runtime
	I0318 13:50:28.186424 1157263 config.go:182] Loaded profile config "embed-certs-173036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:50:28.186529 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:28.189135 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.189532 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:28.189564 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.189719 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:28.189933 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:28.190127 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:28.190335 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:28.190492 1157263 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:28.190654 1157263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.191 22 <nil> <nil>}
	I0318 13:50:28.190668 1157263 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 13:50:28.473836 1157263 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 13:50:28.473875 1157263 machine.go:97] duration metric: took 1.000902962s to provisionDockerMachine
	I0318 13:50:28.473887 1157263 start.go:293] postStartSetup for "embed-certs-173036" (driver="kvm2")
	I0318 13:50:28.473898 1157263 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 13:50:28.473914 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:50:28.474270 1157263 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 13:50:28.474307 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:28.477165 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.477571 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:28.477619 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.477756 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:28.477966 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:28.478135 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:28.478296 1157263 sshutil.go:53] new ssh client: &{IP:192.168.50.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa Username:docker}
	I0318 13:50:28.568988 1157263 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 13:50:28.573759 1157263 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 13:50:28.573782 1157263 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/addons for local assets ...
	I0318 13:50:28.573839 1157263 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/files for local assets ...
	I0318 13:50:28.573909 1157263 filesync.go:149] local asset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> 11141362.pem in /etc/ssl/certs
	I0318 13:50:28.573989 1157263 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 13:50:28.584049 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:50:28.610999 1157263 start.go:296] duration metric: took 137.09711ms for postStartSetup
	I0318 13:50:28.611043 1157263 fix.go:56] duration metric: took 24.300980779s for fixHost
	I0318 13:50:28.611066 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:28.614123 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.614582 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:28.614628 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.614795 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:28.614999 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:28.615124 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:28.615255 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:28.615427 1157263 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:28.615617 1157263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.191 22 <nil> <nil>}
	I0318 13:50:28.615631 1157263 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 13:50:28.729856 1157263 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710769828.678644307
	
	I0318 13:50:28.729894 1157263 fix.go:216] guest clock: 1710769828.678644307
	I0318 13:50:28.729913 1157263 fix.go:229] Guest: 2024-03-18 13:50:28.678644307 +0000 UTC Remote: 2024-03-18 13:50:28.611048079 +0000 UTC m=+364.845703282 (delta=67.596228ms)
	I0318 13:50:28.729932 1157263 fix.go:200] guest clock delta is within tolerance: 67.596228ms
	I0318 13:50:28.729937 1157263 start.go:83] releasing machines lock for "embed-certs-173036", held for 24.419922158s
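The guest-clock check above runs a date command on the VM (the intended format string appears to be `date +%s.%N`; the `%!s(MISSING)` text is a formatting artifact produced by the logger itself) and compares the result against the host clock. A small Go sketch, using the values shown in the log, reproduces the reported 67.596228ms delta; the one-second tolerance is an assumption, the real value lives in minikube's fix.go.

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Guest clock as reported by the VM (seconds.nanoseconds), taken from the log above.
	guestOut := "1710769828.678644307"
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)

	// Host-side timestamp recorded just before the SSH command, also from the log.
	remote := time.Date(2024, 3, 18, 13, 50, 28, 611048079, time.UTC)

	delta := guest.Sub(remote)
	tolerance := time.Second // assumed tolerance for the sketch
	fmt.Printf("delta=%v within tolerance=%v: %v\n",
		delta, tolerance, math.Abs(float64(delta)) <= float64(tolerance))
}
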
	I0318 13:50:28.729958 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:50:28.730241 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetIP
	I0318 13:50:28.732831 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.733196 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:28.733249 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.733406 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:50:28.733875 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:50:28.734066 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:50:28.734172 1157263 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 13:50:28.734248 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:28.734330 1157263 ssh_runner.go:195] Run: cat /version.json
	I0318 13:50:28.734376 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:28.737014 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.737200 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.737444 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:28.737470 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.737611 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:28.737694 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:28.737721 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.737918 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:28.737926 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:28.738117 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:28.738195 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:28.738292 1157263 sshutil.go:53] new ssh client: &{IP:192.168.50.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa Username:docker}
	I0318 13:50:28.738357 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:28.738466 1157263 sshutil.go:53] new ssh client: &{IP:192.168.50.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa Username:docker}
	I0318 13:50:26.708824 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:29.209974 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:28.818695 1157263 ssh_runner.go:195] Run: systemctl --version
	I0318 13:50:28.844173 1157263 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 13:50:28.995017 1157263 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 13:50:29.002150 1157263 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 13:50:29.002251 1157263 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 13:50:29.021165 1157263 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 13:50:29.021200 1157263 start.go:494] detecting cgroup driver to use...
	I0318 13:50:29.021286 1157263 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 13:50:29.039060 1157263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:50:29.053451 1157263 docker.go:217] disabling cri-docker service (if available) ...
	I0318 13:50:29.053521 1157263 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 13:50:29.069721 1157263 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 13:50:29.085285 1157263 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 13:50:29.201273 1157263 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 13:50:29.356314 1157263 docker.go:233] disabling docker service ...
	I0318 13:50:29.356406 1157263 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 13:50:29.374159 1157263 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 13:50:29.390280 1157263 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 13:50:29.542126 1157263 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 13:50:29.692068 1157263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 13:50:29.707760 1157263 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:50:29.735684 1157263 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 13:50:29.735753 1157263 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:50:29.751291 1157263 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 13:50:29.751365 1157263 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:50:29.763159 1157263 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:50:29.774837 1157263 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:50:29.787142 1157263 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 13:50:29.799773 1157263 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 13:50:29.810620 1157263 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 13:50:29.810691 1157263 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 13:50:29.826816 1157263 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 13:50:29.842059 1157263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:50:29.985531 1157263 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 13:50:30.147122 1157263 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 13:50:30.147191 1157263 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 13:50:30.152406 1157263 start.go:562] Will wait 60s for crictl version
	I0318 13:50:30.152468 1157263 ssh_runner.go:195] Run: which crictl
	I0318 13:50:30.157019 1157263 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 13:50:30.199810 1157263 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 13:50:30.199889 1157263 ssh_runner.go:195] Run: crio --version
	I0318 13:50:30.232028 1157263 ssh_runner.go:195] Run: crio --version
	I0318 13:50:30.270484 1157263 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 13:50:27.781584 1157887 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:29.795969 1157887 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:31.282868 1157887 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:31.282899 1157887 pod_ready.go:81] duration metric: took 12.009270978s for pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:31.282910 1157887 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-v59ks" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:31.290886 1157887 pod_ready.go:92] pod "kube-proxy-v59ks" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:31.290917 1157887 pod_ready.go:81] duration metric: took 7.99936ms for pod "kube-proxy-v59ks" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:31.290931 1157887 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:31.300197 1157887 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:31.300235 1157887 pod_ready.go:81] duration metric: took 9.294232ms for pod "kube-scheduler-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:31.300254 1157887 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:28.364069 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:28.863405 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:29.363996 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:29.863574 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:30.363749 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:30.863564 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:31.363250 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:31.863320 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:32.363894 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:32.864166 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:30.271939 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetIP
	I0318 13:50:30.275084 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:30.275682 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:30.275728 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:30.276045 1157263 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0318 13:50:30.282421 1157263 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:50:30.299013 1157263 kubeadm.go:877] updating cluster {Name:embed-certs-173036 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.28.4 ClusterName:embed-certs-173036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.191 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 13:50:30.299280 1157263 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 13:50:30.299364 1157263 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:50:30.349617 1157263 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0318 13:50:30.349720 1157263 ssh_runner.go:195] Run: which lz4
	I0318 13:50:30.354659 1157263 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0318 13:50:30.359861 1157263 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 13:50:30.359903 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0318 13:50:32.362707 1157263 crio.go:444] duration metric: took 2.008087158s to copy over tarball
	I0318 13:50:32.362796 1157263 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 13:50:31.210766 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:33.709661 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:33.308081 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:35.309291 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:33.363425 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:33.864021 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:34.363963 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:34.864011 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:35.364122 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:35.863559 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:36.364154 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:36.863814 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:37.364232 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:37.863934 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:35.265803 1157263 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.902966349s)
	I0318 13:50:35.265827 1157263 crio.go:451] duration metric: took 2.903086385s to extract the tarball
	I0318 13:50:35.265835 1157263 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 13:50:35.313875 1157263 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:50:35.378361 1157263 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 13:50:35.378392 1157263 cache_images.go:84] Images are preloaded, skipping loading
	I0318 13:50:35.378408 1157263 kubeadm.go:928] updating node { 192.168.50.191 8443 v1.28.4 crio true true} ...
	I0318 13:50:35.378551 1157263 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-173036 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.191
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-173036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 13:50:35.378648 1157263 ssh_runner.go:195] Run: crio config
	I0318 13:50:35.443472 1157263 cni.go:84] Creating CNI manager for ""
	I0318 13:50:35.443501 1157263 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:50:35.443520 1157263 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 13:50:35.443551 1157263 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.191 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-173036 NodeName:embed-certs-173036 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.191"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.191 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 13:50:35.443730 1157263 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.191
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-173036"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.191
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.191"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 13:50:35.443809 1157263 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 13:50:35.455284 1157263 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 13:50:35.455352 1157263 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 13:50:35.465886 1157263 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0318 13:50:35.487345 1157263 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 13:50:35.507361 1157263 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0318 13:50:35.528055 1157263 ssh_runner.go:195] Run: grep 192.168.50.191	control-plane.minikube.internal$ /etc/hosts
	I0318 13:50:35.533287 1157263 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.191	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:50:35.548295 1157263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:50:35.684165 1157263 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:50:35.703884 1157263 certs.go:68] Setting up /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036 for IP: 192.168.50.191
	I0318 13:50:35.703910 1157263 certs.go:194] generating shared ca certs ...
	I0318 13:50:35.703927 1157263 certs.go:226] acquiring lock for ca certs: {Name:mka24dbce78b8549c3bcfe7842863cce2dcda207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:50:35.704117 1157263 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key
	I0318 13:50:35.704186 1157263 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key
	I0318 13:50:35.704200 1157263 certs.go:256] generating profile certs ...
	I0318 13:50:35.704292 1157263 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036/client.key
	I0318 13:50:35.704406 1157263 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036/apiserver.key.527b6b30
	I0318 13:50:35.704472 1157263 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036/proxy-client.key
	I0318 13:50:35.704637 1157263 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem (1338 bytes)
	W0318 13:50:35.704680 1157263 certs.go:480] ignoring /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136_empty.pem, impossibly tiny 0 bytes
	I0318 13:50:35.704694 1157263 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 13:50:35.704729 1157263 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem (1078 bytes)
	I0318 13:50:35.704763 1157263 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem (1123 bytes)
	I0318 13:50:35.704796 1157263 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem (1679 bytes)
	I0318 13:50:35.704857 1157263 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:50:35.705836 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 13:50:35.768912 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 13:50:35.830564 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 13:50:35.877813 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 13:50:35.916756 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0318 13:50:35.948397 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 13:50:35.980450 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 13:50:36.009626 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 13:50:36.040155 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 13:50:36.068885 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem --> /usr/share/ca-certificates/1114136.pem (1338 bytes)
	I0318 13:50:36.098638 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /usr/share/ca-certificates/11141362.pem (1708 bytes)
	I0318 13:50:36.128423 1157263 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 13:50:36.149584 1157263 ssh_runner.go:195] Run: openssl version
	I0318 13:50:36.156347 1157263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 13:50:36.169729 1157263 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:50:36.175367 1157263 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:50:36.175438 1157263 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:50:36.181995 1157263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 13:50:36.193987 1157263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1114136.pem && ln -fs /usr/share/ca-certificates/1114136.pem /etc/ssl/certs/1114136.pem"
	I0318 13:50:36.206444 1157263 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1114136.pem
	I0318 13:50:36.212355 1157263 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:26 /usr/share/ca-certificates/1114136.pem
	I0318 13:50:36.212442 1157263 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1114136.pem
	I0318 13:50:36.219042 1157263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1114136.pem /etc/ssl/certs/51391683.0"
	I0318 13:50:36.231882 1157263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11141362.pem && ln -fs /usr/share/ca-certificates/11141362.pem /etc/ssl/certs/11141362.pem"
	I0318 13:50:36.244590 1157263 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11141362.pem
	I0318 13:50:36.250443 1157263 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:26 /usr/share/ca-certificates/11141362.pem
	I0318 13:50:36.250511 1157263 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11141362.pem
	I0318 13:50:36.257713 1157263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11141362.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 13:50:36.271026 1157263 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:50:36.276902 1157263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 13:50:36.285465 1157263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 13:50:36.294274 1157263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 13:50:36.302415 1157263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 13:50:36.310867 1157263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 13:50:36.318931 1157263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
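The series of `openssl x509 ... -checkend 86400` runs above asks whether each control-plane certificate expires within the next 24 hours. The same check can be sketched in Go with crypto/x509; the file path below is simply one of the certificates named in the log, and reading it requires the same root access the log's sudo calls imply.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Load one of the PEM certificates checked in the log above.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Equivalent of `-checkend 86400`: does the cert expire within 24 hours?
	deadline := time.Now().Add(24 * time.Hour)
	if cert.NotAfter.Before(deadline) {
		fmt.Printf("certificate expires within 24h (NotAfter=%s)\n", cert.NotAfter)
	} else {
		fmt.Printf("certificate valid past 24h (NotAfter=%s)\n", cert.NotAfter)
	}
}
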
	I0318 13:50:36.327627 1157263 kubeadm.go:391] StartCluster: {Name:embed-certs-173036 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28
.4 ClusterName:embed-certs-173036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.191 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:50:36.327781 1157263 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 13:50:36.327843 1157263 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:50:36.376644 1157263 cri.go:89] found id: ""
	I0318 13:50:36.376741 1157263 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 13:50:36.389506 1157263 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 13:50:36.389528 1157263 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 13:50:36.389533 1157263 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 13:50:36.389640 1157263 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 13:50:36.401386 1157263 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:50:36.402631 1157263 kubeconfig.go:125] found "embed-certs-173036" server: "https://192.168.50.191:8443"
	I0318 13:50:36.404833 1157263 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 13:50:36.416975 1157263 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.191
	I0318 13:50:36.417026 1157263 kubeadm.go:1154] stopping kube-system containers ...
	I0318 13:50:36.417041 1157263 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 13:50:36.417106 1157263 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:50:36.458072 1157263 cri.go:89] found id: ""
	I0318 13:50:36.458162 1157263 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 13:50:36.476557 1157263 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:50:36.487765 1157263 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:50:36.487791 1157263 kubeadm.go:156] found existing configuration files:
	
	I0318 13:50:36.487857 1157263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:50:36.498903 1157263 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:50:36.498982 1157263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:50:36.510205 1157263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:50:36.520423 1157263 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:50:36.520476 1157263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:50:36.531864 1157263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:50:36.542058 1157263 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:50:36.542131 1157263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:50:36.552807 1157263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:50:36.562840 1157263 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:50:36.562915 1157263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:50:36.573581 1157263 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:50:36.583760 1157263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:36.719884 1157263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:37.681007 1157263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:37.914386 1157263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:37.993967 1157263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:38.101144 1157263 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:50:38.101261 1157263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:38.602138 1157263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:35.711725 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:38.207993 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:37.807508 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:39.809153 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:38.363994 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:38.863278 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:39.363665 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:39.863948 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:40.364081 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:40.864124 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:41.363964 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:41.863593 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:42.363750 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:42.864002 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:39.102040 1157263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:39.212769 1157263 api_server.go:72] duration metric: took 1.111626123s to wait for apiserver process to appear ...
	I0318 13:50:39.212807 1157263 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:50:39.212840 1157263 api_server.go:253] Checking apiserver healthz at https://192.168.50.191:8443/healthz ...
	I0318 13:50:39.213446 1157263 api_server.go:269] stopped: https://192.168.50.191:8443/healthz: Get "https://192.168.50.191:8443/healthz": dial tcp 192.168.50.191:8443: connect: connection refused
	I0318 13:50:39.713482 1157263 api_server.go:253] Checking apiserver healthz at https://192.168.50.191:8443/healthz ...
	I0318 13:50:42.646306 1157263 api_server.go:279] https://192.168.50.191:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 13:50:42.646352 1157263 api_server.go:103] status: https://192.168.50.191:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 13:50:42.646370 1157263 api_server.go:253] Checking apiserver healthz at https://192.168.50.191:8443/healthz ...
	I0318 13:50:42.691920 1157263 api_server.go:279] https://192.168.50.191:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 13:50:42.691953 1157263 api_server.go:103] status: https://192.168.50.191:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 13:50:42.713082 1157263 api_server.go:253] Checking apiserver healthz at https://192.168.50.191:8443/healthz ...
	I0318 13:50:42.770065 1157263 api_server.go:279] https://192.168.50.191:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:50:42.770101 1157263 api_server.go:103] status: https://192.168.50.191:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:50:43.213524 1157263 api_server.go:253] Checking apiserver healthz at https://192.168.50.191:8443/healthz ...
	I0318 13:50:43.224669 1157263 api_server.go:279] https://192.168.50.191:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:50:43.224710 1157263 api_server.go:103] status: https://192.168.50.191:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:50:43.712987 1157263 api_server.go:253] Checking apiserver healthz at https://192.168.50.191:8443/healthz ...
	I0318 13:50:43.718490 1157263 api_server.go:279] https://192.168.50.191:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:50:43.718533 1157263 api_server.go:103] status: https://192.168.50.191:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:50:44.213026 1157263 api_server.go:253] Checking apiserver healthz at https://192.168.50.191:8443/healthz ...
	I0318 13:50:44.217876 1157263 api_server.go:279] https://192.168.50.191:8443/healthz returned 200:
	ok
	I0318 13:50:44.225562 1157263 api_server.go:141] control plane version: v1.28.4
	I0318 13:50:44.225588 1157263 api_server.go:131] duration metric: took 5.012774227s to wait for apiserver health ...
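The 403, then 500, then 200 progression above is the usual shape of an apiserver restart: anonymous requests to /healthz are rejected until the RBAC bootstrap roles exist, the endpoint then returns 500 while the remaining poststarthooks (the [-] entries) finish, and finally it answers with a plain "ok". A minimal way to watch the same endpoint by hand, assuming curl is available and skipping TLS verification with -k (the 2s interval is an arbitrary choice, not taken from the log):

  # poll until /healthz returns HTTP 200
  until [ "$(curl -ks -o /dev/null -w '%{http_code}' https://192.168.50.191:8443/healthz)" = "200" ]; do
    sleep 2
  done
  curl -ks https://192.168.50.191:8443/healthz   # prints "ok" once healthy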
	I0318 13:50:44.225610 1157263 cni.go:84] Creating CNI manager for ""
	I0318 13:50:44.225618 1157263 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:50:44.227565 1157263 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 13:50:40.210029 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:42.210435 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:44.710674 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:41.811414 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:43.818645 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:46.308757 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:43.364189 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:43.863868 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:44.363454 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:44.863940 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:45.363913 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:45.863288 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:46.363884 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:46.863361 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:47.363383 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:47.864064 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:44.229055 1157263 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 13:50:44.260389 1157263 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
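The conflist that was just copied (457 bytes) is not reproduced in the log, so the snippet below is only an illustrative sketch of what a single-node bridge configuration of this kind typically looks like; the plugin fields, subnet, and CNI version are assumptions, not values read from this run:

  # illustrative only: a minimal bridge conflist (values assumed, not from this log)
  sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
  {
    "cniVersion": "0.3.1",
    "name": "bridge",
    "plugins": [
      {
        "type": "bridge",
        "bridge": "bridge",
        "isDefaultGateway": true,
        "ipMasq": true,
        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
      }
    ]
  }
  EOF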
	I0318 13:50:44.310001 1157263 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 13:50:44.327281 1157263 system_pods.go:59] 8 kube-system pods found
	I0318 13:50:44.327330 1157263 system_pods.go:61] "coredns-5dd5756b68-zsfvm" [1404c3fe-6538-4aaf-80f5-599275240731] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 13:50:44.327342 1157263 system_pods.go:61] "etcd-embed-certs-173036" [254a577c-bd3b-4645-9c92-1479b0c6d0c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 13:50:44.327354 1157263 system_pods.go:61] "kube-apiserver-embed-certs-173036" [5a738280-05ba-413e-a288-4c4d07ddbd7d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 13:50:44.327362 1157263 system_pods.go:61] "kube-controller-manager-embed-certs-173036" [f48cfb7f-1efe-4941-b328-2358c7a5cced] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 13:50:44.327369 1157263 system_pods.go:61] "kube-proxy-xqf68" [969de4e5-fc60-4d46-b336-49f22a9b6c38] Running
	I0318 13:50:44.327376 1157263 system_pods.go:61] "kube-scheduler-embed-certs-173036" [e0579c16-de3e-4915-9ed2-f69b53f6f884] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 13:50:44.327385 1157263 system_pods.go:61] "metrics-server-57f55c9bc5-5cv2z" [85649bfb-f91f-4bfe-9356-d540ac3d6a68] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:50:44.327392 1157263 system_pods.go:61] "storage-provisioner" [0c1ec131-0f6c-4e01-aaec-5011f1a4fe75] Running
	I0318 13:50:44.327410 1157263 system_pods.go:74] duration metric: took 17.376754ms to wait for pod list to return data ...
	I0318 13:50:44.327423 1157263 node_conditions.go:102] verifying NodePressure condition ...
	I0318 13:50:44.332965 1157263 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:50:44.332997 1157263 node_conditions.go:123] node cpu capacity is 2
	I0318 13:50:44.333008 1157263 node_conditions.go:105] duration metric: took 5.580934ms to run NodePressure ...
	I0318 13:50:44.333027 1157263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:44.573923 1157263 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 13:50:44.578504 1157263 kubeadm.go:733] kubelet initialised
	I0318 13:50:44.578526 1157263 kubeadm.go:734] duration metric: took 4.577181ms waiting for restarted kubelet to initialise ...
	I0318 13:50:44.578534 1157263 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
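"Ready" here is the pod condition of type Ready. A quick way to spot-check it for any of the pods waited on below, assuming the local kubeconfig points at this cluster (the pod name is just one example from the list above):

  kubectl -n kube-system get pod coredns-5dd5756b68-zsfvm \
    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
  # prints "True" once the pod is Ready; while the node itself is NotReady,
  # the per-pod waits below are skipped with the messages shown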
	I0318 13:50:44.584361 1157263 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-zsfvm" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:44.591714 1157263 pod_ready.go:97] node "embed-certs-173036" hosting pod "coredns-5dd5756b68-zsfvm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-173036" has status "Ready":"False"
	I0318 13:50:44.591739 1157263 pod_ready.go:81] duration metric: took 7.35191ms for pod "coredns-5dd5756b68-zsfvm" in "kube-system" namespace to be "Ready" ...
	E0318 13:50:44.591746 1157263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-173036" hosting pod "coredns-5dd5756b68-zsfvm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-173036" has status "Ready":"False"
	I0318 13:50:44.591753 1157263 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:44.597618 1157263 pod_ready.go:97] node "embed-certs-173036" hosting pod "etcd-embed-certs-173036" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-173036" has status "Ready":"False"
	I0318 13:50:44.597641 1157263 pod_ready.go:81] duration metric: took 5.880276ms for pod "etcd-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	E0318 13:50:44.597649 1157263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-173036" hosting pod "etcd-embed-certs-173036" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-173036" has status "Ready":"False"
	I0318 13:50:44.597655 1157263 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:44.604124 1157263 pod_ready.go:97] node "embed-certs-173036" hosting pod "kube-apiserver-embed-certs-173036" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-173036" has status "Ready":"False"
	I0318 13:50:44.604148 1157263 pod_ready.go:81] duration metric: took 6.484251ms for pod "kube-apiserver-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	E0318 13:50:44.604157 1157263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-173036" hosting pod "kube-apiserver-embed-certs-173036" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-173036" has status "Ready":"False"
	I0318 13:50:44.604164 1157263 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:46.611326 1157263 pod_ready.go:102] pod "kube-controller-manager-embed-certs-173036" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:47.209538 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:49.708718 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:48.309157 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:50.808340 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:48.363218 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:48.864086 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:49.363457 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:49.863292 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:50.363308 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:50.863428 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:51.363583 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:51.863562 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:52.363995 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:52.863463 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:49.111834 1157263 pod_ready.go:102] pod "kube-controller-manager-embed-certs-173036" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:50.114329 1157263 pod_ready.go:92] pod "kube-controller-manager-embed-certs-173036" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:50.114356 1157263 pod_ready.go:81] duration metric: took 5.510175425s for pod "kube-controller-manager-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:50.114369 1157263 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xqf68" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:50.133169 1157263 pod_ready.go:92] pod "kube-proxy-xqf68" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:50.133196 1157263 pod_ready.go:81] duration metric: took 18.819059ms for pod "kube-proxy-xqf68" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:50.133208 1157263 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:52.144639 1157263 pod_ready.go:102] pod "kube-scheduler-embed-certs-173036" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:51.709823 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:54.207738 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:53.311033 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:55.311439 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:53.363919 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:53.863936 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:54.363671 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:54.863567 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:50:54.863709 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:50:54.911905 1157708 cri.go:89] found id: ""
	I0318 13:50:54.911942 1157708 logs.go:276] 0 containers: []
	W0318 13:50:54.911954 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:50:54.911962 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:50:54.912031 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:50:54.962141 1157708 cri.go:89] found id: ""
	I0318 13:50:54.962170 1157708 logs.go:276] 0 containers: []
	W0318 13:50:54.962182 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:50:54.962188 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:50:54.962269 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:50:55.001597 1157708 cri.go:89] found id: ""
	I0318 13:50:55.001639 1157708 logs.go:276] 0 containers: []
	W0318 13:50:55.001652 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:50:55.001660 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:50:55.001725 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:50:55.042660 1157708 cri.go:89] found id: ""
	I0318 13:50:55.042695 1157708 logs.go:276] 0 containers: []
	W0318 13:50:55.042708 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:50:55.042716 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:50:55.042775 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:50:55.082095 1157708 cri.go:89] found id: ""
	I0318 13:50:55.082128 1157708 logs.go:276] 0 containers: []
	W0318 13:50:55.082139 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:50:55.082146 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:50:55.082211 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:50:55.120938 1157708 cri.go:89] found id: ""
	I0318 13:50:55.120969 1157708 logs.go:276] 0 containers: []
	W0318 13:50:55.121000 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:50:55.121008 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:50:55.121081 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:50:55.159247 1157708 cri.go:89] found id: ""
	I0318 13:50:55.159280 1157708 logs.go:276] 0 containers: []
	W0318 13:50:55.159292 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:50:55.159300 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:50:55.159366 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:50:55.200130 1157708 cri.go:89] found id: ""
	I0318 13:50:55.200161 1157708 logs.go:276] 0 containers: []
	W0318 13:50:55.200170 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
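Each sweep above repeats the same check per component: list every container in any state whose name matches, and treat an empty ID list as "0 containers". The same check can be run by hand on the node, taking the apiserver as the example:

  # empty output means no kube-apiserver container exists in any state
  sudo crictl ps -a --quiet --name=kube-apiserver
  # human-readable view of everything the runtime knows about
  sudo crictl ps -a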
	I0318 13:50:55.200180 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:50:55.200193 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:50:55.254113 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:50:55.254154 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:50:55.268984 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:50:55.269027 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:50:55.402079 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
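The "connection refused" on localhost:8443 is consistent with the empty container sweeps above: kubectl cannot reach an apiserver that never came up. Two quick checks on the node that would confirm this, assuming ss and curl are available in the guest:

  sudo ss -ltnp | grep -w 8443 || echo "nothing listening on 8443"
  curl -ks -o /dev/null -w '%{http_code}\n' https://localhost:8443/healthz   # prints 000 when the connection is refused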
	I0318 13:50:55.402106 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:50:55.402123 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:50:55.468627 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:50:55.468674 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:50:54.143220 1157263 pod_ready.go:92] pod "kube-scheduler-embed-certs-173036" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:54.143247 1157263 pod_ready.go:81] duration metric: took 4.010031997s for pod "kube-scheduler-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:54.143258 1157263 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:56.151615 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:58.650293 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:56.208339 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:58.209144 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:57.810894 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:00.308972 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:58.016860 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:58.031684 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:50:58.031747 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:50:58.073389 1157708 cri.go:89] found id: ""
	I0318 13:50:58.073415 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.073427 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:50:58.073434 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:50:58.073497 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:50:58.114439 1157708 cri.go:89] found id: ""
	I0318 13:50:58.114471 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.114483 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:50:58.114490 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:50:58.114553 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:50:58.165440 1157708 cri.go:89] found id: ""
	I0318 13:50:58.165466 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.165476 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:50:58.165484 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:50:58.165569 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:50:58.207083 1157708 cri.go:89] found id: ""
	I0318 13:50:58.207117 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.207129 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:50:58.207137 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:50:58.207227 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:50:58.252945 1157708 cri.go:89] found id: ""
	I0318 13:50:58.252973 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.252985 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:50:58.252993 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:50:58.253055 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:50:58.292437 1157708 cri.go:89] found id: ""
	I0318 13:50:58.292464 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.292474 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:50:58.292480 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:50:58.292530 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:50:58.335359 1157708 cri.go:89] found id: ""
	I0318 13:50:58.335403 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.335415 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:50:58.335423 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:50:58.335511 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:50:58.381434 1157708 cri.go:89] found id: ""
	I0318 13:50:58.381473 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.381484 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:50:58.381494 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:50:58.381511 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:50:58.432270 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:50:58.432319 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:50:58.447658 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:50:58.447686 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:50:58.523163 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:50:58.523186 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:50:58.523207 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:50:58.599544 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:50:58.599586 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:01.141653 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:01.156996 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:01.157070 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:01.192720 1157708 cri.go:89] found id: ""
	I0318 13:51:01.192762 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.192775 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:01.192785 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:01.192866 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:01.232678 1157708 cri.go:89] found id: ""
	I0318 13:51:01.232705 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.232716 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:01.232723 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:01.232795 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:01.270637 1157708 cri.go:89] found id: ""
	I0318 13:51:01.270666 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.270676 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:01.270684 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:01.270746 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:01.308891 1157708 cri.go:89] found id: ""
	I0318 13:51:01.308921 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.308931 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:01.308939 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:01.309003 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:01.349301 1157708 cri.go:89] found id: ""
	I0318 13:51:01.349334 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.349346 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:01.349354 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:01.349420 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:01.394010 1157708 cri.go:89] found id: ""
	I0318 13:51:01.394039 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.394047 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:01.394053 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:01.394103 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:01.432778 1157708 cri.go:89] found id: ""
	I0318 13:51:01.432804 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.432815 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:01.432823 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:01.432886 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:01.471974 1157708 cri.go:89] found id: ""
	I0318 13:51:01.472002 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.472011 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:01.472022 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:01.472040 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:01.524855 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:01.524893 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:01.540939 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:01.540967 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:01.618318 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:01.618350 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:01.618367 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:01.695717 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:01.695755 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:00.650906 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:02.651512 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:00.211620 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:02.708336 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:02.312320 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:04.808301 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:04.241781 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:04.256276 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:04.256373 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:04.297129 1157708 cri.go:89] found id: ""
	I0318 13:51:04.297158 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.297170 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:04.297179 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:04.297247 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:04.341743 1157708 cri.go:89] found id: ""
	I0318 13:51:04.341774 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.341786 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:04.341793 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:04.341858 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:04.384400 1157708 cri.go:89] found id: ""
	I0318 13:51:04.384434 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.384445 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:04.384453 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:04.384510 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:04.425459 1157708 cri.go:89] found id: ""
	I0318 13:51:04.425487 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.425500 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:04.425510 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:04.425563 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:04.463091 1157708 cri.go:89] found id: ""
	I0318 13:51:04.463125 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.463137 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:04.463145 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:04.463210 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:04.503023 1157708 cri.go:89] found id: ""
	I0318 13:51:04.503057 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.503069 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:04.503077 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:04.503141 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:04.542083 1157708 cri.go:89] found id: ""
	I0318 13:51:04.542116 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.542127 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:04.542136 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:04.542207 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:04.583097 1157708 cri.go:89] found id: ""
	I0318 13:51:04.583128 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.583137 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:04.583146 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:04.583161 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:04.650476 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:04.650518 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:04.706073 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:04.706111 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:04.723595 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:04.723628 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:04.800278 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:04.800301 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:04.800316 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:07.388144 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:07.403636 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:07.403711 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:07.443337 1157708 cri.go:89] found id: ""
	I0318 13:51:07.443365 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.443379 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:07.443386 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:07.443442 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:07.482417 1157708 cri.go:89] found id: ""
	I0318 13:51:07.482453 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.482462 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:07.482469 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:07.482521 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:07.518445 1157708 cri.go:89] found id: ""
	I0318 13:51:07.518474 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.518485 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:07.518493 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:07.518563 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:07.555628 1157708 cri.go:89] found id: ""
	I0318 13:51:07.555661 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.555673 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:07.555681 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:07.555760 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:07.593805 1157708 cri.go:89] found id: ""
	I0318 13:51:07.593842 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.593856 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:07.593873 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:07.593936 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:07.638206 1157708 cri.go:89] found id: ""
	I0318 13:51:07.638234 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.638242 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:07.638249 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:07.638313 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:07.679526 1157708 cri.go:89] found id: ""
	I0318 13:51:07.679561 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.679573 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:07.679581 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:07.679635 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:07.724468 1157708 cri.go:89] found id: ""
	I0318 13:51:07.724494 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.724504 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:07.724516 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:07.724533 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:07.766491 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:07.766522 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:07.823782 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:07.823833 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:07.839316 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:07.839342 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:07.924790 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:07.924821 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:07.924841 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:05.151629 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:07.651485 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:05.210455 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:07.709381 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:07.310000 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:09.808337 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:10.513618 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:10.528711 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:10.528790 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:10.571217 1157708 cri.go:89] found id: ""
	I0318 13:51:10.571254 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.571267 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:10.571275 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:10.571335 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:10.608096 1157708 cri.go:89] found id: ""
	I0318 13:51:10.608129 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.608140 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:10.608149 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:10.608217 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:10.649245 1157708 cri.go:89] found id: ""
	I0318 13:51:10.649274 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.649283 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:10.649290 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:10.649365 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:10.693462 1157708 cri.go:89] found id: ""
	I0318 13:51:10.693495 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.693506 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:10.693515 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:10.693589 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:10.740434 1157708 cri.go:89] found id: ""
	I0318 13:51:10.740464 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.740474 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:10.740480 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:10.740543 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:10.781062 1157708 cri.go:89] found id: ""
	I0318 13:51:10.781099 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.781108 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:10.781114 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:10.781167 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:10.828480 1157708 cri.go:89] found id: ""
	I0318 13:51:10.828513 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.828524 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:10.828532 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:10.828605 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:10.868508 1157708 cri.go:89] found id: ""
	I0318 13:51:10.868535 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.868543 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:10.868553 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:10.868565 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:10.923925 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:10.923961 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:10.939254 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:10.939283 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:11.031307 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:11.031334 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:11.031351 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:11.121563 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:11.121618 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:10.151278 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:12.650083 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:10.209877 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:12.709070 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:12.308084 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:14.309651 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:16.312985 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:13.681147 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:13.696705 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:13.696812 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:13.740904 1157708 cri.go:89] found id: ""
	I0318 13:51:13.740937 1157708 logs.go:276] 0 containers: []
	W0318 13:51:13.740949 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:13.740957 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:13.741038 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:13.779625 1157708 cri.go:89] found id: ""
	I0318 13:51:13.779659 1157708 logs.go:276] 0 containers: []
	W0318 13:51:13.779672 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:13.779681 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:13.779762 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:13.822183 1157708 cri.go:89] found id: ""
	I0318 13:51:13.822218 1157708 logs.go:276] 0 containers: []
	W0318 13:51:13.822231 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:13.822239 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:13.822302 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:13.873686 1157708 cri.go:89] found id: ""
	I0318 13:51:13.873728 1157708 logs.go:276] 0 containers: []
	W0318 13:51:13.873741 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:13.873749 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:13.873821 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:13.919772 1157708 cri.go:89] found id: ""
	I0318 13:51:13.919802 1157708 logs.go:276] 0 containers: []
	W0318 13:51:13.919811 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:13.919817 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:13.919874 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:13.958809 1157708 cri.go:89] found id: ""
	I0318 13:51:13.958837 1157708 logs.go:276] 0 containers: []
	W0318 13:51:13.958846 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:13.958852 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:13.958928 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:14.000537 1157708 cri.go:89] found id: ""
	I0318 13:51:14.000568 1157708 logs.go:276] 0 containers: []
	W0318 13:51:14.000580 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:14.000588 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:14.000638 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:14.041234 1157708 cri.go:89] found id: ""
	I0318 13:51:14.041265 1157708 logs.go:276] 0 containers: []
	W0318 13:51:14.041275 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:14.041285 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:14.041299 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:14.085435 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:14.085462 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:14.144336 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:14.144374 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:14.159972 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:14.160000 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:14.242027 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:14.242048 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:14.242061 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:16.821805 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:16.840202 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:16.840272 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:16.898088 1157708 cri.go:89] found id: ""
	I0318 13:51:16.898120 1157708 logs.go:276] 0 containers: []
	W0318 13:51:16.898129 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:16.898135 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:16.898203 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:16.953180 1157708 cri.go:89] found id: ""
	I0318 13:51:16.953209 1157708 logs.go:276] 0 containers: []
	W0318 13:51:16.953221 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:16.953229 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:16.953288 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:17.006995 1157708 cri.go:89] found id: ""
	I0318 13:51:17.007048 1157708 logs.go:276] 0 containers: []
	W0318 13:51:17.007062 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:17.007070 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:17.007136 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:17.049756 1157708 cri.go:89] found id: ""
	I0318 13:51:17.049798 1157708 logs.go:276] 0 containers: []
	W0318 13:51:17.049809 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:17.049817 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:17.049885 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:17.092026 1157708 cri.go:89] found id: ""
	I0318 13:51:17.092055 1157708 logs.go:276] 0 containers: []
	W0318 13:51:17.092066 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:17.092074 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:17.092144 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:17.137722 1157708 cri.go:89] found id: ""
	I0318 13:51:17.137756 1157708 logs.go:276] 0 containers: []
	W0318 13:51:17.137769 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:17.137778 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:17.137875 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:17.180778 1157708 cri.go:89] found id: ""
	I0318 13:51:17.180808 1157708 logs.go:276] 0 containers: []
	W0318 13:51:17.180816 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:17.180822 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:17.180885 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:17.227629 1157708 cri.go:89] found id: ""
	I0318 13:51:17.227664 1157708 logs.go:276] 0 containers: []
	W0318 13:51:17.227675 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:17.227688 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:17.227706 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:17.272559 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:17.272588 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:17.333953 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:17.333994 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:17.349765 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:17.349793 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:17.434436 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:17.434465 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:17.434483 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:14.650201 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:17.151069 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:15.208570 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:17.210168 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:19.707753 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:18.808252 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:21.309389 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:20.014314 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:20.031106 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:20.031172 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:20.067727 1157708 cri.go:89] found id: ""
	I0318 13:51:20.067753 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.067765 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:20.067773 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:20.067844 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:20.108455 1157708 cri.go:89] found id: ""
	I0318 13:51:20.108482 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.108491 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:20.108497 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:20.108563 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:20.152257 1157708 cri.go:89] found id: ""
	I0318 13:51:20.152285 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.152310 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:20.152317 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:20.152394 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:20.191480 1157708 cri.go:89] found id: ""
	I0318 13:51:20.191509 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.191520 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:20.191529 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:20.191599 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:20.235677 1157708 cri.go:89] found id: ""
	I0318 13:51:20.235705 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.235716 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:20.235723 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:20.235796 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:20.274794 1157708 cri.go:89] found id: ""
	I0318 13:51:20.274822 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.274833 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:20.274842 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:20.274907 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:20.321987 1157708 cri.go:89] found id: ""
	I0318 13:51:20.322019 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.322031 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:20.322040 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:20.322097 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:20.361292 1157708 cri.go:89] found id: ""
	I0318 13:51:20.361319 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.361328 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:20.361338 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:20.361360 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:20.434481 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:20.434509 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:20.434527 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:20.518203 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:20.518244 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:20.560241 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:20.560271 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:20.615489 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:20.615526 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:19.151244 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:21.151320 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:23.651849 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:21.708423 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:24.207976 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:23.310491 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:25.808443 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:23.132509 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:23.146447 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:23.146559 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:23.189576 1157708 cri.go:89] found id: ""
	I0318 13:51:23.189613 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.189625 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:23.189634 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:23.189688 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:23.229700 1157708 cri.go:89] found id: ""
	I0318 13:51:23.229731 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.229740 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:23.229747 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:23.229812 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:23.272713 1157708 cri.go:89] found id: ""
	I0318 13:51:23.272747 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.272759 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:23.272768 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:23.272834 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:23.313988 1157708 cri.go:89] found id: ""
	I0318 13:51:23.314014 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.314022 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:23.314028 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:23.314087 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:23.360195 1157708 cri.go:89] found id: ""
	I0318 13:51:23.360230 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.360243 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:23.360251 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:23.360321 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:23.400657 1157708 cri.go:89] found id: ""
	I0318 13:51:23.400685 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.400694 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:23.400707 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:23.400760 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:23.442841 1157708 cri.go:89] found id: ""
	I0318 13:51:23.442873 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.442893 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:23.442900 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:23.442970 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:23.483467 1157708 cri.go:89] found id: ""
	I0318 13:51:23.483504 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.483516 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:23.483528 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:23.483545 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:23.538581 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:23.538616 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:23.555392 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:23.555421 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:23.634919 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:23.634945 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:23.634970 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:23.718098 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:23.718144 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:26.270369 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:26.287165 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:26.287232 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:26.331773 1157708 cri.go:89] found id: ""
	I0318 13:51:26.331807 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.331832 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:26.331850 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:26.331923 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:26.372067 1157708 cri.go:89] found id: ""
	I0318 13:51:26.372095 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.372102 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:26.372109 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:26.372182 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:26.411883 1157708 cri.go:89] found id: ""
	I0318 13:51:26.411910 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.411919 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:26.411924 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:26.411980 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:26.449087 1157708 cri.go:89] found id: ""
	I0318 13:51:26.449122 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.449131 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:26.449137 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:26.449188 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:26.492126 1157708 cri.go:89] found id: ""
	I0318 13:51:26.492162 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.492174 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:26.492182 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:26.492251 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:26.529621 1157708 cri.go:89] found id: ""
	I0318 13:51:26.529656 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.529668 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:26.529677 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:26.529764 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:26.568853 1157708 cri.go:89] found id: ""
	I0318 13:51:26.568888 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.568899 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:26.568907 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:26.568979 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:26.607882 1157708 cri.go:89] found id: ""
	I0318 13:51:26.607917 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.607929 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:26.607942 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:26.607959 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:26.648736 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:26.648768 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:26.704641 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:26.704684 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:26.720681 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:26.720715 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:26.799577 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:26.799608 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:26.799627 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:26.152083 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:28.651445 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:26.208160 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:28.708468 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:28.309859 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:30.806690 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:29.389391 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:29.404122 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:29.404195 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:29.446761 1157708 cri.go:89] found id: ""
	I0318 13:51:29.446787 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.446796 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:29.446803 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:29.446857 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:29.483974 1157708 cri.go:89] found id: ""
	I0318 13:51:29.484007 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.484020 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:29.484028 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:29.484099 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:29.521894 1157708 cri.go:89] found id: ""
	I0318 13:51:29.521922 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.521931 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:29.521937 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:29.521993 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:29.562918 1157708 cri.go:89] found id: ""
	I0318 13:51:29.562948 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.562957 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:29.562963 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:29.563017 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:29.600372 1157708 cri.go:89] found id: ""
	I0318 13:51:29.600412 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.600424 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:29.600432 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:29.600500 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:29.638902 1157708 cri.go:89] found id: ""
	I0318 13:51:29.638933 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.638945 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:29.638953 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:29.639019 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:29.679041 1157708 cri.go:89] found id: ""
	I0318 13:51:29.679071 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.679079 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:29.679085 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:29.679142 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:29.719168 1157708 cri.go:89] found id: ""
	I0318 13:51:29.719201 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.719213 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:29.719224 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:29.719244 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:29.764050 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:29.764077 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:29.822136 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:29.822174 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:29.839485 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:29.839515 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:29.914984 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:29.915006 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:29.915023 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:32.497388 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:32.512151 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:32.512215 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:32.549566 1157708 cri.go:89] found id: ""
	I0318 13:51:32.549602 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.549614 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:32.549623 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:32.549693 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:32.588516 1157708 cri.go:89] found id: ""
	I0318 13:51:32.588546 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.588555 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:32.588562 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:32.588615 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:32.628425 1157708 cri.go:89] found id: ""
	I0318 13:51:32.628453 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.628462 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:32.628470 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:32.628546 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:32.670851 1157708 cri.go:89] found id: ""
	I0318 13:51:32.670874 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.670888 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:32.670895 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:32.670944 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:32.709614 1157708 cri.go:89] found id: ""
	I0318 13:51:32.709642 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.709656 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:32.709666 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:32.709738 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:32.749774 1157708 cri.go:89] found id: ""
	I0318 13:51:32.749808 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.749819 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:32.749828 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:32.749896 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:32.789502 1157708 cri.go:89] found id: ""
	I0318 13:51:32.789525 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.789534 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:32.789540 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:32.789589 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:32.834926 1157708 cri.go:89] found id: ""
	I0318 13:51:32.834948 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.834956 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:32.834965 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:32.834980 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:32.887365 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:32.887404 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:32.903584 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:32.903610 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:32.978924 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:32.978958 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:32.978988 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:31.151276 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:33.651395 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:30.709136 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:32.709549 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:32.808076 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:35.308827 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:33.055386 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:33.055424 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:35.603881 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:35.618083 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:35.618167 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:35.659760 1157708 cri.go:89] found id: ""
	I0318 13:51:35.659802 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.659814 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:35.659820 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:35.659881 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:35.703521 1157708 cri.go:89] found id: ""
	I0318 13:51:35.703570 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.703582 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:35.703589 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:35.703651 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:35.744411 1157708 cri.go:89] found id: ""
	I0318 13:51:35.744444 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.744455 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:35.744463 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:35.744548 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:35.783704 1157708 cri.go:89] found id: ""
	I0318 13:51:35.783735 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.783746 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:35.783754 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:35.783819 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:35.824000 1157708 cri.go:89] found id: ""
	I0318 13:51:35.824031 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.824042 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:35.824049 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:35.824117 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:35.860260 1157708 cri.go:89] found id: ""
	I0318 13:51:35.860289 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.860299 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:35.860308 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:35.860388 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:35.895154 1157708 cri.go:89] found id: ""
	I0318 13:51:35.895189 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.895201 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:35.895209 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:35.895276 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:35.936916 1157708 cri.go:89] found id: ""
	I0318 13:51:35.936942 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.936951 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:35.936961 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:35.936977 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:35.951715 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:35.951745 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:36.027431 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:36.027457 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:36.027474 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:36.113339 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:36.113386 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:36.160132 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:36.160170 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:36.151331 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:38.650891 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:35.208500 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:37.209692 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:39.709776 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:37.807423 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:39.809226 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:38.711710 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:38.726104 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:38.726162 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:38.763251 1157708 cri.go:89] found id: ""
	I0318 13:51:38.763281 1157708 logs.go:276] 0 containers: []
	W0318 13:51:38.763291 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:38.763300 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:38.763364 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:38.802521 1157708 cri.go:89] found id: ""
	I0318 13:51:38.802548 1157708 logs.go:276] 0 containers: []
	W0318 13:51:38.802556 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:38.802562 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:38.802616 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:38.843778 1157708 cri.go:89] found id: ""
	I0318 13:51:38.843817 1157708 logs.go:276] 0 containers: []
	W0318 13:51:38.843831 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:38.843839 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:38.843909 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:38.884966 1157708 cri.go:89] found id: ""
	I0318 13:51:38.885003 1157708 logs.go:276] 0 containers: []
	W0318 13:51:38.885015 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:38.885024 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:38.885090 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:38.925653 1157708 cri.go:89] found id: ""
	I0318 13:51:38.925681 1157708 logs.go:276] 0 containers: []
	W0318 13:51:38.925690 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:38.925696 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:38.925757 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:38.964126 1157708 cri.go:89] found id: ""
	I0318 13:51:38.964156 1157708 logs.go:276] 0 containers: []
	W0318 13:51:38.964169 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:38.964177 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:38.964228 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:39.004864 1157708 cri.go:89] found id: ""
	I0318 13:51:39.004898 1157708 logs.go:276] 0 containers: []
	W0318 13:51:39.004910 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:39.004919 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:39.004991 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:39.041555 1157708 cri.go:89] found id: ""
	I0318 13:51:39.041588 1157708 logs.go:276] 0 containers: []
	W0318 13:51:39.041600 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:39.041611 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:39.041626 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:39.092984 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:39.093019 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:39.110492 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:39.110526 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:39.186785 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:39.186848 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:39.186872 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:39.272847 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:39.272891 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:41.829404 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:41.843407 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:41.843479 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:41.883129 1157708 cri.go:89] found id: ""
	I0318 13:51:41.883164 1157708 logs.go:276] 0 containers: []
	W0318 13:51:41.883175 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:41.883184 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:41.883246 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:41.924083 1157708 cri.go:89] found id: ""
	I0318 13:51:41.924123 1157708 logs.go:276] 0 containers: []
	W0318 13:51:41.924136 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:41.924144 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:41.924209 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:41.963029 1157708 cri.go:89] found id: ""
	I0318 13:51:41.963058 1157708 logs.go:276] 0 containers: []
	W0318 13:51:41.963069 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:41.963084 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:41.963155 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:42.003393 1157708 cri.go:89] found id: ""
	I0318 13:51:42.003430 1157708 logs.go:276] 0 containers: []
	W0318 13:51:42.003442 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:42.003450 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:42.003511 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:42.041938 1157708 cri.go:89] found id: ""
	I0318 13:51:42.041968 1157708 logs.go:276] 0 containers: []
	W0318 13:51:42.041977 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:42.041983 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:42.042044 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:42.079685 1157708 cri.go:89] found id: ""
	I0318 13:51:42.079718 1157708 logs.go:276] 0 containers: []
	W0318 13:51:42.079731 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:42.079740 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:42.079805 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:42.118112 1157708 cri.go:89] found id: ""
	I0318 13:51:42.118144 1157708 logs.go:276] 0 containers: []
	W0318 13:51:42.118156 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:42.118164 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:42.118230 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:42.157287 1157708 cri.go:89] found id: ""
	I0318 13:51:42.157319 1157708 logs.go:276] 0 containers: []
	W0318 13:51:42.157331 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:42.157343 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:42.157360 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:42.213006 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:42.213038 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:42.228452 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:42.228481 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:42.302523 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:42.302545 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:42.302558 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:42.387994 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:42.388062 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:40.651272 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:43.151009 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:42.208825 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:44.211676 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:42.310765 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:44.313778 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:44.934501 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:44.949163 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:44.949245 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:44.991885 1157708 cri.go:89] found id: ""
	I0318 13:51:44.991914 1157708 logs.go:276] 0 containers: []
	W0318 13:51:44.991924 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:44.991931 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:44.992008 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:45.029868 1157708 cri.go:89] found id: ""
	I0318 13:51:45.029904 1157708 logs.go:276] 0 containers: []
	W0318 13:51:45.029915 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:45.029922 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:45.030017 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:45.067755 1157708 cri.go:89] found id: ""
	I0318 13:51:45.067785 1157708 logs.go:276] 0 containers: []
	W0318 13:51:45.067794 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:45.067803 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:45.067857 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:45.106296 1157708 cri.go:89] found id: ""
	I0318 13:51:45.106323 1157708 logs.go:276] 0 containers: []
	W0318 13:51:45.106333 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:45.106339 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:45.106405 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:45.145746 1157708 cri.go:89] found id: ""
	I0318 13:51:45.145784 1157708 logs.go:276] 0 containers: []
	W0318 13:51:45.145797 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:45.145805 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:45.145868 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:45.191960 1157708 cri.go:89] found id: ""
	I0318 13:51:45.191998 1157708 logs.go:276] 0 containers: []
	W0318 13:51:45.192010 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:45.192019 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:45.192089 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:45.231436 1157708 cri.go:89] found id: ""
	I0318 13:51:45.231470 1157708 logs.go:276] 0 containers: []
	W0318 13:51:45.231483 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:45.231491 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:45.231559 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:45.274521 1157708 cri.go:89] found id: ""
	I0318 13:51:45.274554 1157708 logs.go:276] 0 containers: []
	W0318 13:51:45.274565 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:45.274577 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:45.274595 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:45.338539 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:45.338580 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:45.353917 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:45.353947 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:45.447734 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:45.447755 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:45.447768 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:45.530098 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:45.530140 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:45.653161 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:48.150841 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:46.708808 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:49.209076 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:46.808315 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:49.311406 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:48.077992 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:48.092203 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:48.092273 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:48.133136 1157708 cri.go:89] found id: ""
	I0318 13:51:48.133172 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.133183 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:48.133191 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:48.133259 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:48.177727 1157708 cri.go:89] found id: ""
	I0318 13:51:48.177756 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.177768 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:48.177775 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:48.177843 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:48.217574 1157708 cri.go:89] found id: ""
	I0318 13:51:48.217600 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.217608 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:48.217614 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:48.217676 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:48.258900 1157708 cri.go:89] found id: ""
	I0318 13:51:48.258933 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.258947 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:48.258955 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:48.259046 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:48.299527 1157708 cri.go:89] found id: ""
	I0318 13:51:48.299562 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.299573 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:48.299581 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:48.299650 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:48.339692 1157708 cri.go:89] found id: ""
	I0318 13:51:48.339723 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.339732 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:48.339740 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:48.339791 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:48.378737 1157708 cri.go:89] found id: ""
	I0318 13:51:48.378764 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.378773 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:48.378779 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:48.378841 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:48.414593 1157708 cri.go:89] found id: ""
	I0318 13:51:48.414621 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.414629 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:48.414639 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:48.414654 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:48.430232 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:48.430264 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:48.513313 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:48.513335 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:48.513353 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:48.594681 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:48.594721 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:48.638681 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:48.638720 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:51.189510 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:51.204296 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:51.204383 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:51.248285 1157708 cri.go:89] found id: ""
	I0318 13:51:51.248311 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.248331 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:51.248340 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:51.248414 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:51.289022 1157708 cri.go:89] found id: ""
	I0318 13:51:51.289055 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.289068 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:51.289077 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:51.289144 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:51.329367 1157708 cri.go:89] found id: ""
	I0318 13:51:51.329405 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.329414 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:51.329420 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:51.329477 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:51.370909 1157708 cri.go:89] found id: ""
	I0318 13:51:51.370948 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.370960 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:51.370970 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:51.371043 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:51.419447 1157708 cri.go:89] found id: ""
	I0318 13:51:51.419486 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.419498 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:51.419506 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:51.419573 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:51.466302 1157708 cri.go:89] found id: ""
	I0318 13:51:51.466336 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.466348 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:51.466356 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:51.466441 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:51.505593 1157708 cri.go:89] found id: ""
	I0318 13:51:51.505631 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.505644 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:51.505652 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:51.505724 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:51.543815 1157708 cri.go:89] found id: ""
	I0318 13:51:51.543843 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.543852 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:51.543863 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:51.543885 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:51.596271 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:51.596305 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:51.612441 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:51.612477 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:51.690591 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:51.690614 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:51.690631 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:51.771781 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:51.771821 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:50.650088 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:52.650307 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:51.710583 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:54.208629 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:51.808743 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:54.309915 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:54.319626 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:54.334041 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:54.334113 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:54.372090 1157708 cri.go:89] found id: ""
	I0318 13:51:54.372120 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.372132 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:54.372139 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:54.372196 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:54.412513 1157708 cri.go:89] found id: ""
	I0318 13:51:54.412567 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.412580 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:54.412588 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:54.412662 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:54.453143 1157708 cri.go:89] found id: ""
	I0318 13:51:54.453176 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.453188 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:54.453196 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:54.453262 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:54.497908 1157708 cri.go:89] found id: ""
	I0318 13:51:54.497940 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.497949 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:54.497957 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:54.498025 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:54.539044 1157708 cri.go:89] found id: ""
	I0318 13:51:54.539072 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.539081 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:54.539086 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:54.539151 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:54.578916 1157708 cri.go:89] found id: ""
	I0318 13:51:54.578944 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.578951 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:54.578958 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:54.579027 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:54.617339 1157708 cri.go:89] found id: ""
	I0318 13:51:54.617366 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.617375 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:54.617380 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:54.617436 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:54.661288 1157708 cri.go:89] found id: ""
	I0318 13:51:54.661309 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.661318 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:54.661328 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:54.661344 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:54.740710 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:54.740751 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:54.789136 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:54.789176 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:54.844585 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:54.844627 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:54.860304 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:54.860351 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:54.945305 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
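	(The cycle above keeps repeating for this v1.20.0 profile: every crictl query returns no containers and the describe-nodes call fails with "connection refused" on localhost:8443, which suggests the apiserver never started and the log gatherer is simply polling. As a minimal sketch, assuming you can reach the node shell, e.g. via `minikube ssh -p <profile>` where <profile> is a placeholder for the profile under test, the same checks can be rerun by hand:)

	# Any control-plane containers CRI-O knows about; empty output matches the "0 containers" lines above
	sudo crictl ps -a --quiet --name=kube-apiserver
	# Recent kubelet logs, which normally explain why the static pods were not created
	sudo journalctl -u kubelet -n 400
	# The same describe-nodes call that fails above while the apiserver is unreachable
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig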
	I0318 13:51:57.445800 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:57.459294 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:57.459368 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:57.497411 1157708 cri.go:89] found id: ""
	I0318 13:51:57.497441 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.497449 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:57.497456 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:57.497521 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:57.535629 1157708 cri.go:89] found id: ""
	I0318 13:51:57.535663 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.535675 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:57.535684 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:57.535749 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:57.572980 1157708 cri.go:89] found id: ""
	I0318 13:51:57.573008 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.573017 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:57.573023 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:57.573071 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:57.622949 1157708 cri.go:89] found id: ""
	I0318 13:51:57.622984 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.622997 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:57.623005 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:57.623070 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:57.659877 1157708 cri.go:89] found id: ""
	I0318 13:51:57.659910 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.659921 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:57.659928 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:57.659991 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:57.705399 1157708 cri.go:89] found id: ""
	I0318 13:51:57.705481 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.705495 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:57.705504 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:57.705566 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:57.748035 1157708 cri.go:89] found id: ""
	I0318 13:51:57.748062 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.748073 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:57.748084 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:57.748144 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:57.801942 1157708 cri.go:89] found id: ""
	I0318 13:51:57.801976 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.801987 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:57.801999 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:57.802017 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:57.900157 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:57.900204 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:57.946179 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:57.946219 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:54.651363 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:57.151268 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:56.208925 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:58.708089 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:56.807605 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:58.808479 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:01.307740 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:58.000369 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:58.000412 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:58.016179 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:58.016211 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:58.101766 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:00.602151 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:00.617466 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:00.617531 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:00.661294 1157708 cri.go:89] found id: ""
	I0318 13:52:00.661328 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.661336 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:00.661342 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:00.661400 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:00.706227 1157708 cri.go:89] found id: ""
	I0318 13:52:00.706257 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.706267 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:00.706275 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:00.706342 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:00.746482 1157708 cri.go:89] found id: ""
	I0318 13:52:00.746515 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.746528 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:00.746536 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:00.746600 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:00.789242 1157708 cri.go:89] found id: ""
	I0318 13:52:00.789272 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.789281 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:00.789287 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:00.789348 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:00.832463 1157708 cri.go:89] found id: ""
	I0318 13:52:00.832503 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.832514 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:00.832522 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:00.832581 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:00.869790 1157708 cri.go:89] found id: ""
	I0318 13:52:00.869819 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.869830 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:00.869839 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:00.869904 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:00.909656 1157708 cri.go:89] found id: ""
	I0318 13:52:00.909685 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.909693 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:00.909700 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:00.909754 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:00.953818 1157708 cri.go:89] found id: ""
	I0318 13:52:00.953856 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.953868 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:00.953882 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:00.953898 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:01.032822 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:01.032848 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:01.032865 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:01.111701 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:01.111747 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:01.168270 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:01.168300 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:01.220376 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:01.220408 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:59.650359 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:01.650627 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:03.651830 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:00.709561 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:03.207829 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:03.808915 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:06.307915 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:03.737354 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:03.756282 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:03.756382 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:03.804716 1157708 cri.go:89] found id: ""
	I0318 13:52:03.804757 1157708 logs.go:276] 0 containers: []
	W0318 13:52:03.804768 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:03.804777 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:03.804838 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:03.864559 1157708 cri.go:89] found id: ""
	I0318 13:52:03.864596 1157708 logs.go:276] 0 containers: []
	W0318 13:52:03.864609 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:03.864617 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:03.864687 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:03.918397 1157708 cri.go:89] found id: ""
	I0318 13:52:03.918425 1157708 logs.go:276] 0 containers: []
	W0318 13:52:03.918433 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:03.918439 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:03.918504 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:03.961729 1157708 cri.go:89] found id: ""
	I0318 13:52:03.961762 1157708 logs.go:276] 0 containers: []
	W0318 13:52:03.961773 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:03.961780 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:03.961856 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:04.006261 1157708 cri.go:89] found id: ""
	I0318 13:52:04.006299 1157708 logs.go:276] 0 containers: []
	W0318 13:52:04.006311 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:04.006319 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:04.006404 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:04.050284 1157708 cri.go:89] found id: ""
	I0318 13:52:04.050313 1157708 logs.go:276] 0 containers: []
	W0318 13:52:04.050321 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:04.050327 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:04.050384 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:04.093789 1157708 cri.go:89] found id: ""
	I0318 13:52:04.093827 1157708 logs.go:276] 0 containers: []
	W0318 13:52:04.093839 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:04.093847 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:04.093916 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:04.135047 1157708 cri.go:89] found id: ""
	I0318 13:52:04.135091 1157708 logs.go:276] 0 containers: []
	W0318 13:52:04.135110 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:04.135124 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:04.135142 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:04.192899 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:04.192937 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:04.209080 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:04.209130 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:04.286388 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:04.286413 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:04.286428 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:04.371836 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:04.371877 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:06.923039 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:06.938743 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:06.938826 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:06.984600 1157708 cri.go:89] found id: ""
	I0318 13:52:06.984634 1157708 logs.go:276] 0 containers: []
	W0318 13:52:06.984646 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:06.984655 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:06.984721 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:07.023849 1157708 cri.go:89] found id: ""
	I0318 13:52:07.023891 1157708 logs.go:276] 0 containers: []
	W0318 13:52:07.023914 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:07.023922 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:07.023984 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:07.071972 1157708 cri.go:89] found id: ""
	I0318 13:52:07.072002 1157708 logs.go:276] 0 containers: []
	W0318 13:52:07.072015 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:07.072022 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:07.072087 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:07.109070 1157708 cri.go:89] found id: ""
	I0318 13:52:07.109105 1157708 logs.go:276] 0 containers: []
	W0318 13:52:07.109118 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:07.109126 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:07.109183 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:07.149879 1157708 cri.go:89] found id: ""
	I0318 13:52:07.149910 1157708 logs.go:276] 0 containers: []
	W0318 13:52:07.149918 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:07.149925 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:07.149990 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:07.195946 1157708 cri.go:89] found id: ""
	I0318 13:52:07.195976 1157708 logs.go:276] 0 containers: []
	W0318 13:52:07.195987 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:07.195995 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:07.196062 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:07.238126 1157708 cri.go:89] found id: ""
	I0318 13:52:07.238152 1157708 logs.go:276] 0 containers: []
	W0318 13:52:07.238162 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:07.238168 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:07.238233 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:07.278218 1157708 cri.go:89] found id: ""
	I0318 13:52:07.278255 1157708 logs.go:276] 0 containers: []
	W0318 13:52:07.278268 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:07.278282 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:07.278300 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:07.294926 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:07.294955 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:07.383431 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:07.383455 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:07.383468 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:07.467306 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:07.467348 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:07.515996 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:07.516028 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:06.151546 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:08.162392 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:05.208765 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:07.210243 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:09.708076 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:08.309045 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:10.807773 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:10.071945 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:10.088587 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:10.088654 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:10.130528 1157708 cri.go:89] found id: ""
	I0318 13:52:10.130566 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.130579 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:10.130588 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:10.130663 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:10.173113 1157708 cri.go:89] found id: ""
	I0318 13:52:10.173150 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.173168 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:10.173178 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:10.173243 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:10.218941 1157708 cri.go:89] found id: ""
	I0318 13:52:10.218976 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.218987 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:10.218996 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:10.219068 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:10.262331 1157708 cri.go:89] found id: ""
	I0318 13:52:10.262368 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.262381 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:10.262389 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:10.262460 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:10.303329 1157708 cri.go:89] found id: ""
	I0318 13:52:10.303363 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.303378 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:10.303386 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:10.303457 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:10.344458 1157708 cri.go:89] found id: ""
	I0318 13:52:10.344486 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.344497 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:10.344505 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:10.344567 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:10.386753 1157708 cri.go:89] found id: ""
	I0318 13:52:10.386786 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.386797 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:10.386806 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:10.386876 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:10.425922 1157708 cri.go:89] found id: ""
	I0318 13:52:10.425954 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.425965 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:10.425978 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:10.426000 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:10.441134 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:10.441168 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:10.514865 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:10.514899 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:10.514916 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:10.592061 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:10.592105 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:10.642900 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:10.642935 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:10.651432 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:13.150537 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:12.208498 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:14.209684 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:12.808250 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:15.308639 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:13.199176 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:13.215155 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:13.215232 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:13.256107 1157708 cri.go:89] found id: ""
	I0318 13:52:13.256139 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.256151 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:13.256160 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:13.256231 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:13.296562 1157708 cri.go:89] found id: ""
	I0318 13:52:13.296597 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.296608 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:13.296615 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:13.296667 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:13.336633 1157708 cri.go:89] found id: ""
	I0318 13:52:13.336662 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.336672 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:13.336678 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:13.336737 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:13.382597 1157708 cri.go:89] found id: ""
	I0318 13:52:13.382639 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.382654 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:13.382663 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:13.382733 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:13.430257 1157708 cri.go:89] found id: ""
	I0318 13:52:13.430292 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.430304 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:13.430312 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:13.430373 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:13.466854 1157708 cri.go:89] found id: ""
	I0318 13:52:13.466881 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.466889 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:13.466896 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:13.466945 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:13.510297 1157708 cri.go:89] found id: ""
	I0318 13:52:13.510333 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.510344 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:13.510352 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:13.510420 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:13.551476 1157708 cri.go:89] found id: ""
	I0318 13:52:13.551508 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.551517 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:13.551528 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:13.551542 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:13.634561 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:13.634585 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:13.634598 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:13.720088 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:13.720129 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:13.760621 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:13.760659 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:13.817311 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:13.817350 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:16.334094 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:16.349779 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:16.349866 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:16.394131 1157708 cri.go:89] found id: ""
	I0318 13:52:16.394157 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.394167 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:16.394175 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:16.394239 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:16.438185 1157708 cri.go:89] found id: ""
	I0318 13:52:16.438232 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.438245 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:16.438264 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:16.438335 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:16.476872 1157708 cri.go:89] found id: ""
	I0318 13:52:16.476920 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.476932 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:16.476939 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:16.477007 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:16.518226 1157708 cri.go:89] found id: ""
	I0318 13:52:16.518253 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.518262 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:16.518269 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:16.518327 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:16.559119 1157708 cri.go:89] found id: ""
	I0318 13:52:16.559160 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.559174 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:16.559182 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:16.559260 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:16.600050 1157708 cri.go:89] found id: ""
	I0318 13:52:16.600079 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.600088 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:16.600094 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:16.600160 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:16.640621 1157708 cri.go:89] found id: ""
	I0318 13:52:16.640649 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.640660 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:16.640668 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:16.640733 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:16.680541 1157708 cri.go:89] found id: ""
	I0318 13:52:16.680571 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.680580 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:16.680590 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:16.680602 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:16.766378 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:16.766415 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:16.811846 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:16.811883 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:16.871940 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:16.871981 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:16.887494 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:16.887521 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:16.961924 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:15.650599 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:17.650902 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:16.710336 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:19.207426 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:17.807338 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:19.809418 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:19.462316 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:19.478819 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:19.478885 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:19.523280 1157708 cri.go:89] found id: ""
	I0318 13:52:19.523314 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.523334 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:19.523342 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:19.523417 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:19.560675 1157708 cri.go:89] found id: ""
	I0318 13:52:19.560708 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.560717 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:19.560725 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:19.560790 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:19.598739 1157708 cri.go:89] found id: ""
	I0318 13:52:19.598766 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.598773 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:19.598781 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:19.598846 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:19.639928 1157708 cri.go:89] found id: ""
	I0318 13:52:19.639960 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.639969 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:19.639975 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:19.640030 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:19.686084 1157708 cri.go:89] found id: ""
	I0318 13:52:19.686134 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.686153 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:19.686160 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:19.686231 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:19.725449 1157708 cri.go:89] found id: ""
	I0318 13:52:19.725481 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.725491 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:19.725497 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:19.725559 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:19.763855 1157708 cri.go:89] found id: ""
	I0318 13:52:19.763886 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.763897 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:19.763905 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:19.763976 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:19.805783 1157708 cri.go:89] found id: ""
	I0318 13:52:19.805813 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.805824 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:19.805836 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:19.805852 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:19.883873 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:19.883914 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:19.926368 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:19.926406 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:19.981137 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:19.981181 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:19.996242 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:19.996269 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:20.077880 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:22.578045 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:22.594170 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:22.594247 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:22.637241 1157708 cri.go:89] found id: ""
	I0318 13:52:22.637276 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.637289 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:22.637298 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:22.637363 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:22.679877 1157708 cri.go:89] found id: ""
	I0318 13:52:22.679904 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.679912 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:22.679918 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:22.679981 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:22.721865 1157708 cri.go:89] found id: ""
	I0318 13:52:22.721890 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.721903 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:22.721912 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:22.721982 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:22.763208 1157708 cri.go:89] found id: ""
	I0318 13:52:22.763242 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.763255 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:22.763264 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:22.763329 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:22.802038 1157708 cri.go:89] found id: ""
	I0318 13:52:22.802071 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.802081 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:22.802089 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:22.802170 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:22.841206 1157708 cri.go:89] found id: ""
	I0318 13:52:22.841242 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.841254 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:22.841263 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:22.841328 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:22.885159 1157708 cri.go:89] found id: ""
	I0318 13:52:22.885197 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.885209 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:22.885218 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:22.885289 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:22.925346 1157708 cri.go:89] found id: ""
	I0318 13:52:22.925373 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.925382 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:22.925391 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:22.925407 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:19.654611 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:22.152365 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:21.208979 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:23.210660 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:22.308290 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:24.310006 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:23.006158 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:23.006193 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:23.053932 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:23.053961 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:23.107728 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:23.107768 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:23.125708 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:23.125740 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:23.202609 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:25.703096 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:25.718617 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:25.718689 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:25.756504 1157708 cri.go:89] found id: ""
	I0318 13:52:25.756530 1157708 logs.go:276] 0 containers: []
	W0318 13:52:25.756538 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:25.756544 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:25.756608 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:25.795103 1157708 cri.go:89] found id: ""
	I0318 13:52:25.795140 1157708 logs.go:276] 0 containers: []
	W0318 13:52:25.795152 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:25.795160 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:25.795240 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:25.839908 1157708 cri.go:89] found id: ""
	I0318 13:52:25.839945 1157708 logs.go:276] 0 containers: []
	W0318 13:52:25.839957 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:25.839971 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:25.840038 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:25.881677 1157708 cri.go:89] found id: ""
	I0318 13:52:25.881711 1157708 logs.go:276] 0 containers: []
	W0318 13:52:25.881723 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:25.881732 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:25.881802 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:25.923356 1157708 cri.go:89] found id: ""
	I0318 13:52:25.923386 1157708 logs.go:276] 0 containers: []
	W0318 13:52:25.923397 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:25.923410 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:25.923469 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:25.961661 1157708 cri.go:89] found id: ""
	I0318 13:52:25.961693 1157708 logs.go:276] 0 containers: []
	W0318 13:52:25.961705 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:25.961713 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:25.961785 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:26.003198 1157708 cri.go:89] found id: ""
	I0318 13:52:26.003236 1157708 logs.go:276] 0 containers: []
	W0318 13:52:26.003248 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:26.003256 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:26.003319 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:26.041436 1157708 cri.go:89] found id: ""
	I0318 13:52:26.041471 1157708 logs.go:276] 0 containers: []
	W0318 13:52:26.041483 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:26.041496 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:26.041515 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:26.056679 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:26.056716 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:26.143900 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:26.143926 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:26.143946 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:26.226929 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:26.226964 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:26.288519 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:26.288560 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:24.652661 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:27.152317 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:25.708488 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:27.708931 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:26.807624 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:28.809030 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:31.308980 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:28.846205 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:28.861117 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:28.861190 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:28.906990 1157708 cri.go:89] found id: ""
	I0318 13:52:28.907022 1157708 logs.go:276] 0 containers: []
	W0318 13:52:28.907030 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:28.907036 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:28.907099 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:28.946271 1157708 cri.go:89] found id: ""
	I0318 13:52:28.946309 1157708 logs.go:276] 0 containers: []
	W0318 13:52:28.946322 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:28.946332 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:28.946403 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:28.990158 1157708 cri.go:89] found id: ""
	I0318 13:52:28.990185 1157708 logs.go:276] 0 containers: []
	W0318 13:52:28.990193 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:28.990199 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:28.990251 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:29.035089 1157708 cri.go:89] found id: ""
	I0318 13:52:29.035123 1157708 logs.go:276] 0 containers: []
	W0318 13:52:29.035134 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:29.035143 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:29.035209 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:29.076991 1157708 cri.go:89] found id: ""
	I0318 13:52:29.077022 1157708 logs.go:276] 0 containers: []
	W0318 13:52:29.077033 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:29.077041 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:29.077104 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:29.117106 1157708 cri.go:89] found id: ""
	I0318 13:52:29.117134 1157708 logs.go:276] 0 containers: []
	W0318 13:52:29.117150 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:29.117157 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:29.117209 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:29.159675 1157708 cri.go:89] found id: ""
	I0318 13:52:29.159704 1157708 logs.go:276] 0 containers: []
	W0318 13:52:29.159714 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:29.159722 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:29.159787 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:29.202130 1157708 cri.go:89] found id: ""
	I0318 13:52:29.202157 1157708 logs.go:276] 0 containers: []
	W0318 13:52:29.202166 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:29.202176 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:29.202189 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:29.258343 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:29.258390 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:29.275314 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:29.275360 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:29.359842 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:29.359989 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:29.360036 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:29.446021 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:29.446072 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:31.990431 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:32.007443 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:32.007508 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:32.051028 1157708 cri.go:89] found id: ""
	I0318 13:52:32.051061 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.051070 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:32.051076 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:32.051144 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:32.092914 1157708 cri.go:89] found id: ""
	I0318 13:52:32.092950 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.092962 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:32.092972 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:32.093045 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:32.154257 1157708 cri.go:89] found id: ""
	I0318 13:52:32.154291 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.154302 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:32.154309 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:32.154375 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:32.200185 1157708 cri.go:89] found id: ""
	I0318 13:52:32.200224 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.200236 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:32.200244 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:32.200309 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:32.248927 1157708 cri.go:89] found id: ""
	I0318 13:52:32.248961 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.248974 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:32.248982 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:32.249051 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:32.289829 1157708 cri.go:89] found id: ""
	I0318 13:52:32.289861 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.289870 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:32.289876 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:32.289934 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:32.334346 1157708 cri.go:89] found id: ""
	I0318 13:52:32.334379 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.334387 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:32.334393 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:32.334457 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:32.378718 1157708 cri.go:89] found id: ""
	I0318 13:52:32.378761 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.378770 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:32.378780 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:32.378795 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:32.434626 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:32.434667 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:32.451366 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:32.451402 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:32.532868 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:32.532907 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:32.532924 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:32.617556 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:32.617597 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:29.650409 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:31.651019 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:30.207993 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:32.214101 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:34.710602 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:33.807499 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:35.807738 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:35.165067 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:35.181325 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:35.181404 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:35.220570 1157708 cri.go:89] found id: ""
	I0318 13:52:35.220601 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.220612 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:35.220619 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:35.220684 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:35.263798 1157708 cri.go:89] found id: ""
	I0318 13:52:35.263830 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.263841 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:35.263848 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:35.263915 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:35.309447 1157708 cri.go:89] found id: ""
	I0318 13:52:35.309477 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.309489 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:35.309497 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:35.309567 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:35.353444 1157708 cri.go:89] found id: ""
	I0318 13:52:35.353472 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.353484 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:35.353493 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:35.353556 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:35.394563 1157708 cri.go:89] found id: ""
	I0318 13:52:35.394591 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.394599 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:35.394604 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:35.394662 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:35.433866 1157708 cri.go:89] found id: ""
	I0318 13:52:35.433899 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.433908 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:35.433915 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:35.433970 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:35.482769 1157708 cri.go:89] found id: ""
	I0318 13:52:35.482808 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.482820 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:35.482829 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:35.482899 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:35.521465 1157708 cri.go:89] found id: ""
	I0318 13:52:35.521498 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.521509 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:35.521520 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:35.521534 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:35.577759 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:35.577799 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:35.593052 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:35.593084 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:35.672751 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:35.672773 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:35.672787 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:35.752118 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:35.752171 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:34.157429 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:36.650725 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:38.652096 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:37.209435 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:39.710020 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:38.312679 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:40.807379 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:38.296677 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:38.312261 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:38.312365 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:38.350328 1157708 cri.go:89] found id: ""
	I0318 13:52:38.350362 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.350374 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:38.350382 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:38.350457 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:38.389891 1157708 cri.go:89] found id: ""
	I0318 13:52:38.389927 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.389939 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:38.389947 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:38.390005 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:38.430268 1157708 cri.go:89] found id: ""
	I0318 13:52:38.430296 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.430305 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:38.430311 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:38.430365 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:38.470830 1157708 cri.go:89] found id: ""
	I0318 13:52:38.470859 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.470873 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:38.470880 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:38.470945 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:38.510501 1157708 cri.go:89] found id: ""
	I0318 13:52:38.510538 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.510552 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:38.510560 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:38.510618 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:38.594899 1157708 cri.go:89] found id: ""
	I0318 13:52:38.594926 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.594935 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:38.594942 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:38.595021 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:38.649095 1157708 cri.go:89] found id: ""
	I0318 13:52:38.649121 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.649129 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:38.649136 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:38.649192 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:38.695263 1157708 cri.go:89] found id: ""
	I0318 13:52:38.695295 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.695307 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:38.695320 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:38.695336 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:38.780624 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:38.780666 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:38.825294 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:38.825335 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:38.877548 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:38.877596 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:38.893289 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:38.893319 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:38.971752 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:41.472865 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:41.487371 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:41.487484 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:41.524691 1157708 cri.go:89] found id: ""
	I0318 13:52:41.524724 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.524737 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:41.524746 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:41.524812 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:41.564094 1157708 cri.go:89] found id: ""
	I0318 13:52:41.564125 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.564137 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:41.564145 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:41.564210 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:41.600019 1157708 cri.go:89] found id: ""
	I0318 13:52:41.600047 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.600058 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:41.600064 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:41.600142 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:41.638320 1157708 cri.go:89] found id: ""
	I0318 13:52:41.638350 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.638363 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:41.638372 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:41.638438 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:41.680763 1157708 cri.go:89] found id: ""
	I0318 13:52:41.680798 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.680810 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:41.680818 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:41.680894 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:41.720645 1157708 cri.go:89] found id: ""
	I0318 13:52:41.720674 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.720683 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:41.720690 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:41.720741 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:41.759121 1157708 cri.go:89] found id: ""
	I0318 13:52:41.759151 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.759185 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:41.759195 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:41.759264 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:41.797006 1157708 cri.go:89] found id: ""
	I0318 13:52:41.797034 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.797043 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:41.797053 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:41.797070 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:41.853315 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:41.853353 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:41.869920 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:41.869952 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:41.947187 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:41.947219 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:41.947235 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:42.025475 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:42.025515 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:41.151466 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:43.153616 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:42.207999 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:44.709760 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:43.310812 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:45.808394 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:44.574724 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:44.598990 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:44.599068 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:44.649051 1157708 cri.go:89] found id: ""
	I0318 13:52:44.649137 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.649168 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:44.649180 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:44.649254 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:44.686423 1157708 cri.go:89] found id: ""
	I0318 13:52:44.686459 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.686468 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:44.686473 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:44.686536 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:44.726534 1157708 cri.go:89] found id: ""
	I0318 13:52:44.726564 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.726575 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:44.726583 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:44.726653 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:44.771190 1157708 cri.go:89] found id: ""
	I0318 13:52:44.771220 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.771232 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:44.771240 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:44.771311 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:44.811577 1157708 cri.go:89] found id: ""
	I0318 13:52:44.811602 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.811611 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:44.811618 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:44.811677 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:44.850717 1157708 cri.go:89] found id: ""
	I0318 13:52:44.850744 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.850756 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:44.850765 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:44.850824 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:44.890294 1157708 cri.go:89] found id: ""
	I0318 13:52:44.890321 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.890330 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:44.890344 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:44.890401 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:44.930690 1157708 cri.go:89] found id: ""
	I0318 13:52:44.930720 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.930730 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:44.930741 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:44.930757 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:44.946509 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:44.946544 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:45.029748 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:45.029777 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:45.029795 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:45.111348 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:45.111392 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:45.165156 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:45.165193 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:47.720701 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:47.734457 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:47.734520 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:47.771273 1157708 cri.go:89] found id: ""
	I0318 13:52:47.771304 1157708 logs.go:276] 0 containers: []
	W0318 13:52:47.771313 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:47.771319 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:47.771370 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:47.813779 1157708 cri.go:89] found id: ""
	I0318 13:52:47.813806 1157708 logs.go:276] 0 containers: []
	W0318 13:52:47.813816 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:47.813824 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:47.813892 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:47.855547 1157708 cri.go:89] found id: ""
	I0318 13:52:47.855576 1157708 logs.go:276] 0 containers: []
	W0318 13:52:47.855584 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:47.855590 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:47.855640 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:47.892651 1157708 cri.go:89] found id: ""
	I0318 13:52:47.892684 1157708 logs.go:276] 0 containers: []
	W0318 13:52:47.892692 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:47.892697 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:47.892752 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:47.935457 1157708 cri.go:89] found id: ""
	I0318 13:52:47.935488 1157708 logs.go:276] 0 containers: []
	W0318 13:52:47.935498 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:47.935505 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:47.935567 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:47.969335 1157708 cri.go:89] found id: ""
	I0318 13:52:47.969361 1157708 logs.go:276] 0 containers: []
	W0318 13:52:47.969370 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:47.969377 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:47.969441 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:45.651171 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:48.151833 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:47.209014 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:49.710231 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:48.310467 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:50.807495 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:48.007305 1157708 cri.go:89] found id: ""
	I0318 13:52:48.007339 1157708 logs.go:276] 0 containers: []
	W0318 13:52:48.007349 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:48.007355 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:48.007416 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:48.050230 1157708 cri.go:89] found id: ""
	I0318 13:52:48.050264 1157708 logs.go:276] 0 containers: []
	W0318 13:52:48.050276 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:48.050289 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:48.050304 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:48.106946 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:48.106993 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:48.123805 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:48.123837 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:48.201881 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:48.201907 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:48.201920 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:48.281533 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:48.281577 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:50.829561 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:50.847462 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:50.847555 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:50.889731 1157708 cri.go:89] found id: ""
	I0318 13:52:50.889759 1157708 logs.go:276] 0 containers: []
	W0318 13:52:50.889768 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:50.889774 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:50.889831 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:50.928176 1157708 cri.go:89] found id: ""
	I0318 13:52:50.928210 1157708 logs.go:276] 0 containers: []
	W0318 13:52:50.928222 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:50.928231 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:50.928294 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:50.965737 1157708 cri.go:89] found id: ""
	I0318 13:52:50.965772 1157708 logs.go:276] 0 containers: []
	W0318 13:52:50.965786 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:50.965794 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:50.965866 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:51.008038 1157708 cri.go:89] found id: ""
	I0318 13:52:51.008072 1157708 logs.go:276] 0 containers: []
	W0318 13:52:51.008081 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:51.008087 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:51.008159 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:51.050310 1157708 cri.go:89] found id: ""
	I0318 13:52:51.050340 1157708 logs.go:276] 0 containers: []
	W0318 13:52:51.050355 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:51.050363 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:51.050431 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:51.090514 1157708 cri.go:89] found id: ""
	I0318 13:52:51.090541 1157708 logs.go:276] 0 containers: []
	W0318 13:52:51.090550 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:51.090556 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:51.090608 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:51.131278 1157708 cri.go:89] found id: ""
	I0318 13:52:51.131305 1157708 logs.go:276] 0 containers: []
	W0318 13:52:51.131313 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:51.131320 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:51.131381 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:51.173370 1157708 cri.go:89] found id: ""
	I0318 13:52:51.173400 1157708 logs.go:276] 0 containers: []
	W0318 13:52:51.173411 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:51.173437 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:51.173464 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:51.260155 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:51.260204 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:51.309963 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:51.309998 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:51.367838 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:51.367889 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:51.382542 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:51.382570 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:51.459258 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:50.650524 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:52.651804 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:52.208655 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:54.209701 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:52.808292 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:55.309417 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:53.960212 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:53.978939 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:53.979004 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:54.030003 1157708 cri.go:89] found id: ""
	I0318 13:52:54.030038 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.030052 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:54.030060 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:54.030134 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:54.073487 1157708 cri.go:89] found id: ""
	I0318 13:52:54.073523 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.073535 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:54.073543 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:54.073611 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:54.115982 1157708 cri.go:89] found id: ""
	I0318 13:52:54.116010 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.116022 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:54.116029 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:54.116099 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:54.158320 1157708 cri.go:89] found id: ""
	I0318 13:52:54.158348 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.158359 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:54.158366 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:54.158433 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:54.198911 1157708 cri.go:89] found id: ""
	I0318 13:52:54.198939 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.198948 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:54.198955 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:54.199010 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:54.240628 1157708 cri.go:89] found id: ""
	I0318 13:52:54.240659 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.240671 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:54.240679 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:54.240750 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:54.279377 1157708 cri.go:89] found id: ""
	I0318 13:52:54.279409 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.279418 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:54.279424 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:54.279493 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:54.324160 1157708 cri.go:89] found id: ""
	I0318 13:52:54.324192 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.324205 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:54.324218 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:54.324237 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:54.371487 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:54.371527 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:54.423487 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:54.423526 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:54.438773 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:54.438800 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:54.518788 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:54.518810 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:54.518825 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:57.103590 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:57.118866 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:57.118932 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:57.159354 1157708 cri.go:89] found id: ""
	I0318 13:52:57.159383 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.159393 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:57.159399 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:57.159458 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:57.201114 1157708 cri.go:89] found id: ""
	I0318 13:52:57.201148 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.201159 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:57.201167 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:57.201233 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:57.242172 1157708 cri.go:89] found id: ""
	I0318 13:52:57.242207 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.242217 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:57.242224 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:57.242287 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:57.282578 1157708 cri.go:89] found id: ""
	I0318 13:52:57.282617 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.282629 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:57.282637 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:57.282706 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:57.323682 1157708 cri.go:89] found id: ""
	I0318 13:52:57.323707 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.323715 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:57.323721 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:57.323771 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:57.364946 1157708 cri.go:89] found id: ""
	I0318 13:52:57.364980 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.364991 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:57.365003 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:57.365076 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:57.407466 1157708 cri.go:89] found id: ""
	I0318 13:52:57.407495 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.407505 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:57.407511 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:57.407568 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:57.454663 1157708 cri.go:89] found id: ""
	I0318 13:52:57.454692 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.454701 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:57.454710 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:57.454722 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:57.509591 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:57.509633 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:57.525125 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:57.525155 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:57.602819 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:57.602845 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:57.602863 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:57.689001 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:57.689045 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:55.150589 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:57.152149 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:56.708493 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:59.208099 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:57.311780 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:59.312048 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:00.234252 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:00.249526 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:00.249615 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:00.290131 1157708 cri.go:89] found id: ""
	I0318 13:53:00.290160 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.290171 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:00.290178 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:00.290230 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:00.337794 1157708 cri.go:89] found id: ""
	I0318 13:53:00.337828 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.337840 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:00.337848 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:00.337907 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:00.378188 1157708 cri.go:89] found id: ""
	I0318 13:53:00.378224 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.378236 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:00.378244 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:00.378313 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:00.418940 1157708 cri.go:89] found id: ""
	I0318 13:53:00.418972 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.418981 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:00.418987 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:00.419039 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:00.461471 1157708 cri.go:89] found id: ""
	I0318 13:53:00.461502 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.461511 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:00.461518 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:00.461572 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:00.498781 1157708 cri.go:89] found id: ""
	I0318 13:53:00.498812 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.498821 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:00.498827 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:00.498885 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:00.540359 1157708 cri.go:89] found id: ""
	I0318 13:53:00.540395 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.540407 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:00.540414 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:00.540480 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:00.583597 1157708 cri.go:89] found id: ""
	I0318 13:53:00.583628 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.583636 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:00.583648 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:00.583666 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:00.639498 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:00.639534 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:00.655764 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:00.655792 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:00.742351 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:00.742386 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:00.742400 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:00.825250 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:00.825298 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:59.651495 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:01.651843 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:01.709438 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:04.208439 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:01.810519 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:04.308525 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:03.373938 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:03.389723 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:03.389796 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:03.429675 1157708 cri.go:89] found id: ""
	I0318 13:53:03.429710 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.429723 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:03.429732 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:03.429803 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:03.468732 1157708 cri.go:89] found id: ""
	I0318 13:53:03.468768 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.468780 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:03.468788 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:03.468841 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:03.510562 1157708 cri.go:89] found id: ""
	I0318 13:53:03.510589 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.510598 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:03.510604 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:03.510667 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:03.549842 1157708 cri.go:89] found id: ""
	I0318 13:53:03.549896 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.549909 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:03.549918 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:03.549984 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:03.590036 1157708 cri.go:89] found id: ""
	I0318 13:53:03.590076 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.590086 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:03.590093 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:03.590146 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:03.635546 1157708 cri.go:89] found id: ""
	I0318 13:53:03.635573 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.635585 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:03.635593 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:03.635660 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:03.678634 1157708 cri.go:89] found id: ""
	I0318 13:53:03.678663 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.678671 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:03.678677 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:03.678735 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:03.719666 1157708 cri.go:89] found id: ""
	I0318 13:53:03.719698 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.719709 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:03.719721 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:03.719736 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:03.762353 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:03.762388 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:03.817484 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:03.817521 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:03.832820 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:03.832850 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:03.913094 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:03.913115 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:03.913130 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:06.502556 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:06.517682 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:06.517745 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:06.562167 1157708 cri.go:89] found id: ""
	I0318 13:53:06.562202 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.562215 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:06.562223 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:06.562294 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:06.601910 1157708 cri.go:89] found id: ""
	I0318 13:53:06.601945 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.601954 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:06.601962 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:06.602022 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:06.640652 1157708 cri.go:89] found id: ""
	I0318 13:53:06.640683 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.640694 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:06.640702 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:06.640778 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:06.686781 1157708 cri.go:89] found id: ""
	I0318 13:53:06.686809 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.686818 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:06.686824 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:06.686893 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:06.727080 1157708 cri.go:89] found id: ""
	I0318 13:53:06.727107 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.727115 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:06.727121 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:06.727173 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:06.764550 1157708 cri.go:89] found id: ""
	I0318 13:53:06.764575 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.764583 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:06.764589 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:06.764641 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:06.803978 1157708 cri.go:89] found id: ""
	I0318 13:53:06.804009 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.804019 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:06.804027 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:06.804091 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:06.843983 1157708 cri.go:89] found id: ""
	I0318 13:53:06.844016 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.844027 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:06.844040 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:06.844058 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:06.905389 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:06.905424 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:06.956888 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:06.956924 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:06.973551 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:06.973594 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:07.045945 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:07.045973 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:07.045991 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:04.150852 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:06.151454 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:08.656073 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:06.211223 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:08.707939 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:06.808218 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:09.309991 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:11.310190 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:09.635227 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:09.650166 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:09.650246 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:09.695126 1157708 cri.go:89] found id: ""
	I0318 13:53:09.695153 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.695162 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:09.695168 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:09.695221 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:09.740475 1157708 cri.go:89] found id: ""
	I0318 13:53:09.740507 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.740516 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:09.740522 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:09.740591 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:09.779078 1157708 cri.go:89] found id: ""
	I0318 13:53:09.779108 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.779119 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:09.779128 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:09.779186 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:09.821252 1157708 cri.go:89] found id: ""
	I0318 13:53:09.821285 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.821297 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:09.821306 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:09.821376 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:09.860500 1157708 cri.go:89] found id: ""
	I0318 13:53:09.860537 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.860550 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:09.860558 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:09.860622 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:09.903447 1157708 cri.go:89] found id: ""
	I0318 13:53:09.903475 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.903486 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:09.903494 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:09.903550 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:09.941620 1157708 cri.go:89] found id: ""
	I0318 13:53:09.941648 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.941661 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:09.941679 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:09.941731 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:09.980066 1157708 cri.go:89] found id: ""
	I0318 13:53:09.980101 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.980113 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:09.980125 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:09.980142 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:10.036960 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:10.037000 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:10.051329 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:10.051361 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:10.130896 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:10.130925 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:10.130942 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:10.212205 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:10.212236 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:12.754623 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:12.769956 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:12.770034 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:12.809006 1157708 cri.go:89] found id: ""
	I0318 13:53:12.809032 1157708 logs.go:276] 0 containers: []
	W0318 13:53:12.809043 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:12.809051 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:12.809113 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:12.852354 1157708 cri.go:89] found id: ""
	I0318 13:53:12.852390 1157708 logs.go:276] 0 containers: []
	W0318 13:53:12.852400 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:12.852407 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:12.852476 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:12.891891 1157708 cri.go:89] found id: ""
	I0318 13:53:12.891923 1157708 logs.go:276] 0 containers: []
	W0318 13:53:12.891933 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:12.891940 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:12.891991 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:12.931753 1157708 cri.go:89] found id: ""
	I0318 13:53:12.931785 1157708 logs.go:276] 0 containers: []
	W0318 13:53:12.931795 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:12.931803 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:12.931872 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:12.971622 1157708 cri.go:89] found id: ""
	I0318 13:53:12.971653 1157708 logs.go:276] 0 containers: []
	W0318 13:53:12.971662 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:12.971669 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:12.971731 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:11.151234 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:13.157081 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:10.708177 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:13.209203 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:13.315183 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:15.808738 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:13.009893 1157708 cri.go:89] found id: ""
	I0318 13:53:13.009930 1157708 logs.go:276] 0 containers: []
	W0318 13:53:13.009943 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:13.009952 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:13.010021 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:13.045361 1157708 cri.go:89] found id: ""
	I0318 13:53:13.045396 1157708 logs.go:276] 0 containers: []
	W0318 13:53:13.045404 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:13.045411 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:13.045474 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:13.087659 1157708 cri.go:89] found id: ""
	I0318 13:53:13.087686 1157708 logs.go:276] 0 containers: []
	W0318 13:53:13.087696 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:13.087706 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:13.087721 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:13.129979 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:13.130014 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:13.183802 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:13.183836 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:13.198808 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:13.198840 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:13.272736 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:13.272764 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:13.272783 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:15.870196 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:15.887480 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:15.887551 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:15.923871 1157708 cri.go:89] found id: ""
	I0318 13:53:15.923899 1157708 logs.go:276] 0 containers: []
	W0318 13:53:15.923907 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:15.923913 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:15.923976 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:15.963870 1157708 cri.go:89] found id: ""
	I0318 13:53:15.963906 1157708 logs.go:276] 0 containers: []
	W0318 13:53:15.963917 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:15.963925 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:15.963997 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:16.009781 1157708 cri.go:89] found id: ""
	I0318 13:53:16.009815 1157708 logs.go:276] 0 containers: []
	W0318 13:53:16.009828 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:16.009837 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:16.009905 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:16.047673 1157708 cri.go:89] found id: ""
	I0318 13:53:16.047708 1157708 logs.go:276] 0 containers: []
	W0318 13:53:16.047718 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:16.047727 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:16.047793 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:16.089419 1157708 cri.go:89] found id: ""
	I0318 13:53:16.089447 1157708 logs.go:276] 0 containers: []
	W0318 13:53:16.089455 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:16.089461 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:16.089511 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:16.133563 1157708 cri.go:89] found id: ""
	I0318 13:53:16.133594 1157708 logs.go:276] 0 containers: []
	W0318 13:53:16.133604 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:16.133611 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:16.133685 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:16.174369 1157708 cri.go:89] found id: ""
	I0318 13:53:16.174404 1157708 logs.go:276] 0 containers: []
	W0318 13:53:16.174415 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:16.174423 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:16.174491 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:16.219334 1157708 cri.go:89] found id: ""
	I0318 13:53:16.219360 1157708 logs.go:276] 0 containers: []
	W0318 13:53:16.219367 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:16.219376 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:16.219389 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:16.273468 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:16.273507 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:16.288584 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:16.288612 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:16.366575 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:16.366602 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:16.366620 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:16.451031 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:16.451071 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:15.650907 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:18.151434 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:15.708015 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:17.710036 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:18.311437 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:20.807854 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:18.997536 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:19.014995 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:19.015065 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:19.064686 1157708 cri.go:89] found id: ""
	I0318 13:53:19.064719 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.064731 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:19.064739 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:19.064793 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:19.110598 1157708 cri.go:89] found id: ""
	I0318 13:53:19.110629 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.110640 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:19.110648 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:19.110739 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:19.156628 1157708 cri.go:89] found id: ""
	I0318 13:53:19.156652 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.156660 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:19.156668 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:19.156730 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:19.205993 1157708 cri.go:89] found id: ""
	I0318 13:53:19.206029 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.206042 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:19.206049 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:19.206118 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:19.253902 1157708 cri.go:89] found id: ""
	I0318 13:53:19.253935 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.253952 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:19.253960 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:19.254036 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:19.296550 1157708 cri.go:89] found id: ""
	I0318 13:53:19.296583 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.296594 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:19.296602 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:19.296667 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:19.337316 1157708 cri.go:89] found id: ""
	I0318 13:53:19.337349 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.337360 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:19.337369 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:19.337446 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:19.381503 1157708 cri.go:89] found id: ""
	I0318 13:53:19.381546 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.381565 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:19.381579 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:19.381603 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:19.461665 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:19.461691 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:19.461707 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:19.548291 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:19.548348 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:19.591296 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:19.591335 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:19.648740 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:19.648776 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:22.164970 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:22.180740 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:22.180806 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:22.223787 1157708 cri.go:89] found id: ""
	I0318 13:53:22.223820 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.223833 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:22.223840 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:22.223908 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:22.266751 1157708 cri.go:89] found id: ""
	I0318 13:53:22.266785 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.266797 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:22.266805 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:22.266876 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:22.311669 1157708 cri.go:89] found id: ""
	I0318 13:53:22.311701 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.311712 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:22.311721 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:22.311816 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:22.354687 1157708 cri.go:89] found id: ""
	I0318 13:53:22.354722 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.354733 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:22.354742 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:22.354807 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:22.395741 1157708 cri.go:89] found id: ""
	I0318 13:53:22.395767 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.395776 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:22.395782 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:22.395832 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:22.434506 1157708 cri.go:89] found id: ""
	I0318 13:53:22.434539 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.434550 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:22.434559 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:22.434612 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:22.474583 1157708 cri.go:89] found id: ""
	I0318 13:53:22.474612 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.474621 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:22.474627 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:22.474690 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:22.521898 1157708 cri.go:89] found id: ""
	I0318 13:53:22.521943 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.521955 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:22.521968 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:22.521989 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:22.537679 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:22.537711 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:22.619575 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:22.619605 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:22.619621 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:22.704206 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:22.704265 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:22.753470 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:22.753502 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:20.650340 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:22.653036 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:20.213398 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:22.709150 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:22.808837 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:25.308831 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:25.311578 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:25.329917 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:25.329979 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:25.373784 1157708 cri.go:89] found id: ""
	I0318 13:53:25.373818 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.373826 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:25.373833 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:25.373901 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:25.422490 1157708 cri.go:89] found id: ""
	I0318 13:53:25.422516 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.422526 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:25.422532 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:25.422597 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:25.459523 1157708 cri.go:89] found id: ""
	I0318 13:53:25.459552 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.459560 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:25.459567 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:25.459627 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:25.495647 1157708 cri.go:89] found id: ""
	I0318 13:53:25.495683 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.495695 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:25.495702 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:25.495772 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:25.534582 1157708 cri.go:89] found id: ""
	I0318 13:53:25.534617 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.534626 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:25.534632 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:25.534704 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:25.577526 1157708 cri.go:89] found id: ""
	I0318 13:53:25.577558 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.577566 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:25.577573 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:25.577687 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:25.616403 1157708 cri.go:89] found id: ""
	I0318 13:53:25.616433 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.616445 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:25.616453 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:25.616527 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:25.660444 1157708 cri.go:89] found id: ""
	I0318 13:53:25.660474 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.660482 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:25.660492 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:25.660506 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:25.715595 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:25.715641 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:25.730358 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:25.730390 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:25.803153 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:25.803239 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:25.803261 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:25.885339 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:25.885388 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:25.150276 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:27.151389 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:25.214042 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:27.710185 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:27.807095 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:29.807177 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:28.433506 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:28.449402 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:28.449481 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:28.490972 1157708 cri.go:89] found id: ""
	I0318 13:53:28.491007 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.491019 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:28.491028 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:28.491094 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:28.531406 1157708 cri.go:89] found id: ""
	I0318 13:53:28.531439 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.531451 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:28.531460 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:28.531513 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:28.570299 1157708 cri.go:89] found id: ""
	I0318 13:53:28.570334 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.570345 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:28.570352 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:28.570408 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:28.607950 1157708 cri.go:89] found id: ""
	I0318 13:53:28.607979 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.607987 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:28.607994 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:28.608066 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:28.648710 1157708 cri.go:89] found id: ""
	I0318 13:53:28.648744 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.648755 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:28.648762 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:28.648830 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:28.691071 1157708 cri.go:89] found id: ""
	I0318 13:53:28.691102 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.691114 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:28.691122 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:28.691183 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:28.734399 1157708 cri.go:89] found id: ""
	I0318 13:53:28.734438 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.734452 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:28.734461 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:28.734548 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:28.774859 1157708 cri.go:89] found id: ""
	I0318 13:53:28.774891 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.774902 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:28.774912 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:28.774927 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:28.831420 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:28.831459 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:28.847970 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:28.848008 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:28.926007 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:28.926034 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:28.926051 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:29.007525 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:29.007577 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:31.555401 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:31.570964 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:31.571046 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:31.611400 1157708 cri.go:89] found id: ""
	I0318 13:53:31.611427 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.611438 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:31.611445 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:31.611510 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:31.654572 1157708 cri.go:89] found id: ""
	I0318 13:53:31.654602 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.654614 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:31.654622 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:31.654725 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:31.692649 1157708 cri.go:89] found id: ""
	I0318 13:53:31.692673 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.692681 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:31.692686 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:31.692748 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:31.732208 1157708 cri.go:89] found id: ""
	I0318 13:53:31.732233 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.732244 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:31.732253 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:31.732320 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:31.774132 1157708 cri.go:89] found id: ""
	I0318 13:53:31.774163 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.774172 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:31.774178 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:31.774234 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:31.813558 1157708 cri.go:89] found id: ""
	I0318 13:53:31.813582 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.813590 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:31.813597 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:31.813651 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:31.862024 1157708 cri.go:89] found id: ""
	I0318 13:53:31.862057 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.862070 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:31.862077 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:31.862146 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:31.903941 1157708 cri.go:89] found id: ""
	I0318 13:53:31.903972 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.903982 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:31.903992 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:31.904006 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:31.957327 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:31.957366 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:31.973337 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:31.973380 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:32.053702 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:32.053730 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:32.053744 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:32.134859 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:32.134911 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:29.649648 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:31.651426 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:33.651936 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:30.208512 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:32.709020 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:31.808276 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:33.811370 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:36.314374 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:34.683335 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:34.700383 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:34.700490 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:34.744387 1157708 cri.go:89] found id: ""
	I0318 13:53:34.744420 1157708 logs.go:276] 0 containers: []
	W0318 13:53:34.744432 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:34.744441 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:34.744509 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:34.788122 1157708 cri.go:89] found id: ""
	I0318 13:53:34.788150 1157708 logs.go:276] 0 containers: []
	W0318 13:53:34.788160 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:34.788166 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:34.788221 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:34.834760 1157708 cri.go:89] found id: ""
	I0318 13:53:34.834795 1157708 logs.go:276] 0 containers: []
	W0318 13:53:34.834808 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:34.834817 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:34.834894 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:34.882028 1157708 cri.go:89] found id: ""
	I0318 13:53:34.882062 1157708 logs.go:276] 0 containers: []
	W0318 13:53:34.882073 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:34.882081 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:34.882150 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:34.933339 1157708 cri.go:89] found id: ""
	I0318 13:53:34.933364 1157708 logs.go:276] 0 containers: []
	W0318 13:53:34.933374 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:34.933384 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:34.933451 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:34.972362 1157708 cri.go:89] found id: ""
	I0318 13:53:34.972395 1157708 logs.go:276] 0 containers: []
	W0318 13:53:34.972407 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:34.972416 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:34.972486 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:35.008949 1157708 cri.go:89] found id: ""
	I0318 13:53:35.008986 1157708 logs.go:276] 0 containers: []
	W0318 13:53:35.008999 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:35.009007 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:35.009080 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:35.054698 1157708 cri.go:89] found id: ""
	I0318 13:53:35.054733 1157708 logs.go:276] 0 containers: []
	W0318 13:53:35.054742 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:35.054756 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:35.054770 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:35.109391 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:35.109450 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:35.126785 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:35.126818 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:35.214303 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:35.214329 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:35.214342 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:35.298705 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:35.298750 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:37.843701 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:37.859330 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:37.859415 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:37.903428 1157708 cri.go:89] found id: ""
	I0318 13:53:37.903466 1157708 logs.go:276] 0 containers: []
	W0318 13:53:37.903479 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:37.903497 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:37.903560 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:37.943687 1157708 cri.go:89] found id: ""
	I0318 13:53:37.943716 1157708 logs.go:276] 0 containers: []
	W0318 13:53:37.943727 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:37.943735 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:37.943804 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:37.986201 1157708 cri.go:89] found id: ""
	I0318 13:53:37.986233 1157708 logs.go:276] 0 containers: []
	W0318 13:53:37.986244 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:37.986252 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:37.986322 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:36.151976 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:38.152281 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:35.209205 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:37.709122 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:38.806794 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:40.807552 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:38.026776 1157708 cri.go:89] found id: ""
	I0318 13:53:38.026813 1157708 logs.go:276] 0 containers: []
	W0318 13:53:38.026825 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:38.026832 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:38.026907 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:38.073057 1157708 cri.go:89] found id: ""
	I0318 13:53:38.073088 1157708 logs.go:276] 0 containers: []
	W0318 13:53:38.073098 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:38.073105 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:38.073172 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:38.110576 1157708 cri.go:89] found id: ""
	I0318 13:53:38.110611 1157708 logs.go:276] 0 containers: []
	W0318 13:53:38.110624 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:38.110632 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:38.110702 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:38.154293 1157708 cri.go:89] found id: ""
	I0318 13:53:38.154319 1157708 logs.go:276] 0 containers: []
	W0318 13:53:38.154327 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:38.154338 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:38.154414 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:38.195407 1157708 cri.go:89] found id: ""
	I0318 13:53:38.195434 1157708 logs.go:276] 0 containers: []
	W0318 13:53:38.195444 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:38.195454 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:38.195469 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:38.254159 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:38.254210 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:38.269143 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:38.269175 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:38.349819 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:38.349845 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:38.349864 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:38.435121 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:38.435164 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:40.982438 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:40.998483 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:40.998559 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:41.037470 1157708 cri.go:89] found id: ""
	I0318 13:53:41.037497 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.037506 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:41.037512 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:41.037583 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:41.078428 1157708 cri.go:89] found id: ""
	I0318 13:53:41.078463 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.078473 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:41.078482 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:41.078548 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:41.121342 1157708 cri.go:89] found id: ""
	I0318 13:53:41.121371 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.121382 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:41.121391 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:41.121482 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:41.164124 1157708 cri.go:89] found id: ""
	I0318 13:53:41.164149 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.164159 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:41.164167 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:41.164229 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:41.210294 1157708 cri.go:89] found id: ""
	I0318 13:53:41.210321 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.210329 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:41.210336 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:41.210407 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:41.253934 1157708 cri.go:89] found id: ""
	I0318 13:53:41.253957 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.253967 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:41.253973 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:41.254039 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:41.298817 1157708 cri.go:89] found id: ""
	I0318 13:53:41.298849 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.298861 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:41.298870 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:41.298936 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:41.344109 1157708 cri.go:89] found id: ""
	I0318 13:53:41.344137 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.344146 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:41.344156 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:41.344170 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:41.401026 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:41.401061 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:41.416197 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:41.416229 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:41.495349 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:41.495375 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:41.495393 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:41.578201 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:41.578253 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:40.651687 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:43.152619 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:40.208445 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:42.208613 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:44.210573 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:42.808665 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:45.309099 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:44.126601 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:44.140971 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:44.141048 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:44.184758 1157708 cri.go:89] found id: ""
	I0318 13:53:44.184786 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.184794 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:44.184801 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:44.184851 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:44.230793 1157708 cri.go:89] found id: ""
	I0318 13:53:44.230824 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.230836 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:44.230842 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:44.230916 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:44.269561 1157708 cri.go:89] found id: ""
	I0318 13:53:44.269594 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.269606 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:44.269614 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:44.269680 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:44.310847 1157708 cri.go:89] found id: ""
	I0318 13:53:44.310878 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.310889 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:44.310898 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:44.310970 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:44.350827 1157708 cri.go:89] found id: ""
	I0318 13:53:44.350860 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.350878 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:44.350887 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:44.350956 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:44.389693 1157708 cri.go:89] found id: ""
	I0318 13:53:44.389721 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.389730 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:44.389735 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:44.389804 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:44.429254 1157708 cri.go:89] found id: ""
	I0318 13:53:44.429280 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.429289 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:44.429303 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:44.429354 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:44.468484 1157708 cri.go:89] found id: ""
	I0318 13:53:44.468513 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.468525 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:44.468538 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:44.468555 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:44.525012 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:44.525058 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:44.541638 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:44.541668 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:44.621779 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:44.621801 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:44.621814 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:44.706797 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:44.706884 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:47.253569 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:47.268808 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:47.268888 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:47.313191 1157708 cri.go:89] found id: ""
	I0318 13:53:47.313220 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.313232 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:47.313240 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:47.313307 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:47.357567 1157708 cri.go:89] found id: ""
	I0318 13:53:47.357600 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.357611 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:47.357619 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:47.357688 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:47.392300 1157708 cri.go:89] found id: ""
	I0318 13:53:47.392341 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.392352 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:47.392366 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:47.392437 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:47.432800 1157708 cri.go:89] found id: ""
	I0318 13:53:47.432830 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.432842 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:47.432857 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:47.432921 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:47.469563 1157708 cri.go:89] found id: ""
	I0318 13:53:47.469591 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.469599 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:47.469605 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:47.469668 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:47.508770 1157708 cri.go:89] found id: ""
	I0318 13:53:47.508799 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.508810 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:47.508820 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:47.508880 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:47.549876 1157708 cri.go:89] found id: ""
	I0318 13:53:47.549909 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.549921 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:47.549930 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:47.549997 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:47.591385 1157708 cri.go:89] found id: ""
	I0318 13:53:47.591413 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.591421 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:47.591431 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:47.591446 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:47.646284 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:47.646313 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:47.662609 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:47.662639 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:47.737371 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:47.737398 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:47.737415 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:47.817311 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:47.817342 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:45.652845 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:48.150199 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:46.707734 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:48.709977 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:47.807238 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:50.308767 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:50.363832 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:50.380029 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:50.380109 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:50.427452 1157708 cri.go:89] found id: ""
	I0318 13:53:50.427484 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.427496 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:50.427505 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:50.427579 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:50.466766 1157708 cri.go:89] found id: ""
	I0318 13:53:50.466793 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.466801 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:50.466808 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:50.466894 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:50.506768 1157708 cri.go:89] found id: ""
	I0318 13:53:50.506799 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.506811 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:50.506819 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:50.506882 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:50.545554 1157708 cri.go:89] found id: ""
	I0318 13:53:50.545592 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.545605 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:50.545613 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:50.545685 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:50.583949 1157708 cri.go:89] found id: ""
	I0318 13:53:50.583984 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.583995 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:50.584004 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:50.584083 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:50.624730 1157708 cri.go:89] found id: ""
	I0318 13:53:50.624763 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.624774 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:50.624783 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:50.624853 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:50.664300 1157708 cri.go:89] found id: ""
	I0318 13:53:50.664346 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.664358 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:50.664366 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:50.664420 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:50.702760 1157708 cri.go:89] found id: ""
	I0318 13:53:50.702793 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.702805 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:50.702817 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:50.702833 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:50.757188 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:50.757237 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:50.772151 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:50.772195 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:50.856872 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:50.856898 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:50.856917 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:50.937706 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:50.937749 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:50.654814 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:53.151970 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:50.710233 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:53.209443 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:52.309529 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:54.809399 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:53.481836 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:53.497792 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:53.497856 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:53.535376 1157708 cri.go:89] found id: ""
	I0318 13:53:53.535411 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.535420 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:53.535427 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:53.535486 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:53.575002 1157708 cri.go:89] found id: ""
	I0318 13:53:53.575030 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.575042 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:53.575050 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:53.575119 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:53.615880 1157708 cri.go:89] found id: ""
	I0318 13:53:53.615919 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.615931 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:53.615940 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:53.616007 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:53.681746 1157708 cri.go:89] found id: ""
	I0318 13:53:53.681786 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.681799 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:53.681810 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:53.681887 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:53.725219 1157708 cri.go:89] found id: ""
	I0318 13:53:53.725241 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.725250 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:53.725256 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:53.725317 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:53.766969 1157708 cri.go:89] found id: ""
	I0318 13:53:53.767006 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.767018 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:53.767026 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:53.767091 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:53.802103 1157708 cri.go:89] found id: ""
	I0318 13:53:53.802134 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.802145 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:53.802157 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:53.802210 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:53.843054 1157708 cri.go:89] found id: ""
	I0318 13:53:53.843085 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.843093 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:53.843103 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:53.843117 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:53.899794 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:53.899836 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:53.915559 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:53.915592 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:53.996410 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:53.996438 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:53.996456 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:54.085588 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:54.085628 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:56.632201 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:56.648183 1157708 kubeadm.go:591] duration metric: took 4m3.550073086s to restartPrimaryControlPlane
	W0318 13:53:56.648381 1157708 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 13:53:56.648422 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 13:53:55.152626 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:57.650951 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:55.209511 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:57.709324 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:59.710029 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:59.666187 1157708 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.017736279s)
	I0318 13:53:59.666270 1157708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:53:59.682887 1157708 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:53:59.694626 1157708 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:53:59.706577 1157708 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:53:59.706599 1157708 kubeadm.go:156] found existing configuration files:
	
	I0318 13:53:59.706648 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:53:59.718311 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:53:59.718371 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:53:59.729298 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:53:59.741351 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:53:59.741401 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:53:59.753652 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:53:59.765642 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:53:59.765695 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:53:59.778055 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:53:59.789994 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:53:59.790042 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:53:59.801292 1157708 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 13:53:59.879414 1157708 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 13:53:59.879516 1157708 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 13:54:00.046477 1157708 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 13:54:00.046660 1157708 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 13:54:00.046819 1157708 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 13:54:00.257070 1157708 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 13:54:00.259191 1157708 out.go:204]   - Generating certificates and keys ...
	I0318 13:54:00.259333 1157708 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 13:54:00.259434 1157708 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 13:54:00.259549 1157708 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 13:54:00.259658 1157708 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 13:54:00.259782 1157708 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 13:54:00.259857 1157708 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 13:54:00.259949 1157708 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 13:54:00.260033 1157708 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 13:54:00.260136 1157708 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 13:54:00.260244 1157708 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 13:54:00.260299 1157708 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 13:54:00.260394 1157708 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 13:54:00.423400 1157708 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 13:54:00.543983 1157708 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 13:54:00.796108 1157708 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 13:54:00.901121 1157708 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 13:54:00.918891 1157708 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 13:54:00.920502 1157708 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 13:54:00.920642 1157708 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 13:54:01.094176 1157708 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 13:53:57.306878 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:59.308670 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:01.096397 1157708 out.go:204]   - Booting up control plane ...
	I0318 13:54:01.096539 1157708 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 13:54:01.107816 1157708 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 13:54:01.108753 1157708 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 13:54:01.109641 1157708 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 13:54:01.111913 1157708 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 13:54:00.150985 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:02.151139 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:02.208577 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:04.209527 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:04.701940 1157416 pod_ready.go:81] duration metric: took 4m0.000915275s for pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace to be "Ready" ...
	E0318 13:54:04.701995 1157416 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace to be "Ready" (will not retry!)
	I0318 13:54:04.702022 1157416 pod_ready.go:38] duration metric: took 4m12.048388069s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:54:04.702063 1157416 kubeadm.go:591] duration metric: took 4m22.220919415s to restartPrimaryControlPlane
	W0318 13:54:04.702133 1157416 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 13:54:04.702168 1157416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 13:54:01.807445 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:04.308435 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:04.151252 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:06.152296 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:08.162574 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:06.809148 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:08.811335 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:11.306999 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:10.650696 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:12.651741 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:13.308835 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:15.807754 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:15.150875 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:17.653698 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:18.308137 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:20.308720 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:20.152545 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:22.650685 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:22.807655 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:24.807765 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:25.150664 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:27.650092 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:26.808311 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:29.311683 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:31.301320 1157887 pod_ready.go:81] duration metric: took 4m0.001048401s for pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace to be "Ready" ...
	E0318 13:54:31.301351 1157887 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace to be "Ready" (will not retry!)
	I0318 13:54:31.301372 1157887 pod_ready.go:38] duration metric: took 4m12.063560637s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:54:31.301397 1157887 kubeadm.go:591] duration metric: took 4m19.202321881s to restartPrimaryControlPlane
	W0318 13:54:31.301478 1157887 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 13:54:31.301505 1157887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 13:54:29.651334 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:32.152059 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:34.651230 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:37.151130 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:37.018723 1157416 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.31652367s)
	I0318 13:54:37.018822 1157416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:54:37.036348 1157416 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:54:37.047932 1157416 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:54:37.058846 1157416 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:54:37.058875 1157416 kubeadm.go:156] found existing configuration files:
	
	I0318 13:54:37.058920 1157416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:54:37.069333 1157416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:54:37.069396 1157416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:54:37.080053 1157416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:54:37.090110 1157416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:54:37.090170 1157416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:54:37.101032 1157416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:54:37.111052 1157416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:54:37.111124 1157416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:54:37.121867 1157416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:54:37.132057 1157416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:54:37.132104 1157416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:54:37.143057 1157416 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 13:54:37.368813 1157416 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 13:54:41.111826 1157708 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 13:54:41.111977 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:54:41.112236 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:54:39.151250 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:41.652026 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:43.652929 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:46.082340 1157416 kubeadm.go:309] [init] Using Kubernetes version: v1.29.0-rc.2
	I0318 13:54:46.082410 1157416 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 13:54:46.082482 1157416 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 13:54:46.082561 1157416 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 13:54:46.082639 1157416 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 13:54:46.082692 1157416 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 13:54:46.084374 1157416 out.go:204]   - Generating certificates and keys ...
	I0318 13:54:46.084495 1157416 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 13:54:46.084584 1157416 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 13:54:46.084681 1157416 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 13:54:46.084767 1157416 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 13:54:46.084844 1157416 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 13:54:46.084933 1157416 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 13:54:46.085039 1157416 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 13:54:46.085131 1157416 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 13:54:46.085255 1157416 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 13:54:46.085344 1157416 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 13:54:46.085415 1157416 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 13:54:46.085491 1157416 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 13:54:46.085569 1157416 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 13:54:46.085637 1157416 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0318 13:54:46.085704 1157416 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 13:54:46.085791 1157416 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 13:54:46.085894 1157416 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 13:54:46.086010 1157416 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 13:54:46.086104 1157416 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 13:54:46.087481 1157416 out.go:204]   - Booting up control plane ...
	I0318 13:54:46.087576 1157416 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 13:54:46.087642 1157416 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 13:54:46.087698 1157416 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 13:54:46.087782 1157416 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 13:54:46.087865 1157416 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 13:54:46.087917 1157416 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 13:54:46.088051 1157416 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 13:54:46.088146 1157416 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003020 seconds
	I0318 13:54:46.088306 1157416 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 13:54:46.088501 1157416 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 13:54:46.088585 1157416 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 13:54:46.088770 1157416 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-537236 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 13:54:46.088826 1157416 kubeadm.go:309] [bootstrap-token] Using token: fk6yfh.vd0dmh72kd97vm2h
	I0318 13:54:46.091265 1157416 out.go:204]   - Configuring RBAC rules ...
	I0318 13:54:46.091375 1157416 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 13:54:46.091449 1157416 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 13:54:46.091656 1157416 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 13:54:46.091839 1157416 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 13:54:46.092014 1157416 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 13:54:46.092136 1157416 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 13:54:46.092289 1157416 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 13:54:46.092370 1157416 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 13:54:46.092436 1157416 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 13:54:46.092445 1157416 kubeadm.go:309] 
	I0318 13:54:46.092513 1157416 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 13:54:46.092522 1157416 kubeadm.go:309] 
	I0318 13:54:46.092588 1157416 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 13:54:46.092594 1157416 kubeadm.go:309] 
	I0318 13:54:46.092614 1157416 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 13:54:46.092704 1157416 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 13:54:46.092749 1157416 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 13:54:46.092755 1157416 kubeadm.go:309] 
	I0318 13:54:46.092805 1157416 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 13:54:46.092818 1157416 kubeadm.go:309] 
	I0318 13:54:46.092892 1157416 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 13:54:46.092906 1157416 kubeadm.go:309] 
	I0318 13:54:46.092982 1157416 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 13:54:46.093100 1157416 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 13:54:46.093212 1157416 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 13:54:46.093225 1157416 kubeadm.go:309] 
	I0318 13:54:46.093335 1157416 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 13:54:46.093448 1157416 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 13:54:46.093457 1157416 kubeadm.go:309] 
	I0318 13:54:46.093539 1157416 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token fk6yfh.vd0dmh72kd97vm2h \
	I0318 13:54:46.093684 1157416 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a1344f0f3711f58fc1cd7b9626e9cee7b8515c38b8838e5f1a0d7979c7202ddf \
	I0318 13:54:46.093717 1157416 kubeadm.go:309] 	--control-plane 
	I0318 13:54:46.093723 1157416 kubeadm.go:309] 
	I0318 13:54:46.093848 1157416 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 13:54:46.093860 1157416 kubeadm.go:309] 
	I0318 13:54:46.093946 1157416 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token fk6yfh.vd0dmh72kd97vm2h \
	I0318 13:54:46.094071 1157416 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a1344f0f3711f58fc1cd7b9626e9cee7b8515c38b8838e5f1a0d7979c7202ddf 
	I0318 13:54:46.094105 1157416 cni.go:84] Creating CNI manager for ""
	I0318 13:54:46.094119 1157416 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:54:46.095717 1157416 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 13:54:46.112502 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:54:46.112797 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:54:46.152713 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:48.651676 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:46.096953 1157416 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 13:54:46.127007 1157416 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 13:54:46.178588 1157416 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 13:54:46.178768 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:46.178785 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-537236 minikube.k8s.io/updated_at=2024_03_18T13_54_46_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a minikube.k8s.io/name=no-preload-537236 minikube.k8s.io/primary=true
	I0318 13:54:46.231974 1157416 ops.go:34] apiserver oom_adj: -16
	I0318 13:54:46.582048 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:47.082295 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:47.582447 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:48.082146 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:48.583155 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:49.082463 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:49.583104 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:51.153753 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:53.654740 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:50.082163 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:50.582159 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:51.082921 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:51.582616 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:52.082686 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:52.582520 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:53.082920 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:53.582281 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:54.082711 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:54.582110 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:56.112956 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:54:56.113210 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:54:55.082805 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:55.583034 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:56.082777 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:56.582491 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:57.082739 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:57.582854 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:58.082715 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:58.189802 1157416 kubeadm.go:1107] duration metric: took 12.011111335s to wait for elevateKubeSystemPrivileges
	W0318 13:54:58.189865 1157416 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 13:54:58.189878 1157416 kubeadm.go:393] duration metric: took 5m15.77131157s to StartCluster
	I0318 13:54:58.189991 1157416 settings.go:142] acquiring lock: {Name:mk2d6b94ee5fa5f1dbbb15ba1d5560c3c0f78110 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:54:58.190130 1157416 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 13:54:58.191965 1157416 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/kubeconfig: {Name:mk9c139f2702214315ee08dd7c5d02f739047458 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:54:58.192315 1157416 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 13:54:58.194158 1157416 out.go:177] * Verifying Kubernetes components...
	I0318 13:54:58.192460 1157416 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 13:54:58.192549 1157416 config.go:182] Loaded profile config "no-preload-537236": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 13:54:58.194270 1157416 addons.go:69] Setting storage-provisioner=true in profile "no-preload-537236"
	I0318 13:54:58.195604 1157416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:54:58.195628 1157416 addons.go:234] Setting addon storage-provisioner=true in "no-preload-537236"
	W0318 13:54:58.195646 1157416 addons.go:243] addon storage-provisioner should already be in state true
	I0318 13:54:58.194275 1157416 addons.go:69] Setting default-storageclass=true in profile "no-preload-537236"
	I0318 13:54:58.195741 1157416 host.go:66] Checking if "no-preload-537236" exists ...
	I0318 13:54:58.195748 1157416 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-537236"
	I0318 13:54:58.194278 1157416 addons.go:69] Setting metrics-server=true in profile "no-preload-537236"
	I0318 13:54:58.195816 1157416 addons.go:234] Setting addon metrics-server=true in "no-preload-537236"
	W0318 13:54:58.195835 1157416 addons.go:243] addon metrics-server should already be in state true
	I0318 13:54:58.195864 1157416 host.go:66] Checking if "no-preload-537236" exists ...
	I0318 13:54:58.196133 1157416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:54:58.196177 1157416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:54:58.196187 1157416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:54:58.196224 1157416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:54:58.196236 1157416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:54:58.196256 1157416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:54:58.218212 1157416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36677
	I0318 13:54:58.218703 1157416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34827
	I0318 13:54:58.218934 1157416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35455
	I0318 13:54:58.219717 1157416 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:54:58.219858 1157416 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:54:58.220143 1157416 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:54:58.220417 1157416 main.go:141] libmachine: Using API Version  1
	I0318 13:54:58.220443 1157416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:54:58.220478 1157416 main.go:141] libmachine: Using API Version  1
	I0318 13:54:58.220497 1157416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:54:58.220628 1157416 main.go:141] libmachine: Using API Version  1
	I0318 13:54:58.220650 1157416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:54:58.220882 1157416 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:54:58.220950 1157416 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:54:58.220973 1157416 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:54:58.221491 1157416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:54:58.221527 1157416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:54:58.221736 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetState
	I0318 13:54:58.222116 1157416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:54:58.222138 1157416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:54:58.226247 1157416 addons.go:234] Setting addon default-storageclass=true in "no-preload-537236"
	W0318 13:54:58.226271 1157416 addons.go:243] addon default-storageclass should already be in state true
	I0318 13:54:58.226303 1157416 host.go:66] Checking if "no-preload-537236" exists ...
	I0318 13:54:58.226691 1157416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:54:58.226719 1157416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:54:58.238772 1157416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40275
	I0318 13:54:58.239288 1157416 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:54:58.239925 1157416 main.go:141] libmachine: Using API Version  1
	I0318 13:54:58.239954 1157416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:54:58.240375 1157416 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:54:58.240581 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetState
	I0318 13:54:58.241297 1157416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44327
	I0318 13:54:58.241774 1157416 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:54:58.242300 1157416 main.go:141] libmachine: Using API Version  1
	I0318 13:54:58.242321 1157416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:54:58.242787 1157416 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:54:58.243001 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetState
	I0318 13:54:58.243033 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:54:58.245371 1157416 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 13:54:58.245038 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:54:58.246964 1157416 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 13:54:58.246981 1157416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 13:54:58.246429 1157416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34901
	I0318 13:54:58.247010 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:54:58.248738 1157416 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:54:54.143902 1157263 pod_ready.go:81] duration metric: took 4m0.000627482s for pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace to be "Ready" ...
	E0318 13:54:54.143947 1157263 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace to be "Ready" (will not retry!)
	I0318 13:54:54.143967 1157263 pod_ready.go:38] duration metric: took 4m9.565422592s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:54:54.143994 1157263 kubeadm.go:591] duration metric: took 4m17.754456341s to restartPrimaryControlPlane
	W0318 13:54:54.144061 1157263 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 13:54:54.144092 1157263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 13:54:58.247424 1157416 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:54:58.250418 1157416 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 13:54:58.250441 1157416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 13:54:58.250459 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:54:58.250666 1157416 main.go:141] libmachine: Using API Version  1
	I0318 13:54:58.250683 1157416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:54:58.250733 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:54:58.251012 1157416 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:54:58.251354 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:54:58.251384 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:54:58.251730 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:54:58.252053 1157416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:54:58.252082 1157416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:54:58.252627 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:54:58.252823 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:54:58.252974 1157416 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa Username:docker}
	I0318 13:54:58.253647 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:54:58.254073 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:54:58.254102 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:54:58.254393 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:54:58.254599 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:54:58.254720 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:54:58.254858 1157416 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa Username:docker}
	I0318 13:54:58.275785 1157416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35695
	I0318 13:54:58.276467 1157416 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:54:58.277007 1157416 main.go:141] libmachine: Using API Version  1
	I0318 13:54:58.277037 1157416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:54:58.277396 1157416 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:54:58.277594 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetState
	I0318 13:54:58.279419 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:54:58.279699 1157416 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 13:54:58.279719 1157416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 13:54:58.279740 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:54:58.282813 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:54:58.283168 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:54:58.283198 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:54:58.283319 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:54:58.283505 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:54:58.283643 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:54:58.283826 1157416 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa Username:docker}
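[Editor's note] The three "new ssh client" lines above show the test harness opening SSH connections to the guest VM (192.168.39.7:22, user "docker", machine key) before copying addon manifests. A minimal, hypothetical sketch of that connection pattern using golang.org/x/crypto/ssh; the IP, key path, and user come from the log, but the helper itself is illustrative and is not minikube's sshutil code.

    package sketch

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // dialGuest opens an SSH client to the minikube guest VM, roughly what the
    // "new ssh client" log lines correspond to. Illustrative only.
    func dialGuest(ip, keyPath, user string) (*ssh.Client, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return nil, err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return nil, err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; do not use in production
    	}
    	return ssh.Dial("tcp", fmt.Sprintf("%s:22", ip), cfg)
    }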
	I0318 13:54:58.433881 1157416 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:54:58.466338 1157416 node_ready.go:35] waiting up to 6m0s for node "no-preload-537236" to be "Ready" ...
	I0318 13:54:58.485186 1157416 node_ready.go:49] node "no-preload-537236" has status "Ready":"True"
	I0318 13:54:58.485217 1157416 node_ready.go:38] duration metric: took 18.833477ms for node "no-preload-537236" to be "Ready" ...
	I0318 13:54:58.485230 1157416 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:54:58.527030 1157416 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:54:58.545133 1157416 pod_ready.go:92] pod "etcd-no-preload-537236" in "kube-system" namespace has status "Ready":"True"
	I0318 13:54:58.545175 1157416 pod_ready.go:81] duration metric: took 18.11215ms for pod "etcd-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:54:58.545191 1157416 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:54:58.560108 1157416 pod_ready.go:92] pod "kube-apiserver-no-preload-537236" in "kube-system" namespace has status "Ready":"True"
	I0318 13:54:58.560144 1157416 pod_ready.go:81] duration metric: took 14.943161ms for pod "kube-apiserver-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:54:58.560159 1157416 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:54:58.562894 1157416 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 13:54:58.562924 1157416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 13:54:58.572477 1157416 pod_ready.go:92] pod "kube-controller-manager-no-preload-537236" in "kube-system" namespace has status "Ready":"True"
	I0318 13:54:58.572510 1157416 pod_ready.go:81] duration metric: took 12.342242ms for pod "kube-controller-manager-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:54:58.572523 1157416 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6c4c5" in "kube-system" namespace to be "Ready" ...
	I0318 13:54:58.594618 1157416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 13:54:58.597140 1157416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 13:54:58.644132 1157416 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 13:54:58.644166 1157416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 13:54:58.734467 1157416 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 13:54:58.734499 1157416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 13:54:58.760623 1157416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 13:54:59.005259 1157416 main.go:141] libmachine: Making call to close driver server
	I0318 13:54:59.005305 1157416 main.go:141] libmachine: (no-preload-537236) Calling .Close
	I0318 13:54:59.005668 1157416 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:54:59.005692 1157416 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:54:59.005704 1157416 main.go:141] libmachine: Making call to close driver server
	I0318 13:54:59.005713 1157416 main.go:141] libmachine: (no-preload-537236) Calling .Close
	I0318 13:54:59.005981 1157416 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:54:59.005996 1157416 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:54:59.006028 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Closing plugin on server side
	I0318 13:54:59.020654 1157416 main.go:141] libmachine: Making call to close driver server
	I0318 13:54:59.020682 1157416 main.go:141] libmachine: (no-preload-537236) Calling .Close
	I0318 13:54:59.022812 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Closing plugin on server side
	I0318 13:54:59.022814 1157416 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:54:59.022850 1157416 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:54:59.979647 1157416 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.382455448s)
	I0318 13:54:59.979723 1157416 main.go:141] libmachine: Making call to close driver server
	I0318 13:54:59.979743 1157416 main.go:141] libmachine: (no-preload-537236) Calling .Close
	I0318 13:54:59.980124 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Closing plugin on server side
	I0318 13:54:59.980223 1157416 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:54:59.980258 1157416 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:54:59.980281 1157416 main.go:141] libmachine: Making call to close driver server
	I0318 13:54:59.980354 1157416 main.go:141] libmachine: (no-preload-537236) Calling .Close
	I0318 13:54:59.980675 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Closing plugin on server side
	I0318 13:54:59.980756 1157416 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:54:59.982424 1157416 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:00.270401 1157416 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.509719085s)
	I0318 13:55:00.270464 1157416 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:00.270481 1157416 main.go:141] libmachine: (no-preload-537236) Calling .Close
	I0318 13:55:00.272779 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Closing plugin on server side
	I0318 13:55:00.272794 1157416 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:00.272817 1157416 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:00.272828 1157416 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:00.272837 1157416 main.go:141] libmachine: (no-preload-537236) Calling .Close
	I0318 13:55:00.274705 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Closing plugin on server side
	I0318 13:55:00.274734 1157416 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:00.274759 1157416 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:00.274789 1157416 addons.go:470] Verifying addon metrics-server=true in "no-preload-537236"
	I0318 13:55:00.276931 1157416 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0318 13:55:00.278586 1157416 addons.go:505] duration metric: took 2.086117916s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
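[Editor's note] The addon enablement above boils down to running the bundled kubectl with KUBECONFIG pointed at /var/lib/minikube/kubeconfig and applying each manifest that was scp'd into /etc/kubernetes/addons. A hypothetical local equivalent of that invocation pattern using os/exec; in the log the command runs over SSH with sudo, and this sketch is not minikube's actual runner.

    package sketch

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // applyAddon invokes kubectl apply -f for each manifest with an explicit
    // kubeconfig, mirroring the "kubectl apply -f /etc/kubernetes/addons/..." lines.
    func applyAddon(kubectl, kubeconfig string, manifests ...string) error {
    	args := []string{"apply"}
    	for _, m := range manifests {
    		args = append(args, "-f", m)
    	}
    	cmd := exec.Command(kubectl, args...)
    	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
    	}
    	return nil
    }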
	I0318 13:55:00.607578 1157416 pod_ready.go:92] pod "kube-proxy-6c4c5" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:00.607607 1157416 pod_ready.go:81] duration metric: took 2.035076209s for pod "kube-proxy-6c4c5" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:00.607620 1157416 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:00.626505 1157416 pod_ready.go:92] pod "kube-scheduler-no-preload-537236" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:00.626531 1157416 pod_ready.go:81] duration metric: took 18.904572ms for pod "kube-scheduler-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:00.626540 1157416 pod_ready.go:38] duration metric: took 2.141296876s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:55:00.626556 1157416 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:55:00.626612 1157416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:55:00.677379 1157416 api_server.go:72] duration metric: took 2.484994048s to wait for apiserver process to appear ...
	I0318 13:55:00.677406 1157416 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:55:00.677426 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:55:00.694161 1157416 api_server.go:279] https://192.168.39.7:8443/healthz returned 200:
	ok
	I0318 13:55:00.696445 1157416 api_server.go:141] control plane version: v1.29.0-rc.2
	I0318 13:55:00.696479 1157416 api_server.go:131] duration metric: took 19.065082ms to wait for apiserver health ...
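[Editor's note] The healthz wait above is an HTTPS GET against https://192.168.39.7:8443/healthz that is considered successful once the apiserver returns 200 with body "ok". A minimal sketch of such a probe; TLS verification is skipped here purely for brevity, whereas real code should verify against the cluster CA.

    package sketch

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // probeHealthz polls the apiserver /healthz endpoint until it returns 200/"ok"
    // or the timeout expires.
    func probeHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver never became healthy at %s", url)
    }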
	I0318 13:55:00.696492 1157416 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 13:55:00.707383 1157416 system_pods.go:59] 9 kube-system pods found
	I0318 13:55:00.707417 1157416 system_pods.go:61] "coredns-76f75df574-bhh4k" [6d6f9b9a-2f7e-46bc-9224-57dc077e444d] Running
	I0318 13:55:00.707421 1157416 system_pods.go:61] "coredns-76f75df574-grqdt" [f4ce5620-c97b-4ecd-baba-c5fc840b8127] Running
	I0318 13:55:00.707425 1157416 system_pods.go:61] "etcd-no-preload-537236" [ed8a1ea0-0ec7-4604-b9c9-3738a4569e02] Running
	I0318 13:55:00.707429 1157416 system_pods.go:61] "kube-apiserver-no-preload-537236" [5718ec63-58e7-463b-812b-a806e9fbbdd8] Running
	I0318 13:55:00.707432 1157416 system_pods.go:61] "kube-controller-manager-no-preload-537236" [4ff64d2e-9e89-44d6-9e8f-fa1440fc416a] Running
	I0318 13:55:00.707435 1157416 system_pods.go:61] "kube-proxy-6c4c5" [2dd6fcfc-7510-418d-baab-a0ec364391c1] Running
	I0318 13:55:00.707438 1157416 system_pods.go:61] "kube-scheduler-no-preload-537236" [b8c3f8b7-fc27-4647-880a-f82457de3a27] Running
	I0318 13:55:00.707445 1157416 system_pods.go:61] "metrics-server-57f55c9bc5-tkq6h" [14e262de-fd94-4888-96ab-75823109c8c2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:55:00.707450 1157416 system_pods.go:61] "storage-provisioner" [f02049f6-a08f-45ac-b285-cbdbb260ab59] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 13:55:00.707459 1157416 system_pods.go:74] duration metric: took 10.96036ms to wait for pod list to return data ...
	I0318 13:55:00.707467 1157416 default_sa.go:34] waiting for default service account to be created ...
	I0318 13:55:00.870267 1157416 default_sa.go:45] found service account: "default"
	I0318 13:55:00.870299 1157416 default_sa.go:55] duration metric: took 162.825175ms for default service account to be created ...
	I0318 13:55:00.870310 1157416 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 13:55:01.073950 1157416 system_pods.go:86] 9 kube-system pods found
	I0318 13:55:01.073985 1157416 system_pods.go:89] "coredns-76f75df574-bhh4k" [6d6f9b9a-2f7e-46bc-9224-57dc077e444d] Running
	I0318 13:55:01.073992 1157416 system_pods.go:89] "coredns-76f75df574-grqdt" [f4ce5620-c97b-4ecd-baba-c5fc840b8127] Running
	I0318 13:55:01.073998 1157416 system_pods.go:89] "etcd-no-preload-537236" [ed8a1ea0-0ec7-4604-b9c9-3738a4569e02] Running
	I0318 13:55:01.074004 1157416 system_pods.go:89] "kube-apiserver-no-preload-537236" [5718ec63-58e7-463b-812b-a806e9fbbdd8] Running
	I0318 13:55:01.074010 1157416 system_pods.go:89] "kube-controller-manager-no-preload-537236" [4ff64d2e-9e89-44d6-9e8f-fa1440fc416a] Running
	I0318 13:55:01.074017 1157416 system_pods.go:89] "kube-proxy-6c4c5" [2dd6fcfc-7510-418d-baab-a0ec364391c1] Running
	I0318 13:55:01.074035 1157416 system_pods.go:89] "kube-scheduler-no-preload-537236" [b8c3f8b7-fc27-4647-880a-f82457de3a27] Running
	I0318 13:55:01.074055 1157416 system_pods.go:89] "metrics-server-57f55c9bc5-tkq6h" [14e262de-fd94-4888-96ab-75823109c8c2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:55:01.074069 1157416 system_pods.go:89] "storage-provisioner" [f02049f6-a08f-45ac-b285-cbdbb260ab59] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 13:55:01.074085 1157416 system_pods.go:126] duration metric: took 203.766894ms to wait for k8s-apps to be running ...
	I0318 13:55:01.074100 1157416 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 13:55:01.074152 1157416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:55:01.091165 1157416 system_svc.go:56] duration metric: took 17.056217ms WaitForService to wait for kubelet
	I0318 13:55:01.091195 1157416 kubeadm.go:576] duration metric: took 2.898817514s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:55:01.091224 1157416 node_conditions.go:102] verifying NodePressure condition ...
	I0318 13:55:01.270664 1157416 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:55:01.270724 1157416 node_conditions.go:123] node cpu capacity is 2
	I0318 13:55:01.270737 1157416 node_conditions.go:105] duration metric: took 179.506857ms to run NodePressure ...
	I0318 13:55:01.270750 1157416 start.go:240] waiting for startup goroutines ...
	I0318 13:55:01.270758 1157416 start.go:245] waiting for cluster config update ...
	I0318 13:55:01.270769 1157416 start.go:254] writing updated cluster config ...
	I0318 13:55:01.271069 1157416 ssh_runner.go:195] Run: rm -f paused
	I0318 13:55:01.325353 1157416 start.go:600] kubectl: 1.29.3, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0318 13:55:01.327367 1157416 out.go:177] * Done! kubectl is now configured to use "no-preload-537236" cluster and "default" namespace by default
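[Editor's note] The line just before "Done!" compares the host kubectl (1.29.3) against the cluster version (1.29.0-rc.2) and reports "minor skew: 0". A rough sketch of that comparison; real version handling should use a proper semver library, and this hand-rolled split is only for illustration.

    package sketch

    import (
    	"fmt"
    	"strconv"
    	"strings"
    )

    // minorSkew returns the absolute difference between the minor components of
    // two "major.minor.patch[-pre]" version strings, e.g. "1.29.3" vs "1.29.0-rc.2".
    func minorSkew(client, cluster string) (int, error) {
    	minor := func(v string) (int, error) {
    		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
    		if len(parts) < 2 {
    			return 0, fmt.Errorf("unexpected version %q", v)
    		}
    		return strconv.Atoi(parts[1])
    	}
    	c, err := minor(client)
    	if err != nil {
    		return 0, err
    	}
    	s, err := minor(cluster)
    	if err != nil {
    		return 0, err
    	}
    	if c > s {
    		return c - s, nil
    	}
    	return s - c, nil
    }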
	I0318 13:55:03.715412 1157887 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.413874479s)
	I0318 13:55:03.715519 1157887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:55:03.732767 1157887 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:55:03.743375 1157887 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:55:03.753393 1157887 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:55:03.753414 1157887 kubeadm.go:156] found existing configuration files:
	
	I0318 13:55:03.753457 1157887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0318 13:55:03.763226 1157887 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:55:03.763289 1157887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:55:03.774001 1157887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0318 13:55:03.783943 1157887 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:55:03.783991 1157887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:55:03.794580 1157887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0318 13:55:03.803881 1157887 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:55:03.803921 1157887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:55:03.813709 1157887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0318 13:55:03.823096 1157887 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:55:03.823138 1157887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
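[Editor's note] The four grep/rm pairs above apply one rule per kubeconfig under /etc/kubernetes: if the file exists but does not reference the expected endpoint (https://control-plane.minikube.internal:8444 in this run), remove it so kubeadm regenerates it. A hypothetical standalone version of that check; the endpoint and file names come from the log, but the function is a sketch rather than minikube's kubeadm.go.

    package sketch

    import (
    	"os"
    	"strings"
    )

    // pruneStaleKubeconfigs removes any of the given kubeconfig files that exist
    // but do not mention the expected control-plane endpoint, e.g.
    // /etc/kubernetes/{admin,kubelet,controller-manager,scheduler}.conf.
    func pruneStaleKubeconfigs(endpoint string, files []string) error {
    	for _, f := range files {
    		data, err := os.ReadFile(f)
    		if os.IsNotExist(err) {
    			continue // nothing to clean; kubeadm will create it
    		}
    		if err != nil {
    			return err
    		}
    		if !strings.Contains(string(data), endpoint) {
    			if err := os.Remove(f); err != nil {
    				return err
    			}
    		}
    	}
    	return nil
    }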
	I0318 13:55:03.832790 1157887 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 13:55:03.891459 1157887 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 13:55:03.891672 1157887 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 13:55:04.056923 1157887 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 13:55:04.057055 1157887 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 13:55:04.057197 1157887 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 13:55:04.312932 1157887 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 13:55:04.314955 1157887 out.go:204]   - Generating certificates and keys ...
	I0318 13:55:04.315063 1157887 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 13:55:04.315156 1157887 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 13:55:04.315286 1157887 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 13:55:04.315388 1157887 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 13:55:04.315490 1157887 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 13:55:04.315568 1157887 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 13:55:04.315668 1157887 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 13:55:04.315743 1157887 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 13:55:04.315844 1157887 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 13:55:04.315969 1157887 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 13:55:04.316034 1157887 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 13:55:04.316108 1157887 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 13:55:04.643155 1157887 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 13:55:04.927731 1157887 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 13:55:05.058875 1157887 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 13:55:05.221520 1157887 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 13:55:05.221985 1157887 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 13:55:05.224297 1157887 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 13:55:05.226200 1157887 out.go:204]   - Booting up control plane ...
	I0318 13:55:05.226326 1157887 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 13:55:05.226425 1157887 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 13:55:05.226520 1157887 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 13:55:05.244878 1157887 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 13:55:05.245461 1157887 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 13:55:05.245531 1157887 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 13:55:05.388215 1157887 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 13:55:11.393083 1157887 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.004356 seconds
	I0318 13:55:11.393511 1157887 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 13:55:11.412586 1157887 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 13:55:11.939563 1157887 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 13:55:11.939844 1157887 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-569210 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 13:55:12.457349 1157887 kubeadm.go:309] [bootstrap-token] Using token: z44dyw.tsw47dmn862zavdi
	I0318 13:55:12.458855 1157887 out.go:204]   - Configuring RBAC rules ...
	I0318 13:55:12.459037 1157887 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 13:55:12.466850 1157887 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 13:55:12.482822 1157887 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 13:55:12.488920 1157887 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 13:55:12.496947 1157887 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 13:55:12.507954 1157887 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 13:55:12.535337 1157887 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 13:55:12.763814 1157887 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 13:55:12.877248 1157887 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 13:55:12.878047 1157887 kubeadm.go:309] 
	I0318 13:55:12.878159 1157887 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 13:55:12.878183 1157887 kubeadm.go:309] 
	I0318 13:55:12.878291 1157887 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 13:55:12.878301 1157887 kubeadm.go:309] 
	I0318 13:55:12.878334 1157887 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 13:55:12.878432 1157887 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 13:55:12.878519 1157887 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 13:55:12.878531 1157887 kubeadm.go:309] 
	I0318 13:55:12.878603 1157887 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 13:55:12.878615 1157887 kubeadm.go:309] 
	I0318 13:55:12.878690 1157887 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 13:55:12.878703 1157887 kubeadm.go:309] 
	I0318 13:55:12.878762 1157887 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 13:55:12.878858 1157887 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 13:55:12.878974 1157887 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 13:55:12.878985 1157887 kubeadm.go:309] 
	I0318 13:55:12.879087 1157887 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 13:55:12.879164 1157887 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 13:55:12.879171 1157887 kubeadm.go:309] 
	I0318 13:55:12.879275 1157887 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token z44dyw.tsw47dmn862zavdi \
	I0318 13:55:12.879410 1157887 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a1344f0f3711f58fc1cd7b9626e9cee7b8515c38b8838e5f1a0d7979c7202ddf \
	I0318 13:55:12.879464 1157887 kubeadm.go:309] 	--control-plane 
	I0318 13:55:12.879484 1157887 kubeadm.go:309] 
	I0318 13:55:12.879576 1157887 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 13:55:12.879586 1157887 kubeadm.go:309] 
	I0318 13:55:12.879719 1157887 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token z44dyw.tsw47dmn862zavdi \
	I0318 13:55:12.879871 1157887 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a1344f0f3711f58fc1cd7b9626e9cee7b8515c38b8838e5f1a0d7979c7202ddf 
	I0318 13:55:12.883383 1157887 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 13:55:12.883432 1157887 cni.go:84] Creating CNI manager for ""
	I0318 13:55:12.883447 1157887 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:55:12.885248 1157887 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 13:55:12.886708 1157887 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 13:55:12.929444 1157887 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 13:55:13.043416 1157887 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 13:55:13.043541 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:13.043567 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-569210 minikube.k8s.io/updated_at=2024_03_18T13_55_13_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a minikube.k8s.io/name=default-k8s-diff-port-569210 minikube.k8s.io/primary=true
	I0318 13:55:13.064927 1157887 ops.go:34] apiserver oom_adj: -16
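[Editor's note] The "apiserver oom_adj: -16" line comes from the preceding command, which reads /proc/$(pgrep kube-apiserver)/oom_adj inside the guest. A small sketch of that read with the pgrep step replaced by a pid argument; purely illustrative.

    package sketch

    import (
    	"fmt"
    	"os"
    	"strconv"
    	"strings"
    )

    // oomAdj reads the oom_adj value for the given pid, mirroring the
    // "cat /proc/<pid>/oom_adj" command in the log.
    func oomAdj(pid int) (int, error) {
    	data, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
    	if err != nil {
    		return 0, err
    	}
    	return strconv.Atoi(strings.TrimSpace(string(data)))
    }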
	I0318 13:55:13.286093 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:13.786780 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:14.286728 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:14.786442 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:15.287103 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:15.786443 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:16.287138 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:16.113672 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:55:16.113963 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:55:16.787069 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:17.286490 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:17.786317 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:18.286840 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:18.786872 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:19.286911 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:19.786554 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:20.286216 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:20.786282 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:21.286590 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:21.787103 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:22.286966 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:22.786928 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:23.286275 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:23.786464 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:24.286791 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:24.787028 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:24.938400 1157887 kubeadm.go:1107] duration metric: took 11.894943444s to wait for elevateKubeSystemPrivileges
	W0318 13:55:24.938440 1157887 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 13:55:24.938448 1157887 kubeadm.go:393] duration metric: took 5m12.933246555s to StartCluster
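[Editor's note] The burst of repeated "kubectl get sa default" runs above (roughly one every 500 ms from 13:55:13 to 13:55:24) is a simple poll: keep asking until the default service account exists, after which elevateKubeSystemPrivileges completes. A generic sketch of that retry pattern; the check function stands in for the kubectl call and this is not minikube's actual loop.

    package sketch

    import (
    	"fmt"
    	"time"
    )

    // pollUntil runs check every interval until it succeeds or timeout elapses,
    // mirroring the repeated "kubectl get sa default" attempts in the log.
    func pollUntil(interval, timeout time.Duration, check func() error) error {
    	deadline := time.Now().Add(timeout)
    	var lastErr error
    	for time.Now().Before(deadline) {
    		if lastErr = check(); lastErr == nil {
    			return nil
    		}
    		time.Sleep(interval)
    	}
    	return fmt.Errorf("timed out after %s: %v", timeout, lastErr)
    }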
	I0318 13:55:24.938470 1157887 settings.go:142] acquiring lock: {Name:mk2d6b94ee5fa5f1dbbb15ba1d5560c3c0f78110 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:55:24.938621 1157887 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 13:55:24.940984 1157887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/kubeconfig: {Name:mk9c139f2702214315ee08dd7c5d02f739047458 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:55:24.941286 1157887 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.3 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 13:55:24.943151 1157887 out.go:177] * Verifying Kubernetes components...
	I0318 13:55:24.941329 1157887 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 13:55:24.941469 1157887 config.go:182] Loaded profile config "default-k8s-diff-port-569210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:55:24.944770 1157887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:55:24.944780 1157887 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-569210"
	I0318 13:55:24.944830 1157887 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-569210"
	W0318 13:55:24.944845 1157887 addons.go:243] addon storage-provisioner should already be in state true
	I0318 13:55:24.944846 1157887 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-569210"
	I0318 13:55:24.944851 1157887 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-569210"
	I0318 13:55:24.944880 1157887 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-569210"
	I0318 13:55:24.944888 1157887 host.go:66] Checking if "default-k8s-diff-port-569210" exists ...
	W0318 13:55:24.944897 1157887 addons.go:243] addon metrics-server should already be in state true
	I0318 13:55:24.944927 1157887 host.go:66] Checking if "default-k8s-diff-port-569210" exists ...
	I0318 13:55:24.944881 1157887 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-569210"
	I0318 13:55:24.945311 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:24.945350 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:24.945375 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:24.945400 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:24.945311 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:24.945460 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:24.963173 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42139
	I0318 13:55:24.963820 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:24.964695 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:55:24.964725 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:24.965120 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:24.965696 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:24.965735 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:24.965976 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43645
	I0318 13:55:24.966207 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43495
	I0318 13:55:24.966502 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:24.966598 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:24.967058 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:55:24.967062 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:55:24.967083 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:24.967100 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:24.967467 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:24.967603 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:24.967671 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetState
	I0318 13:55:24.968107 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:24.968146 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:24.971673 1157887 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-569210"
	W0318 13:55:24.971696 1157887 addons.go:243] addon default-storageclass should already be in state true
	I0318 13:55:24.971729 1157887 host.go:66] Checking if "default-k8s-diff-port-569210" exists ...
	I0318 13:55:24.972091 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:24.972129 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:24.986041 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42211
	I0318 13:55:24.986481 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:24.986989 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:55:24.987009 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:24.987352 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:24.987605 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44555
	I0318 13:55:24.987613 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetState
	I0318 13:55:24.988061 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:24.988481 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:55:24.988499 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:24.988904 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:24.989082 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetState
	I0318 13:55:24.989785 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:55:24.992033 1157887 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 13:55:24.990673 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:55:24.991225 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36687
	I0318 13:55:24.993532 1157887 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 13:55:24.993557 1157887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 13:55:24.993587 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:55:24.995449 1157887 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:55:24.994077 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:24.996749 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:55:24.997153 1157887 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 13:55:24.997171 1157887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 13:55:24.997191 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:55:24.997431 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:55:24.997463 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:55:24.997466 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:55:24.997665 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:55:24.997684 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:24.997746 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:55:24.998183 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:24.998273 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:55:24.998497 1157887 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa Username:docker}
	I0318 13:55:24.998701 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:24.998735 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:24.999951 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:55:25.000431 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:55:25.000454 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:55:25.000676 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:55:25.000865 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:55:25.001021 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:55:25.001160 1157887 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa Username:docker}
	I0318 13:55:25.016442 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32783
	I0318 13:55:25.016827 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:25.017300 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:55:25.017328 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:25.017686 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:25.017906 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetState
	I0318 13:55:25.019440 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:55:25.019694 1157887 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 13:55:25.019711 1157887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 13:55:25.019731 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:55:25.022079 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:55:25.022370 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:55:25.022398 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:55:25.022497 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:55:25.022645 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:55:25.022762 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:55:25.022937 1157887 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa Username:docker}
	I0318 13:55:25.188474 1157887 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:55:25.208092 1157887 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-569210" to be "Ready" ...
	I0318 13:55:25.218757 1157887 node_ready.go:49] node "default-k8s-diff-port-569210" has status "Ready":"True"
	I0318 13:55:25.218789 1157887 node_ready.go:38] duration metric: took 10.658955ms for node "default-k8s-diff-port-569210" to be "Ready" ...
	I0318 13:55:25.218829 1157887 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:55:25.224381 1157887 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:25.235938 1157887 pod_ready.go:92] pod "etcd-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:25.235962 1157887 pod_ready.go:81] duration metric: took 11.550686ms for pod "etcd-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:25.235971 1157887 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:25.242985 1157887 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:25.243014 1157887 pod_ready.go:81] duration metric: took 7.034818ms for pod "kube-apiserver-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:25.243027 1157887 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:25.255777 1157887 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:25.255801 1157887 pod_ready.go:81] duration metric: took 12.766918ms for pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:25.255811 1157887 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2pp8z" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:25.301824 1157887 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 13:55:25.301846 1157887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 13:55:25.330301 1157887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 13:55:25.348473 1157887 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 13:55:25.348500 1157887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 13:55:25.365746 1157887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 13:55:25.398074 1157887 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 13:55:25.398099 1157887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 13:55:25.423951 1157887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 13:55:27.292115 1157887 pod_ready.go:92] pod "kube-proxy-2pp8z" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:27.292202 1157887 pod_ready.go:81] duration metric: took 2.036383518s for pod "kube-proxy-2pp8z" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:27.292227 1157887 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:27.299705 1157887 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:27.299732 1157887 pod_ready.go:81] duration metric: took 7.486631ms for pod "kube-scheduler-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:27.299743 1157887 pod_ready.go:38] duration metric: took 2.08090143s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:55:27.299762 1157887 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:55:27.299824 1157887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:55:27.706241 1157887 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.375885124s)
	I0318 13:55:27.706314 1157887 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:27.706326 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .Close
	I0318 13:55:27.706330 1157887 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.340547601s)
	I0318 13:55:27.706377 1157887 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:27.706392 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .Close
	I0318 13:55:27.706630 1157887 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.282631636s)
	I0318 13:55:27.706900 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | Closing plugin on server side
	I0318 13:55:27.706828 1157887 api_server.go:72] duration metric: took 2.765497711s to wait for apiserver process to appear ...
	I0318 13:55:27.706940 1157887 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:55:27.706879 1157887 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:27.706979 1157887 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:27.706996 1157887 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:27.707024 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .Close
	I0318 13:55:27.706916 1157887 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:27.707088 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .Close
	I0318 13:55:27.706985 1157887 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0318 13:55:27.707343 1157887 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:27.707366 1157887 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:27.707372 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | Closing plugin on server side
	I0318 13:55:27.707405 1157887 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:27.707417 1157887 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:27.707426 1157887 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:27.707455 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .Close
	I0318 13:55:27.707682 1157887 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:27.707696 1157887 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:27.707706 1157887 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-569210"
	I0318 13:55:27.708614 1157887 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:27.708664 1157887 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:27.708694 1157887 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:27.708783 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .Close
	I0318 13:55:27.709092 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | Closing plugin on server side
	I0318 13:55:27.709151 1157887 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:27.709175 1157887 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:27.718110 1157887 api_server.go:279] https://192.168.61.3:8444/healthz returned 200:
	ok
	I0318 13:55:27.719497 1157887 api_server.go:141] control plane version: v1.28.4
	I0318 13:55:27.719518 1157887 api_server.go:131] duration metric: took 12.563372ms to wait for apiserver health ...
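The healthz wait above is just an HTTPS GET against the apiserver's /healthz endpoint until it answers 200 "ok". A minimal Go sketch of that kind of probe follows; the URL and timeout are taken from or inferred from the log, and certificate verification is skipped only because this is a simplified illustration for a throwaway test cluster, not how minikube itself authenticates.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// checkHealthz performs one GET against the apiserver /healthz endpoint and
	// reports whether it answered 200.
	func checkHealthz(url string) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Simplification: the test cluster uses a self-signed CA, so this
			// sketch skips verification instead of loading the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
		}
		return nil
	}

	func main() {
		// Address copied from the log above; adjust for your cluster.
		if err := checkHealthz("https://192.168.61.3:8444/healthz"); err != nil {
			fmt.Println("not healthy yet:", err)
			return
		}
		fmt.Println("ok")
	}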
	I0318 13:55:27.719526 1157887 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 13:55:27.739882 1157887 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:27.739914 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .Close
	I0318 13:55:27.740263 1157887 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:27.740296 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | Closing plugin on server side
	I0318 13:55:27.740318 1157887 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:27.742102 1157887 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0318 13:55:27.368024 1157263 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (33.223901258s)
	I0318 13:55:27.368118 1157263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:55:27.388474 1157263 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:55:27.402749 1157263 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:55:27.417121 1157263 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:55:27.417184 1157263 kubeadm.go:156] found existing configuration files:
	
	I0318 13:55:27.417235 1157263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:55:27.429920 1157263 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:55:27.429997 1157263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:55:27.442468 1157263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:55:27.454842 1157263 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:55:27.454913 1157263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:55:27.467911 1157263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:55:27.480201 1157263 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:55:27.480272 1157263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:55:27.496430 1157263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:55:27.512020 1157263 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:55:27.512092 1157263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
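The stale-config cleanup above greps each kubeconfig under /etc/kubernetes for the expected control-plane URL and removes any file that does not contain it (here they are simply missing after the reset, so all four are removed and kubeadm regenerates them). A rough local equivalent of that loop, as a hedged sketch rather than minikube's actual runner-over-SSH code:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// removeStaleKubeconfigs deletes any kubeconfig that does not reference the
	// expected control-plane endpoint, mirroring the grep/rm sequence in the log.
	func removeStaleKubeconfigs(endpoint string, files []string) {
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !strings.Contains(string(data), endpoint) {
				// Missing file or stale endpoint: remove it so kubeadm writes a fresh one.
				os.Remove(f)
				fmt.Println("removed stale config:", f)
			}
		}
	}

	func main() {
		removeStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
	}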
	I0318 13:55:27.528102 1157263 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 13:55:27.601072 1157263 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 13:55:27.601235 1157263 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 13:55:27.796445 1157263 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 13:55:27.796574 1157263 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 13:55:27.796730 1157263 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 13:55:28.079026 1157263 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 13:55:27.743429 1157887 addons.go:505] duration metric: took 2.802098895s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I0318 13:55:27.744694 1157887 system_pods.go:59] 9 kube-system pods found
	I0318 13:55:27.744727 1157887 system_pods.go:61] "coredns-5dd5756b68-j5qxm" [164d2cc3-0891-4fcd-81bd-34d7cf0c691c] Running
	I0318 13:55:27.744733 1157887 system_pods.go:61] "coredns-5dd5756b68-xdcht" [bf264558-6c11-44c9-82d6-ea23aea43dc9] Running
	I0318 13:55:27.744738 1157887 system_pods.go:61] "etcd-default-k8s-diff-port-569210" [8d51c0c6-6005-4f76-917c-20f07b73742f] Running
	I0318 13:55:27.744744 1157887 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-569210" [31a8160d-14db-4383-b833-a8bc3f5990ba] Running
	I0318 13:55:27.744750 1157887 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-569210" [173e4d84-8dc2-47fc-9c4d-ed613d180813] Running
	I0318 13:55:27.744756 1157887 system_pods.go:61] "kube-proxy-2pp8z" [912b3f56-3df6-485f-a01a-60801b867b86] Running
	I0318 13:55:27.744764 1157887 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-569210" [1ee4e8f8-3fad-45a8-be35-25a879aaaa7b] Running
	I0318 13:55:27.744777 1157887 system_pods.go:61] "metrics-server-57f55c9bc5-ng9ww" [4c8209dc-b6ba-427d-ba32-0da4993b0902] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:55:27.744783 1157887 system_pods.go:61] "storage-provisioner" [f0dfdeb1-f567-41df-98c3-7987f0fd7b2b] Pending
	I0318 13:55:27.744797 1157887 system_pods.go:74] duration metric: took 25.264322ms to wait for pod list to return data ...
	I0318 13:55:27.744810 1157887 default_sa.go:34] waiting for default service account to be created ...
	I0318 13:55:27.755398 1157887 default_sa.go:45] found service account: "default"
	I0318 13:55:27.755427 1157887 default_sa.go:55] duration metric: took 10.607153ms for default service account to be created ...
	I0318 13:55:27.755439 1157887 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 13:55:27.815477 1157887 system_pods.go:86] 9 kube-system pods found
	I0318 13:55:27.815507 1157887 system_pods.go:89] "coredns-5dd5756b68-j5qxm" [164d2cc3-0891-4fcd-81bd-34d7cf0c691c] Running
	I0318 13:55:27.815512 1157887 system_pods.go:89] "coredns-5dd5756b68-xdcht" [bf264558-6c11-44c9-82d6-ea23aea43dc9] Running
	I0318 13:55:27.815517 1157887 system_pods.go:89] "etcd-default-k8s-diff-port-569210" [8d51c0c6-6005-4f76-917c-20f07b73742f] Running
	I0318 13:55:27.815521 1157887 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-569210" [31a8160d-14db-4383-b833-a8bc3f5990ba] Running
	I0318 13:55:27.815526 1157887 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-569210" [173e4d84-8dc2-47fc-9c4d-ed613d180813] Running
	I0318 13:55:27.815529 1157887 system_pods.go:89] "kube-proxy-2pp8z" [912b3f56-3df6-485f-a01a-60801b867b86] Running
	I0318 13:55:27.815533 1157887 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-569210" [1ee4e8f8-3fad-45a8-be35-25a879aaaa7b] Running
	I0318 13:55:27.815540 1157887 system_pods.go:89] "metrics-server-57f55c9bc5-ng9ww" [4c8209dc-b6ba-427d-ba32-0da4993b0902] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:55:27.815546 1157887 system_pods.go:89] "storage-provisioner" [f0dfdeb1-f567-41df-98c3-7987f0fd7b2b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 13:55:27.815557 1157887 system_pods.go:126] duration metric: took 60.111832ms to wait for k8s-apps to be running ...
	I0318 13:55:27.815566 1157887 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 13:55:27.815610 1157887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:55:27.834266 1157887 system_svc.go:56] duration metric: took 18.687554ms WaitForService to wait for kubelet
	I0318 13:55:27.834304 1157887 kubeadm.go:576] duration metric: took 2.892974502s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:55:27.834345 1157887 node_conditions.go:102] verifying NodePressure condition ...
	I0318 13:55:28.013031 1157887 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:55:28.013095 1157887 node_conditions.go:123] node cpu capacity is 2
	I0318 13:55:28.013148 1157887 node_conditions.go:105] duration metric: took 178.79502ms to run NodePressure ...
	I0318 13:55:28.013169 1157887 start.go:240] waiting for startup goroutines ...
	I0318 13:55:28.013181 1157887 start.go:245] waiting for cluster config update ...
	I0318 13:55:28.013199 1157887 start.go:254] writing updated cluster config ...
	I0318 13:55:28.013519 1157887 ssh_runner.go:195] Run: rm -f paused
	I0318 13:55:28.092810 1157887 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 13:55:28.095783 1157887 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-569210" cluster and "default" namespace by default
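The "(minor skew: 1)" note on the line above comes from comparing the kubectl client's minor version (1.29) with the cluster's (1.28). One way such a check could be written, with the version strings hard-coded from the log purely for illustration:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// minorSkew returns the absolute difference between the minor components of
	// two version strings such as "1.29.3" and "1.28.4".
	func minorSkew(client, cluster string) (int, error) {
		minor := func(v string) (int, error) {
			parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
			if len(parts) < 2 {
				return 0, fmt.Errorf("unexpected version %q", v)
			}
			return strconv.Atoi(parts[1])
		}
		c, err := minor(client)
		if err != nil {
			return 0, err
		}
		s, err := minor(cluster)
		if err != nil {
			return 0, err
		}
		if c > s {
			return c - s, nil
		}
		return s - c, nil
	}

	func main() {
		skew, _ := minorSkew("1.29.3", "1.28.4")
		fmt.Println("minor skew:", skew) // prints 1, matching the log line above
	}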
	I0318 13:55:28.080939 1157263 out.go:204]   - Generating certificates and keys ...
	I0318 13:55:28.081056 1157263 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 13:55:28.081145 1157263 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 13:55:28.081249 1157263 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 13:55:28.082078 1157263 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 13:55:28.082860 1157263 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 13:55:28.083397 1157263 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 13:55:28.084597 1157263 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 13:55:28.084941 1157263 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 13:55:28.085603 1157263 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 13:55:28.086461 1157263 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 13:55:28.087265 1157263 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 13:55:28.087343 1157263 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 13:55:28.348996 1157263 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 13:55:28.516513 1157263 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 13:55:28.585513 1157263 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 13:55:28.817150 1157263 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 13:55:28.817900 1157263 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 13:55:28.820280 1157263 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 13:55:28.822114 1157263 out.go:204]   - Booting up control plane ...
	I0318 13:55:28.822217 1157263 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 13:55:28.822811 1157263 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 13:55:28.825310 1157263 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 13:55:28.845906 1157263 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 13:55:28.847013 1157263 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 13:55:28.847069 1157263 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 13:55:28.992421 1157263 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 13:55:35.495384 1157263 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.502688 seconds
	I0318 13:55:35.495578 1157263 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 13:55:35.517088 1157263 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 13:55:36.049915 1157263 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 13:55:36.050163 1157263 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-173036 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 13:55:36.571450 1157263 kubeadm.go:309] [bootstrap-token] Using token: a1fi6l.v36l7wrnalucsepl
	I0318 13:55:36.573263 1157263 out.go:204]   - Configuring RBAC rules ...
	I0318 13:55:36.573448 1157263 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 13:55:36.581322 1157263 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 13:55:36.594853 1157263 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 13:55:36.598538 1157263 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 13:55:36.602430 1157263 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 13:55:36.605534 1157263 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 13:55:36.621332 1157263 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 13:55:36.865518 1157263 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 13:55:36.990015 1157263 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 13:55:36.991079 1157263 kubeadm.go:309] 
	I0318 13:55:36.991168 1157263 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 13:55:36.991181 1157263 kubeadm.go:309] 
	I0318 13:55:36.991288 1157263 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 13:55:36.991299 1157263 kubeadm.go:309] 
	I0318 13:55:36.991320 1157263 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 13:55:36.991395 1157263 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 13:55:36.991475 1157263 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 13:55:36.991494 1157263 kubeadm.go:309] 
	I0318 13:55:36.991572 1157263 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 13:55:36.991581 1157263 kubeadm.go:309] 
	I0318 13:55:36.991646 1157263 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 13:55:36.991658 1157263 kubeadm.go:309] 
	I0318 13:55:36.991737 1157263 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 13:55:36.991839 1157263 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 13:55:36.991954 1157263 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 13:55:36.991966 1157263 kubeadm.go:309] 
	I0318 13:55:36.992073 1157263 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 13:55:36.992174 1157263 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 13:55:36.992186 1157263 kubeadm.go:309] 
	I0318 13:55:36.992304 1157263 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token a1fi6l.v36l7wrnalucsepl \
	I0318 13:55:36.992477 1157263 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a1344f0f3711f58fc1cd7b9626e9cee7b8515c38b8838e5f1a0d7979c7202ddf \
	I0318 13:55:36.992522 1157263 kubeadm.go:309] 	--control-plane 
	I0318 13:55:36.992532 1157263 kubeadm.go:309] 
	I0318 13:55:36.992642 1157263 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 13:55:36.992656 1157263 kubeadm.go:309] 
	I0318 13:55:36.992769 1157263 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token a1fi6l.v36l7wrnalucsepl \
	I0318 13:55:36.992922 1157263 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a1344f0f3711f58fc1cd7b9626e9cee7b8515c38b8838e5f1a0d7979c7202ddf 
	I0318 13:55:36.994542 1157263 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
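The --discovery-token-ca-cert-hash value printed in the join commands above is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. A small Go sketch that recomputes it from a CA certificate PEM; the file path is inferred from the "[certs] Using certificateDir folder" line earlier and may differ on other setups.

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	// caCertHash reproduces the value kubeadm prints as
	// --discovery-token-ca-cert-hash sha256:<hex>.
	func caCertHash(pemPath string) (string, error) {
		data, err := os.ReadFile(pemPath)
		if err != nil {
			return "", err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return "", fmt.Errorf("no PEM block in %s", pemPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return "", err
		}
		// The hash covers the raw SubjectPublicKeyInfo, not the whole certificate.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		return fmt.Sprintf("sha256:%x", sum), nil
	}

	func main() {
		h, err := caCertHash("/var/lib/minikube/certs/ca.crt") // path assumed from the certificateDir above
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println(h)
	}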
	I0318 13:55:36.994648 1157263 cni.go:84] Creating CNI manager for ""
	I0318 13:55:36.994660 1157263 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:55:36.996526 1157263 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 13:55:36.997929 1157263 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 13:55:37.047757 1157263 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 13:55:37.075078 1157263 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 13:55:37.075167 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:37.075199 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-173036 minikube.k8s.io/updated_at=2024_03_18T13_55_37_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a minikube.k8s.io/name=embed-certs-173036 minikube.k8s.io/primary=true
	I0318 13:55:37.236857 1157263 ops.go:34] apiserver oom_adj: -16
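The "apiserver oom_adj: -16" line is read straight from procfs for the kube-apiserver process (via the cat /proc/$(pgrep kube-apiserver)/oom_adj run a few lines earlier); a strongly negative value makes the kernel OOM killer much less likely to pick that process. A tiny sketch of the same read, with the pgrep lookup replaced by a PID argument for simplicity:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// apiserverOOMAdj reads /proc/<pid>/oom_adj, the value the log reports as -16
	// for kube-apiserver.
	func apiserverOOMAdj(pid int) (string, error) {
		data, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(data)), nil
	}

	func main() {
		v, err := apiserverOOMAdj(1) // replace 1 with the kube-apiserver PID (e.g. from pgrep)
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("oom_adj:", v)
	}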
	I0318 13:55:37.422453 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:37.922622 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:38.423527 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:38.922743 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:39.422721 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:39.923438 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:40.422599 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:40.923170 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:41.422812 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:41.922526 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:42.422594 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:42.922835 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:43.423479 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:43.923114 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:44.422672 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:44.922883 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:45.422863 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:45.922770 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:46.423473 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:46.923125 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:47.423378 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:47.923366 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:48.422566 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:48.923231 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:49.422505 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:49.554542 1157263 kubeadm.go:1107] duration metric: took 12.479441091s to wait for elevateKubeSystemPrivileges
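The long run of "kubectl get sa default" lines leading up to the elevateKubeSystemPrivileges summary above appears to be a poll, retried roughly every half second, until the default service account exists and the RBAC binding can be applied. A minimal polling loop with the same shape, shelling out to kubectl; the binary path and kubeconfig are placeholders rather than minikube's exact invocation:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultSA polls `kubectl get sa default` until it succeeds or the
	// deadline passes, mirroring the ~500ms retry cadence visible in the log.
	func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig", kubeconfig)
			if err := cmd.Run(); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("default service account not created within %s", timeout)
	}

	func main() {
		// Placeholders: inside the minikube VM these would be the versioned
		// kubectl binary and /var/lib/minikube/kubeconfig.
		if err := waitForDefaultSA("kubectl", "/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}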
	W0318 13:55:49.554590 1157263 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 13:55:49.554602 1157263 kubeadm.go:393] duration metric: took 5m13.226983757s to StartCluster
	I0318 13:55:49.554626 1157263 settings.go:142] acquiring lock: {Name:mk2d6b94ee5fa5f1dbbb15ba1d5560c3c0f78110 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:55:49.554778 1157263 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 13:55:49.556962 1157263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/kubeconfig: {Name:mk9c139f2702214315ee08dd7c5d02f739047458 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:55:49.557273 1157263 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.191 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 13:55:49.558774 1157263 out.go:177] * Verifying Kubernetes components...
	I0318 13:55:49.557321 1157263 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 13:55:49.557488 1157263 config.go:182] Loaded profile config "embed-certs-173036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:55:49.560195 1157263 addons.go:69] Setting default-storageclass=true in profile "embed-certs-173036"
	I0318 13:55:49.560201 1157263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:55:49.560211 1157263 addons.go:69] Setting metrics-server=true in profile "embed-certs-173036"
	I0318 13:55:49.560237 1157263 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-173036"
	I0318 13:55:49.560247 1157263 addons.go:234] Setting addon metrics-server=true in "embed-certs-173036"
	W0318 13:55:49.560254 1157263 addons.go:243] addon metrics-server should already be in state true
	I0318 13:55:49.560201 1157263 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-173036"
	I0318 13:55:49.560282 1157263 host.go:66] Checking if "embed-certs-173036" exists ...
	I0318 13:55:49.560302 1157263 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-173036"
	W0318 13:55:49.560317 1157263 addons.go:243] addon storage-provisioner should already be in state true
	I0318 13:55:49.560388 1157263 host.go:66] Checking if "embed-certs-173036" exists ...
	I0318 13:55:49.560644 1157263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:49.560676 1157263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:49.560678 1157263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:49.560716 1157263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:49.560777 1157263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:49.560803 1157263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:49.577682 1157263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32889
	I0318 13:55:49.577714 1157263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38841
	I0318 13:55:49.578101 1157263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46617
	I0318 13:55:49.578261 1157263 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:49.578285 1157263 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:49.578493 1157263 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:49.578880 1157263 main.go:141] libmachine: Using API Version  1
	I0318 13:55:49.578907 1157263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:49.578882 1157263 main.go:141] libmachine: Using API Version  1
	I0318 13:55:49.578923 1157263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:49.579013 1157263 main.go:141] libmachine: Using API Version  1
	I0318 13:55:49.579036 1157263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:49.579302 1157263 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:49.579333 1157263 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:49.579538 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetState
	I0318 13:55:49.579598 1157263 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:49.579914 1157263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:49.579955 1157263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:49.580203 1157263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:49.580238 1157263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:49.583587 1157263 addons.go:234] Setting addon default-storageclass=true in "embed-certs-173036"
	W0318 13:55:49.583610 1157263 addons.go:243] addon default-storageclass should already be in state true
	I0318 13:55:49.583641 1157263 host.go:66] Checking if "embed-certs-173036" exists ...
	I0318 13:55:49.584009 1157263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:49.584040 1157263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:49.596862 1157263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46015
	I0318 13:55:49.597356 1157263 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:49.597859 1157263 main.go:141] libmachine: Using API Version  1
	I0318 13:55:49.598026 1157263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:49.598110 1157263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38169
	I0318 13:55:49.598635 1157263 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:49.599310 1157263 main.go:141] libmachine: Using API Version  1
	I0318 13:55:49.599331 1157263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:49.599405 1157263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36747
	I0318 13:55:49.599732 1157263 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:49.599874 1157263 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:49.600120 1157263 main.go:141] libmachine: Using API Version  1
	I0318 13:55:49.600135 1157263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:49.600197 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetState
	I0318 13:55:49.600439 1157263 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:49.601019 1157263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:49.601052 1157263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:49.602172 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:55:49.604115 1157263 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:55:49.606034 1157263 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 13:55:49.606049 1157263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 13:55:49.606065 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:55:49.603277 1157263 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:49.606323 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetState
	I0318 13:55:49.608600 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:55:49.610213 1157263 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 13:55:49.611511 1157263 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 13:55:49.611531 1157263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 13:55:49.611545 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:55:49.609758 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:55:49.611598 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:55:49.611613 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:55:49.610550 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:55:49.611727 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:55:49.611868 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:55:49.611991 1157263 sshutil.go:53] new ssh client: &{IP:192.168.50.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa Username:docker}
	I0318 13:55:49.614689 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:55:49.615105 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:55:49.615322 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:55:49.615403 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:55:49.615531 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:55:49.615672 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:55:49.615773 1157263 sshutil.go:53] new ssh client: &{IP:192.168.50.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa Username:docker}
	I0318 13:55:49.620257 1157263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41729
	I0318 13:55:49.620653 1157263 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:49.621225 1157263 main.go:141] libmachine: Using API Version  1
	I0318 13:55:49.621243 1157263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:49.621610 1157263 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:49.621790 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetState
	I0318 13:55:49.623303 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:55:49.623566 1157263 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 13:55:49.623580 1157263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 13:55:49.623594 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:55:49.626325 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:55:49.626733 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:55:49.626755 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:55:49.627028 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:55:49.627196 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:55:49.627335 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:55:49.627441 1157263 sshutil.go:53] new ssh client: &{IP:192.168.50.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa Username:docker}
	I0318 13:55:49.791524 1157263 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:55:49.847829 1157263 node_ready.go:35] waiting up to 6m0s for node "embed-certs-173036" to be "Ready" ...
	I0318 13:55:49.860595 1157263 node_ready.go:49] node "embed-certs-173036" has status "Ready":"True"
	I0318 13:55:49.860621 1157263 node_ready.go:38] duration metric: took 12.757412ms for node "embed-certs-173036" to be "Ready" ...
	I0318 13:55:49.860631 1157263 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:55:49.870524 1157263 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ft594" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:49.917170 1157263 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 13:55:49.917197 1157263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 13:55:49.965845 1157263 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 13:55:49.965871 1157263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 13:55:49.969600 1157263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 13:55:49.982887 1157263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 13:55:50.023768 1157263 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 13:55:50.023795 1157263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 13:55:50.139120 1157263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 13:55:51.877589 1157263 pod_ready.go:92] pod "coredns-5dd5756b68-ft594" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:51.877618 1157263 pod_ready.go:81] duration metric: took 2.007066644s for pod "coredns-5dd5756b68-ft594" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:51.877634 1157263 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-p6dw8" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.007908 1157263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.02498147s)
	I0318 13:55:52.007966 1157263 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:52.007979 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .Close
	I0318 13:55:52.008318 1157263 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:52.008378 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | Closing plugin on server side
	I0318 13:55:52.008383 1157263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:52.008408 1157263 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:52.008427 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .Close
	I0318 13:55:52.008713 1157263 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:52.008827 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | Closing plugin on server side
	I0318 13:55:52.008853 1157263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:52.009491 1157263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.039858476s)
	I0318 13:55:52.009567 1157263 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:52.009595 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .Close
	I0318 13:55:52.010239 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | Closing plugin on server side
	I0318 13:55:52.010242 1157263 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:52.010276 1157263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:52.010289 1157263 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:52.010301 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .Close
	I0318 13:55:52.010553 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | Closing plugin on server side
	I0318 13:55:52.010568 1157263 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:52.010578 1157263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:52.026035 1157263 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:52.026056 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .Close
	I0318 13:55:52.026364 1157263 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:52.026385 1157263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:52.202596 1157263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.063427726s)
	I0318 13:55:52.202663 1157263 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:52.202686 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .Close
	I0318 13:55:52.202999 1157263 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:52.203021 1157263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:52.203032 1157263 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:52.203040 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .Close
	I0318 13:55:52.203321 1157263 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:52.203338 1157263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:52.203352 1157263 addons.go:470] Verifying addon metrics-server=true in "embed-certs-173036"
	I0318 13:55:52.205372 1157263 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0318 13:55:52.207184 1157263 addons.go:505] duration metric: took 2.649872416s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0318 13:55:52.391839 1157263 pod_ready.go:92] pod "coredns-5dd5756b68-p6dw8" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:52.391878 1157263 pod_ready.go:81] duration metric: took 514.235543ms for pod "coredns-5dd5756b68-p6dw8" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.391891 1157263 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.398044 1157263 pod_ready.go:92] pod "etcd-embed-certs-173036" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:52.398075 1157263 pod_ready.go:81] duration metric: took 6.176672ms for pod "etcd-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.398091 1157263 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.403790 1157263 pod_ready.go:92] pod "kube-apiserver-embed-certs-173036" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:52.403809 1157263 pod_ready.go:81] duration metric: took 5.70927ms for pod "kube-apiserver-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.403817 1157263 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.414956 1157263 pod_ready.go:92] pod "kube-controller-manager-embed-certs-173036" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:52.414976 1157263 pod_ready.go:81] duration metric: took 11.153442ms for pod "kube-controller-manager-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.414986 1157263 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lp9mc" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.674125 1157263 pod_ready.go:92] pod "kube-proxy-lp9mc" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:52.674151 1157263 pod_ready.go:81] duration metric: took 259.158776ms for pod "kube-proxy-lp9mc" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.674160 1157263 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:53.075385 1157263 pod_ready.go:92] pod "kube-scheduler-embed-certs-173036" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:53.075420 1157263 pod_ready.go:81] duration metric: took 401.251175ms for pod "kube-scheduler-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:53.075432 1157263 pod_ready.go:38] duration metric: took 3.214790175s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:55:53.075452 1157263 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:55:53.075523 1157263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:55:53.092916 1157263 api_server.go:72] duration metric: took 3.53560403s to wait for apiserver process to appear ...
	I0318 13:55:53.092948 1157263 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:55:53.093027 1157263 api_server.go:253] Checking apiserver healthz at https://192.168.50.191:8443/healthz ...
	I0318 13:55:53.098715 1157263 api_server.go:279] https://192.168.50.191:8443/healthz returned 200:
	ok
	I0318 13:55:53.100073 1157263 api_server.go:141] control plane version: v1.28.4
	I0318 13:55:53.100102 1157263 api_server.go:131] duration metric: took 7.134408ms to wait for apiserver health ...
	I0318 13:55:53.100113 1157263 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 13:55:53.278961 1157263 system_pods.go:59] 9 kube-system pods found
	I0318 13:55:53.278993 1157263 system_pods.go:61] "coredns-5dd5756b68-ft594" [46e6863a-0b5e-434e-b13c-d33e9ed15007] Running
	I0318 13:55:53.278998 1157263 system_pods.go:61] "coredns-5dd5756b68-p6dw8" [c03d9bbe-1493-44a4-be19-1e387ff6eaef] Running
	I0318 13:55:53.279002 1157263 system_pods.go:61] "etcd-embed-certs-173036" [0351a0a6-7bf0-49b7-b767-b1009ea8f8b3] Running
	I0318 13:55:53.279005 1157263 system_pods.go:61] "kube-apiserver-embed-certs-173036" [d045c63b-ff93-4ebc-a727-486fbad1d1b6] Running
	I0318 13:55:53.279010 1157263 system_pods.go:61] "kube-controller-manager-embed-certs-173036" [77925f6c-f839-44ce-8438-0b2ff22eb538] Running
	I0318 13:55:53.279013 1157263 system_pods.go:61] "kube-proxy-lp9mc" [4d2d1ef6-fb3b-4910-9e70-401dfa0c47e0] Running
	I0318 13:55:53.279017 1157263 system_pods.go:61] "kube-scheduler-embed-certs-173036" [a63fa49c-e09a-43ef-b0a2-f778c256c0ab] Running
	I0318 13:55:53.279023 1157263 system_pods.go:61] "metrics-server-57f55c9bc5-vzv79" [1fc71314-b3e7-4113-b254-557ec39eef43] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:55:53.279026 1157263 system_pods.go:61] "storage-provisioner" [a37883b5-9db5-467e-9b91-40f6ea69c18e] Running
	I0318 13:55:53.279037 1157263 system_pods.go:74] duration metric: took 178.915393ms to wait for pod list to return data ...
	I0318 13:55:53.279047 1157263 default_sa.go:34] waiting for default service account to be created ...
	I0318 13:55:53.475094 1157263 default_sa.go:45] found service account: "default"
	I0318 13:55:53.475123 1157263 default_sa.go:55] duration metric: took 196.069593ms for default service account to be created ...
	I0318 13:55:53.475133 1157263 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 13:55:53.678384 1157263 system_pods.go:86] 9 kube-system pods found
	I0318 13:55:53.678413 1157263 system_pods.go:89] "coredns-5dd5756b68-ft594" [46e6863a-0b5e-434e-b13c-d33e9ed15007] Running
	I0318 13:55:53.678418 1157263 system_pods.go:89] "coredns-5dd5756b68-p6dw8" [c03d9bbe-1493-44a4-be19-1e387ff6eaef] Running
	I0318 13:55:53.678422 1157263 system_pods.go:89] "etcd-embed-certs-173036" [0351a0a6-7bf0-49b7-b767-b1009ea8f8b3] Running
	I0318 13:55:53.678427 1157263 system_pods.go:89] "kube-apiserver-embed-certs-173036" [d045c63b-ff93-4ebc-a727-486fbad1d1b6] Running
	I0318 13:55:53.678431 1157263 system_pods.go:89] "kube-controller-manager-embed-certs-173036" [77925f6c-f839-44ce-8438-0b2ff22eb538] Running
	I0318 13:55:53.678436 1157263 system_pods.go:89] "kube-proxy-lp9mc" [4d2d1ef6-fb3b-4910-9e70-401dfa0c47e0] Running
	I0318 13:55:53.678439 1157263 system_pods.go:89] "kube-scheduler-embed-certs-173036" [a63fa49c-e09a-43ef-b0a2-f778c256c0ab] Running
	I0318 13:55:53.678447 1157263 system_pods.go:89] "metrics-server-57f55c9bc5-vzv79" [1fc71314-b3e7-4113-b254-557ec39eef43] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:55:53.678455 1157263 system_pods.go:89] "storage-provisioner" [a37883b5-9db5-467e-9b91-40f6ea69c18e] Running
	I0318 13:55:53.678464 1157263 system_pods.go:126] duration metric: took 203.32588ms to wait for k8s-apps to be running ...
	I0318 13:55:53.678473 1157263 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 13:55:53.678531 1157263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:55:53.698244 1157263 system_svc.go:56] duration metric: took 19.758793ms WaitForService to wait for kubelet
	I0318 13:55:53.698279 1157263 kubeadm.go:576] duration metric: took 4.140974066s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:55:53.698307 1157263 node_conditions.go:102] verifying NodePressure condition ...
	I0318 13:55:53.876137 1157263 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:55:53.876162 1157263 node_conditions.go:123] node cpu capacity is 2
	I0318 13:55:53.876173 1157263 node_conditions.go:105] duration metric: took 177.861272ms to run NodePressure ...
	I0318 13:55:53.876184 1157263 start.go:240] waiting for startup goroutines ...
	I0318 13:55:53.876191 1157263 start.go:245] waiting for cluster config update ...
	I0318 13:55:53.876202 1157263 start.go:254] writing updated cluster config ...
	I0318 13:55:53.876907 1157263 ssh_runner.go:195] Run: rm -f paused
	I0318 13:55:53.931596 1157263 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 13:55:53.933499 1157263 out.go:177] * Done! kubectl is now configured to use "embed-certs-173036" cluster and "default" namespace by default
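	The readiness checks logged above (apiserver healthz probe, then kube-system pod status) can be reproduced by hand against the same cluster. The lines below are only an illustrative sketch: the endpoint 192.168.50.191:8443 and the context name come from the log, while the -k flag is assumed because the probe goes over the cluster's self-signed TLS:
	  curl -ks https://192.168.50.191:8443/healthz                    # expect "ok", matching the healthz response logged above
	  kubectl --context embed-certs-173036 get pods -n kube-system    # system pods should report Running (metrics-server may still be Pending, as above)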
	I0318 13:55:56.115397 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:55:56.115674 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:55:56.115714 1157708 kubeadm.go:309] 
	I0318 13:55:56.115782 1157708 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 13:55:56.115840 1157708 kubeadm.go:309] 		timed out waiting for the condition
	I0318 13:55:56.115849 1157708 kubeadm.go:309] 
	I0318 13:55:56.115908 1157708 kubeadm.go:309] 	This error is likely caused by:
	I0318 13:55:56.115979 1157708 kubeadm.go:309] 		- The kubelet is not running
	I0318 13:55:56.116102 1157708 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 13:55:56.116112 1157708 kubeadm.go:309] 
	I0318 13:55:56.116242 1157708 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 13:55:56.116289 1157708 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 13:55:56.116349 1157708 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 13:55:56.116370 1157708 kubeadm.go:309] 
	I0318 13:55:56.116506 1157708 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 13:55:56.116645 1157708 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 13:55:56.116665 1157708 kubeadm.go:309] 
	I0318 13:55:56.116804 1157708 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 13:55:56.116897 1157708 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 13:55:56.117005 1157708 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 13:55:56.117094 1157708 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 13:55:56.117110 1157708 kubeadm.go:309] 
	I0318 13:55:56.117680 1157708 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 13:55:56.117813 1157708 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 13:55:56.117934 1157708 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0318 13:55:56.118052 1157708 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0318 13:55:56.118124 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 13:55:57.920938 1157708 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.802776126s)
	I0318 13:55:57.921031 1157708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:55:57.939226 1157708 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:55:57.952304 1157708 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:55:57.952342 1157708 kubeadm.go:156] found existing configuration files:
	
	I0318 13:55:57.952404 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:55:57.964632 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:55:57.964695 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:55:57.977306 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:55:57.989728 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:55:57.989790 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:55:58.001661 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:55:58.013078 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:55:58.013160 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:55:58.024891 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:55:58.036171 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:55:58.036225 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
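	The four grep-then-remove pairs above are the stale kubeconfig cleanup minikube performs before retrying kubeadm init; condensed into a single loop (an illustrative sketch of what ssh_runner executes on the node, not the literal command the tool runs) it amounts to:
	  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	    sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	      || sudo rm -f "/etc/kubernetes/$f"    # drop any kubeconfig that does not point at the expected endpoint
	  done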
	I0318 13:55:58.048156 1157708 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 13:55:58.128356 1157708 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 13:55:58.128445 1157708 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 13:55:58.297704 1157708 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 13:55:58.297897 1157708 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 13:55:58.298048 1157708 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 13:55:58.515521 1157708 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 13:55:58.517569 1157708 out.go:204]   - Generating certificates and keys ...
	I0318 13:55:58.517679 1157708 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 13:55:58.517760 1157708 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 13:55:58.517830 1157708 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 13:55:58.517908 1157708 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 13:55:58.517980 1157708 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 13:55:58.518047 1157708 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 13:55:58.518280 1157708 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 13:55:58.519078 1157708 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 13:55:58.520081 1157708 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 13:55:58.521268 1157708 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 13:55:58.521861 1157708 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 13:55:58.521936 1157708 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 13:55:58.762418 1157708 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 13:55:58.999746 1157708 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 13:55:59.214448 1157708 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 13:55:59.402662 1157708 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 13:55:59.421555 1157708 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 13:55:59.423151 1157708 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 13:55:59.423233 1157708 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 13:55:59.560412 1157708 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 13:55:59.563125 1157708 out.go:204]   - Booting up control plane ...
	I0318 13:55:59.563274 1157708 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 13:55:59.571364 1157708 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 13:55:59.572936 1157708 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 13:55:59.573987 1157708 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 13:55:59.586689 1157708 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 13:56:39.588627 1157708 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 13:56:39.588942 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:56:39.589128 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:56:44.589564 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:56:44.589852 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:56:54.590311 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:56:54.590619 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:57:14.591571 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:57:14.591866 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:57:54.594170 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:57:54.594433 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:57:54.594448 1157708 kubeadm.go:309] 
	I0318 13:57:54.594490 1157708 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 13:57:54.594540 1157708 kubeadm.go:309] 		timed out waiting for the condition
	I0318 13:57:54.594549 1157708 kubeadm.go:309] 
	I0318 13:57:54.594594 1157708 kubeadm.go:309] 	This error is likely caused by:
	I0318 13:57:54.594641 1157708 kubeadm.go:309] 		- The kubelet is not running
	I0318 13:57:54.594800 1157708 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 13:57:54.594811 1157708 kubeadm.go:309] 
	I0318 13:57:54.594950 1157708 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 13:57:54.595000 1157708 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 13:57:54.595046 1157708 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 13:57:54.595056 1157708 kubeadm.go:309] 
	I0318 13:57:54.595163 1157708 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 13:57:54.595297 1157708 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 13:57:54.595312 1157708 kubeadm.go:309] 
	I0318 13:57:54.595471 1157708 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 13:57:54.595605 1157708 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 13:57:54.595716 1157708 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 13:57:54.595812 1157708 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 13:57:54.595827 1157708 kubeadm.go:309] 
	I0318 13:57:54.596636 1157708 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 13:57:54.596805 1157708 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 13:57:54.596972 1157708 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0318 13:57:54.597014 1157708 kubeadm.go:393] duration metric: took 8m1.551231902s to StartCluster
	I0318 13:57:54.597076 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:57:54.597174 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:57:54.649451 1157708 cri.go:89] found id: ""
	I0318 13:57:54.649484 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.649496 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:57:54.649506 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:57:54.649577 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:57:54.692278 1157708 cri.go:89] found id: ""
	I0318 13:57:54.692317 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.692339 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:57:54.692349 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:57:54.692427 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:57:54.731034 1157708 cri.go:89] found id: ""
	I0318 13:57:54.731062 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.731071 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:57:54.731077 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:57:54.731135 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:57:54.769883 1157708 cri.go:89] found id: ""
	I0318 13:57:54.769913 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.769923 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:57:54.769931 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:57:54.769996 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:57:54.808620 1157708 cri.go:89] found id: ""
	I0318 13:57:54.808648 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.808656 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:57:54.808661 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:57:54.808715 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:57:54.849207 1157708 cri.go:89] found id: ""
	I0318 13:57:54.849245 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.849256 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:57:54.849264 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:57:54.849334 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:57:54.918479 1157708 cri.go:89] found id: ""
	I0318 13:57:54.918508 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.918520 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:57:54.918528 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:57:54.918597 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:57:54.958828 1157708 cri.go:89] found id: ""
	I0318 13:57:54.958861 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.958871 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:57:54.958887 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:57:54.958906 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:57:55.078045 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:57:55.078092 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:57:55.123043 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:57:55.123077 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:57:55.180480 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:57:55.180518 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:57:55.197264 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:57:55.197316 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:57:55.291264 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0318 13:57:55.291325 1157708 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0318 13:57:55.291395 1157708 out.go:239] * 
	W0318 13:57:55.291477 1157708 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 13:57:55.291502 1157708 out.go:239] * 
	W0318 13:57:55.292511 1157708 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:57:55.295566 1157708 out.go:177] 
	W0318 13:57:55.296840 1157708 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 13:57:55.296903 1157708 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0318 13:57:55.296941 1157708 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0318 13:57:55.298417 1157708 out.go:177] 
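	Acting on the suggestion above would look roughly like the following; the profile name and the journalctl command are taken from the log, while the driver and runtime flags are assumed from this job's KVM/cri-o configuration rather than shown in the output:
	  minikube start -p old-k8s-version-909137 --driver=kvm2 --container-runtime=crio \
	    --extra-config=kubelet.cgroup-driver=systemd            # retry with the kubelet cgroup driver pinned to systemd, per the suggestion
	  minikube -p old-k8s-version-909137 ssh -- sudo journalctl -xeu kubelet   # inspect kubelet output inside the VM if it still fails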
	
	
	==> CRI-O <==
	Mar 18 13:57:57 old-k8s-version-909137 crio[647]: time="2024-03-18 13:57:57.071970059Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710770277071931535,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=32c32aa4-2e18-483d-8207-e66cf9bb5656 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:57:57 old-k8s-version-909137 crio[647]: time="2024-03-18 13:57:57.072410656Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e0c300ab-50c7-4b16-b763-a60a270b8be9 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:57:57 old-k8s-version-909137 crio[647]: time="2024-03-18 13:57:57.072483965Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e0c300ab-50c7-4b16-b763-a60a270b8be9 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:57:57 old-k8s-version-909137 crio[647]: time="2024-03-18 13:57:57.072518871Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e0c300ab-50c7-4b16-b763-a60a270b8be9 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:57:57 old-k8s-version-909137 crio[647]: time="2024-03-18 13:57:57.109548748Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=44fca330-83fa-411d-924f-423555f15403 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:57:57 old-k8s-version-909137 crio[647]: time="2024-03-18 13:57:57.109651322Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=44fca330-83fa-411d-924f-423555f15403 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:57:57 old-k8s-version-909137 crio[647]: time="2024-03-18 13:57:57.113213127Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=45c1b9f4-2c24-4ce9-ae59-905b4bd345fa name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:57:57 old-k8s-version-909137 crio[647]: time="2024-03-18 13:57:57.113800134Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710770277113662614,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=45c1b9f4-2c24-4ce9-ae59-905b4bd345fa name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:57:57 old-k8s-version-909137 crio[647]: time="2024-03-18 13:57:57.116456178Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e11838f3-04c8-4e7e-98da-ecbe223857d6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:57:57 old-k8s-version-909137 crio[647]: time="2024-03-18 13:57:57.116558512Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e11838f3-04c8-4e7e-98da-ecbe223857d6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:57:57 old-k8s-version-909137 crio[647]: time="2024-03-18 13:57:57.116615965Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e11838f3-04c8-4e7e-98da-ecbe223857d6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:57:57 old-k8s-version-909137 crio[647]: time="2024-03-18 13:57:57.154004833Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9b2b8b75-3bea-45b9-a414-eee67bd19583 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:57:57 old-k8s-version-909137 crio[647]: time="2024-03-18 13:57:57.154100437Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9b2b8b75-3bea-45b9-a414-eee67bd19583 name=/runtime.v1.RuntimeService/Version
	Mar 18 13:57:57 old-k8s-version-909137 crio[647]: time="2024-03-18 13:57:57.159371025Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6114d520-7890-4c66-b1ea-688a5d6e0e66 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:57:57 old-k8s-version-909137 crio[647]: time="2024-03-18 13:57:57.159705292Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710770277159685696,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6114d520-7890-4c66-b1ea-688a5d6e0e66 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:57:57 old-k8s-version-909137 crio[647]: time="2024-03-18 13:57:57.160351939Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cbd3ebe1-9fbd-4bf0-9ecf-c4f14a0eae04 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:57:57 old-k8s-version-909137 crio[647]: time="2024-03-18 13:57:57.160429532Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cbd3ebe1-9fbd-4bf0-9ecf-c4f14a0eae04 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:57:57 old-k8s-version-909137 crio[647]: time="2024-03-18 13:57:57.160475888Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=cbd3ebe1-9fbd-4bf0-9ecf-c4f14a0eae04 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:57:57 old-k8s-version-909137 crio[647]: time="2024-03-18 13:57:57.196330892Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=21a2c7da-f8db-4324-bb1c-7ccf9a27599d name=/runtime.v1.RuntimeService/Version
	Mar 18 13:57:57 old-k8s-version-909137 crio[647]: time="2024-03-18 13:57:57.196435170Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=21a2c7da-f8db-4324-bb1c-7ccf9a27599d name=/runtime.v1.RuntimeService/Version
	Mar 18 13:57:57 old-k8s-version-909137 crio[647]: time="2024-03-18 13:57:57.197631786Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c85112e6-6550-4f5a-a518-18ce7954505e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:57:57 old-k8s-version-909137 crio[647]: time="2024-03-18 13:57:57.198092614Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710770277198072273,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c85112e6-6550-4f5a-a518-18ce7954505e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 13:57:57 old-k8s-version-909137 crio[647]: time="2024-03-18 13:57:57.198539422Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7e40c0d6-b8d5-4e7c-b8ec-ab0512798b14 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:57:57 old-k8s-version-909137 crio[647]: time="2024-03-18 13:57:57.198615374Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7e40c0d6-b8d5-4e7c-b8ec-ab0512798b14 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 13:57:57 old-k8s-version-909137 crio[647]: time="2024-03-18 13:57:57.198650255Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=7e40c0d6-b8d5-4e7c-b8ec-ab0512798b14 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Mar18 13:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052261] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043383] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.666130] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.485262] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +2.465886] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.163261] systemd-fstab-generator[568]: Ignoring "noauto" option for root device
	[  +0.162544] systemd-fstab-generator[580]: Ignoring "noauto" option for root device
	[  +0.204190] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.135186] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.316905] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +7.427040] systemd-fstab-generator[835]: Ignoring "noauto" option for root device
	[  +0.071901] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.095612] systemd-fstab-generator[962]: Ignoring "noauto" option for root device
	[Mar18 13:50] kauditd_printk_skb: 46 callbacks suppressed
	[Mar18 13:54] systemd-fstab-generator[4988]: Ignoring "noauto" option for root device
	[Mar18 13:55] systemd-fstab-generator[5270]: Ignoring "noauto" option for root device
	[  +0.062731] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 13:57:57 up 8 min,  0 users,  load average: 0.12, 0.12, 0.09
	Linux old-k8s-version-909137 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Mar 18 13:57:54 old-k8s-version-909137 kubelet[5448]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001000c0, 0xc000beab40)
	Mar 18 13:57:54 old-k8s-version-909137 kubelet[5448]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Mar 18 13:57:54 old-k8s-version-909137 kubelet[5448]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Mar 18 13:57:54 old-k8s-version-909137 kubelet[5448]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Mar 18 13:57:54 old-k8s-version-909137 kubelet[5448]: goroutine 163 [select]:
	Mar 18 13:57:54 old-k8s-version-909137 kubelet[5448]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000b2def0, 0x4f0ac20, 0xc000bfc780, 0x1, 0xc0001000c0)
	Mar 18 13:57:54 old-k8s-version-909137 kubelet[5448]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Mar 18 13:57:54 old-k8s-version-909137 kubelet[5448]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000255180, 0xc0001000c0)
	Mar 18 13:57:54 old-k8s-version-909137 kubelet[5448]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Mar 18 13:57:54 old-k8s-version-909137 kubelet[5448]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Mar 18 13:57:54 old-k8s-version-909137 kubelet[5448]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Mar 18 13:57:54 old-k8s-version-909137 kubelet[5448]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000bc0840, 0xc000bd6fa0)
	Mar 18 13:57:54 old-k8s-version-909137 kubelet[5448]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Mar 18 13:57:54 old-k8s-version-909137 kubelet[5448]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Mar 18 13:57:54 old-k8s-version-909137 kubelet[5448]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Mar 18 13:57:54 old-k8s-version-909137 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Mar 18 13:57:54 old-k8s-version-909137 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Mar 18 13:57:55 old-k8s-version-909137 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Mar 18 13:57:55 old-k8s-version-909137 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Mar 18 13:57:55 old-k8s-version-909137 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Mar 18 13:57:55 old-k8s-version-909137 kubelet[5515]: I0318 13:57:55.639316    5515 server.go:416] Version: v1.20.0
	Mar 18 13:57:55 old-k8s-version-909137 kubelet[5515]: I0318 13:57:55.639786    5515 server.go:837] Client rotation is on, will bootstrap in background
	Mar 18 13:57:55 old-k8s-version-909137 kubelet[5515]: I0318 13:57:55.642436    5515 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Mar 18 13:57:55 old-k8s-version-909137 kubelet[5515]: W0318 13:57:55.643447    5515 manager.go:159] Cannot detect current cgroup on cgroup v2
	Mar 18 13:57:55 old-k8s-version-909137 kubelet[5515]: I0318 13:57:55.643687    5515 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-909137 -n old-k8s-version-909137
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-909137 -n old-k8s-version-909137: exit status 2 (256.095851ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-909137" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (765.80s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-569210 -n default-k8s-diff-port-569210
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-569210 -n default-k8s-diff-port-569210: exit status 3 (3.199914512s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 13:45:32.396749 1157787 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.3:22: connect: no route to host
	E0318 13:45:32.396774 1157787 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.3:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-569210 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-569210 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154793712s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.3:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-569210 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-569210 -n default-k8s-diff-port-569210
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-569210 -n default-k8s-diff-port-569210: exit status 3 (3.06062729s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0318 13:45:41.612796 1157857 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.3:22: connect: no route to host
	E0318 13:45:41.612816 1157857 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.3:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-569210" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.42s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-537236 -n no-preload-537236
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-03-18 14:04:01.943595645 +0000 UTC m=+6519.330509904
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-537236 -n no-preload-537236
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-537236 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-537236 logs -n 25: (2.084563316s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-909137                              | old-k8s-version-909137       | jenkins | v1.32.0 | 18 Mar 24 13:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-599578                           | kubernetes-upgrade-599578    | jenkins | v1.32.0 | 18 Mar 24 13:39 UTC | 18 Mar 24 13:39 UTC |
	| start   | -p no-preload-537236                                   | no-preload-537236            | jenkins | v1.32.0 | 18 Mar 24 13:39 UTC | 18 Mar 24 13:41 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p cert-expiration-537883                              | cert-expiration-537883       | jenkins | v1.32.0 | 18 Mar 24 13:40 UTC | 18 Mar 24 13:41 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p pause-760389                                        | pause-760389                 | jenkins | v1.32.0 | 18 Mar 24 13:40 UTC | 18 Mar 24 13:40 UTC |
	| start   | -p embed-certs-173036                                  | embed-certs-173036           | jenkins | v1.32.0 | 18 Mar 24 13:40 UTC | 18 Mar 24 13:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-537883                              | cert-expiration-537883       | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	| delete  | -p                                                     | disable-driver-mounts-173866 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	|         | disable-driver-mounts-173866                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-569210 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:42 UTC |
	|         | default-k8s-diff-port-569210                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-173036            | embed-certs-173036           | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-173036                                  | embed-certs-173036           | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-537236             | no-preload-537236            | jenkins | v1.32.0 | 18 Mar 24 13:42 UTC | 18 Mar 24 13:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-537236                                   | no-preload-537236            | jenkins | v1.32.0 | 18 Mar 24 13:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-569210  | default-k8s-diff-port-569210 | jenkins | v1.32.0 | 18 Mar 24 13:43 UTC | 18 Mar 24 13:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-569210 | jenkins | v1.32.0 | 18 Mar 24 13:43 UTC |                     |
	|         | default-k8s-diff-port-569210                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-909137        | old-k8s-version-909137       | jenkins | v1.32.0 | 18 Mar 24 13:43 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-173036                 | embed-certs-173036           | jenkins | v1.32.0 | 18 Mar 24 13:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-173036                                  | embed-certs-173036           | jenkins | v1.32.0 | 18 Mar 24 13:44 UTC | 18 Mar 24 13:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-537236                  | no-preload-537236            | jenkins | v1.32.0 | 18 Mar 24 13:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-537236                                   | no-preload-537236            | jenkins | v1.32.0 | 18 Mar 24 13:44 UTC | 18 Mar 24 13:55 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-909137                              | old-k8s-version-909137       | jenkins | v1.32.0 | 18 Mar 24 13:45 UTC | 18 Mar 24 13:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-909137             | old-k8s-version-909137       | jenkins | v1.32.0 | 18 Mar 24 13:45 UTC | 18 Mar 24 13:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-909137                              | old-k8s-version-909137       | jenkins | v1.32.0 | 18 Mar 24 13:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-569210       | default-k8s-diff-port-569210 | jenkins | v1.32.0 | 18 Mar 24 13:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-569210 | jenkins | v1.32.0 | 18 Mar 24 13:45 UTC | 18 Mar 24 13:55 UTC |
	|         | default-k8s-diff-port-569210                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 13:45:41
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 13:45:41.667747 1157887 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:45:41.667937 1157887 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:45:41.667952 1157887 out.go:304] Setting ErrFile to fd 2...
	I0318 13:45:41.667958 1157887 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:45:41.668616 1157887 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
	I0318 13:45:41.669251 1157887 out.go:298] Setting JSON to false
	I0318 13:45:41.670283 1157887 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":19689,"bootTime":1710749853,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 13:45:41.670349 1157887 start.go:139] virtualization: kvm guest
	I0318 13:45:41.672702 1157887 out.go:177] * [default-k8s-diff-port-569210] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 13:45:41.674325 1157887 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 13:45:41.674336 1157887 notify.go:220] Checking for updates...
	I0318 13:45:41.675874 1157887 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:45:41.677543 1157887 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 13:45:41.679053 1157887 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18429-1106816/.minikube
	I0318 13:45:41.680344 1157887 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 13:45:41.681702 1157887 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:45:41.683304 1157887 config.go:182] Loaded profile config "default-k8s-diff-port-569210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:45:41.683743 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:45:41.683792 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:45:41.698719 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44147
	I0318 13:45:41.699154 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:45:41.699657 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:45:41.699676 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:45:41.699995 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:45:41.700168 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:45:41.700488 1157887 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:45:41.700763 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:45:41.700803 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:45:41.715824 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44459
	I0318 13:45:41.716270 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:45:41.716688 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:45:41.716708 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:45:41.717004 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:45:41.717185 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:45:41.747564 1157887 out.go:177] * Using the kvm2 driver based on existing profile
	I0318 13:45:41.748930 1157887 start.go:297] selected driver: kvm2
	I0318 13:45:41.748944 1157887 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-569210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-569210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.3 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:45:41.749059 1157887 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:45:41.749725 1157887 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:45:41.749819 1157887 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18429-1106816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 13:45:41.764225 1157887 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 13:45:41.764607 1157887 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:45:41.764679 1157887 cni.go:84] Creating CNI manager for ""
	I0318 13:45:41.764692 1157887 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:45:41.764727 1157887 start.go:340] cluster config:
	{Name:default-k8s-diff-port-569210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-569210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.3 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:45:41.764824 1157887 iso.go:125] acquiring lock: {Name:mke5f9989ad60de6f54f25c411af7da9f3932a4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:45:41.766561 1157887 out.go:177] * Starting "default-k8s-diff-port-569210" primary control-plane node in "default-k8s-diff-port-569210" cluster
	I0318 13:45:40.044635 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:45:41.767747 1157887 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 13:45:41.767779 1157887 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0318 13:45:41.767799 1157887 cache.go:56] Caching tarball of preloaded images
	I0318 13:45:41.767876 1157887 preload.go:173] Found /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 13:45:41.767887 1157887 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0318 13:45:41.767986 1157887 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/config.json ...
	I0318 13:45:41.768151 1157887 start.go:360] acquireMachinesLock for default-k8s-diff-port-569210: {Name:mk0b1a2e71faf079d0c16c4e1393bdff17be3dfd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:45:46.124607 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:45:49.196561 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:45:55.276657 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:45:58.348606 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:04.428632 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:07.500592 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:13.584558 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:16.652578 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:22.732573 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:25.804745 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:31.884579 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:34.956708 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:41.036614 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:44.108576 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:50.188610 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:53.260646 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:59.340724 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:02.412698 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:08.492603 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:11.564634 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:17.644618 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:20.716642 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:26.796585 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:29.868690 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:35.948613 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:39.020607 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:45.104563 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:48.172547 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:54.252608 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:57.324659 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:03.404600 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:06.476647 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:12.556609 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:15.628640 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:21.708597 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:24.780572 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:30.860662 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:33.932528 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:40.012616 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:43.084569 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:49.164622 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:52.236652 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:58.316619 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:49:01.321139 1157416 start.go:364] duration metric: took 4m21.279664055s to acquireMachinesLock for "no-preload-537236"
	I0318 13:49:01.321252 1157416 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:49:01.321260 1157416 fix.go:54] fixHost starting: 
	I0318 13:49:01.321627 1157416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:49:01.321658 1157416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:49:01.337337 1157416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39431
	I0318 13:49:01.337793 1157416 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:49:01.338235 1157416 main.go:141] libmachine: Using API Version  1
	I0318 13:49:01.338262 1157416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:49:01.338703 1157416 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:49:01.338892 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:49:01.339025 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetState
	I0318 13:49:01.340630 1157416 fix.go:112] recreateIfNeeded on no-preload-537236: state=Stopped err=<nil>
	I0318 13:49:01.340653 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	W0318 13:49:01.340785 1157416 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:49:01.342565 1157416 out.go:177] * Restarting existing kvm2 VM for "no-preload-537236" ...
	I0318 13:49:01.318340 1157263 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:49:01.318378 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetMachineName
	I0318 13:49:01.318795 1157263 buildroot.go:166] provisioning hostname "embed-certs-173036"
	I0318 13:49:01.318829 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetMachineName
	I0318 13:49:01.319041 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:49:01.321007 1157263 machine.go:97] duration metric: took 4m37.382603693s to provisionDockerMachine
	I0318 13:49:01.321051 1157263 fix.go:56] duration metric: took 4m37.403420427s for fixHost
	I0318 13:49:01.321064 1157263 start.go:83] releasing machines lock for "embed-certs-173036", held for 4m37.403446357s
	W0318 13:49:01.321088 1157263 start.go:713] error starting host: provision: host is not running
	W0318 13:49:01.321225 1157263 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0318 13:49:01.321242 1157263 start.go:728] Will try again in 5 seconds ...
	I0318 13:49:01.343844 1157416 main.go:141] libmachine: (no-preload-537236) Calling .Start
	I0318 13:49:01.344003 1157416 main.go:141] libmachine: (no-preload-537236) Ensuring networks are active...
	I0318 13:49:01.344698 1157416 main.go:141] libmachine: (no-preload-537236) Ensuring network default is active
	I0318 13:49:01.345062 1157416 main.go:141] libmachine: (no-preload-537236) Ensuring network mk-no-preload-537236 is active
	I0318 13:49:01.345378 1157416 main.go:141] libmachine: (no-preload-537236) Getting domain xml...
	I0318 13:49:01.346073 1157416 main.go:141] libmachine: (no-preload-537236) Creating domain...
	I0318 13:49:02.522163 1157416 main.go:141] libmachine: (no-preload-537236) Waiting to get IP...
	I0318 13:49:02.522935 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:02.523347 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:02.523420 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:02.523327 1158392 retry.go:31] will retry after 276.248352ms: waiting for machine to come up
	I0318 13:49:02.800962 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:02.801439 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:02.801472 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:02.801381 1158392 retry.go:31] will retry after 318.94167ms: waiting for machine to come up
	I0318 13:49:03.121895 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:03.122276 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:03.122298 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:03.122254 1158392 retry.go:31] will retry after 353.742872ms: waiting for machine to come up
	I0318 13:49:03.477885 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:03.478401 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:03.478439 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:03.478360 1158392 retry.go:31] will retry after 481.537084ms: waiting for machine to come up
	I0318 13:49:03.960991 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:03.961432 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:03.961505 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:03.961416 1158392 retry.go:31] will retry after 647.244695ms: waiting for machine to come up
	I0318 13:49:04.610150 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:04.610563 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:04.610604 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:04.610512 1158392 retry.go:31] will retry after 577.22264ms: waiting for machine to come up
	I0318 13:49:06.321404 1157263 start.go:360] acquireMachinesLock for embed-certs-173036: {Name:mk0b1a2e71faf079d0c16c4e1393bdff17be3dfd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:49:05.189300 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:05.189688 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:05.189722 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:05.189635 1158392 retry.go:31] will retry after 1.064347528s: waiting for machine to come up
	I0318 13:49:06.255734 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:06.256071 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:06.256103 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:06.256016 1158392 retry.go:31] will retry after 1.359025709s: waiting for machine to come up
	I0318 13:49:07.616847 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:07.617313 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:07.617338 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:07.617265 1158392 retry.go:31] will retry after 1.844112s: waiting for machine to come up
	I0318 13:49:09.464239 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:09.464761 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:09.464788 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:09.464703 1158392 retry.go:31] will retry after 1.984375986s: waiting for machine to come up
	I0318 13:49:11.450609 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:11.451100 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:11.451153 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:11.451037 1158392 retry.go:31] will retry after 1.944733714s: waiting for machine to come up
	I0318 13:49:13.397815 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:13.398238 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:13.398265 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:13.398190 1158392 retry.go:31] will retry after 2.44494826s: waiting for machine to come up
	I0318 13:49:15.845711 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:15.846169 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:15.846212 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:15.846128 1158392 retry.go:31] will retry after 2.760857339s: waiting for machine to come up
	I0318 13:49:18.609516 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:18.609917 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:18.609942 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:18.609872 1158392 retry.go:31] will retry after 3.501792324s: waiting for machine to come up
	I0318 13:49:23.501689 1157708 start.go:364] duration metric: took 4m10.403284517s to acquireMachinesLock for "old-k8s-version-909137"
	I0318 13:49:23.501769 1157708 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:49:23.501783 1157708 fix.go:54] fixHost starting: 
	I0318 13:49:23.502238 1157708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:49:23.502279 1157708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:49:23.520223 1157708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41799
	I0318 13:49:23.520696 1157708 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:49:23.521273 1157708 main.go:141] libmachine: Using API Version  1
	I0318 13:49:23.521304 1157708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:49:23.521693 1157708 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:49:23.521934 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:49:23.522089 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetState
	I0318 13:49:23.523696 1157708 fix.go:112] recreateIfNeeded on old-k8s-version-909137: state=Stopped err=<nil>
	I0318 13:49:23.523738 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	W0318 13:49:23.523894 1157708 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:49:23.526253 1157708 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-909137" ...
	I0318 13:49:22.113291 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.113733 1157416 main.go:141] libmachine: (no-preload-537236) Found IP for machine: 192.168.39.7
	I0318 13:49:22.113753 1157416 main.go:141] libmachine: (no-preload-537236) Reserving static IP address...
	I0318 13:49:22.113787 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has current primary IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.114159 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "no-preload-537236", mac: "52:54:00:21:a8:12", ip: "192.168.39.7"} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.114179 1157416 main.go:141] libmachine: (no-preload-537236) DBG | skip adding static IP to network mk-no-preload-537236 - found existing host DHCP lease matching {name: "no-preload-537236", mac: "52:54:00:21:a8:12", ip: "192.168.39.7"}
	I0318 13:49:22.114192 1157416 main.go:141] libmachine: (no-preload-537236) Reserved static IP address: 192.168.39.7
	I0318 13:49:22.114201 1157416 main.go:141] libmachine: (no-preload-537236) Waiting for SSH to be available...
	I0318 13:49:22.114208 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Getting to WaitForSSH function...
	I0318 13:49:22.116603 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.116944 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.116971 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.117082 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Using SSH client type: external
	I0318 13:49:22.117153 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Using SSH private key: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa (-rw-------)
	I0318 13:49:22.117192 1157416 main.go:141] libmachine: (no-preload-537236) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.7 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 13:49:22.117212 1157416 main.go:141] libmachine: (no-preload-537236) DBG | About to run SSH command:
	I0318 13:49:22.117236 1157416 main.go:141] libmachine: (no-preload-537236) DBG | exit 0
	I0318 13:49:22.240543 1157416 main.go:141] libmachine: (no-preload-537236) DBG | SSH cmd err, output: <nil>: 
	I0318 13:49:22.240913 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetConfigRaw
	I0318 13:49:22.241611 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetIP
	I0318 13:49:22.244016 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.244273 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.244302 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.244506 1157416 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236/config.json ...
	I0318 13:49:22.244729 1157416 machine.go:94] provisionDockerMachine start ...
	I0318 13:49:22.244750 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:49:22.244947 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:22.246869 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.247160 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.247198 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.247246 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:22.247401 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.247546 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.247722 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:22.247893 1157416 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:22.248160 1157416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0318 13:49:22.248174 1157416 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 13:49:22.353134 1157416 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 13:49:22.353164 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetMachineName
	I0318 13:49:22.353435 1157416 buildroot.go:166] provisioning hostname "no-preload-537236"
	I0318 13:49:22.353463 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetMachineName
	I0318 13:49:22.353636 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:22.356058 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.356463 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.356491 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.356645 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:22.356846 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.356965 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.357068 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:22.357201 1157416 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:22.357415 1157416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0318 13:49:22.357434 1157416 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-537236 && echo "no-preload-537236" | sudo tee /etc/hostname
	I0318 13:49:22.477651 1157416 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-537236
	
	I0318 13:49:22.477692 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:22.480537 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.480876 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.480905 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.481135 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:22.481342 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.481520 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.481676 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:22.481887 1157416 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:22.482066 1157416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0318 13:49:22.482082 1157416 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-537236' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-537236/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-537236' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 13:49:22.599489 1157416 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:49:22.599566 1157416 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18429-1106816/.minikube CaCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18429-1106816/.minikube}
	I0318 13:49:22.599596 1157416 buildroot.go:174] setting up certificates
	I0318 13:49:22.599609 1157416 provision.go:84] configureAuth start
	I0318 13:49:22.599624 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetMachineName
	I0318 13:49:22.599981 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetIP
	I0318 13:49:22.602425 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.602800 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.602831 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.602986 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:22.605036 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.605331 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.605356 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.605500 1157416 provision.go:143] copyHostCerts
	I0318 13:49:22.605589 1157416 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem, removing ...
	I0318 13:49:22.605600 1157416 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 13:49:22.605665 1157416 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem (1078 bytes)
	I0318 13:49:22.605786 1157416 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem, removing ...
	I0318 13:49:22.605795 1157416 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 13:49:22.605820 1157416 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem (1123 bytes)
	I0318 13:49:22.605895 1157416 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem, removing ...
	I0318 13:49:22.605904 1157416 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 13:49:22.605927 1157416 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem (1679 bytes)
	I0318 13:49:22.606003 1157416 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem org=jenkins.no-preload-537236 san=[127.0.0.1 192.168.39.7 localhost minikube no-preload-537236]
	I0318 13:49:22.810156 1157416 provision.go:177] copyRemoteCerts
	I0318 13:49:22.810249 1157416 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 13:49:22.810283 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:22.813018 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.813343 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.813376 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.813557 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:22.813743 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.813890 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:22.814080 1157416 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa Username:docker}
	I0318 13:49:22.898886 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 13:49:22.926296 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0318 13:49:22.953260 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 13:49:22.981248 1157416 provision.go:87] duration metric: took 381.624842ms to configureAuth
	I0318 13:49:22.981281 1157416 buildroot.go:189] setting minikube options for container-runtime
	I0318 13:49:22.981459 1157416 config.go:182] Loaded profile config "no-preload-537236": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 13:49:22.981573 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:22.984446 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.984848 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.984885 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.985061 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:22.985269 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.985405 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.985595 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:22.985728 1157416 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:22.985911 1157416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0318 13:49:22.985925 1157416 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 13:49:23.259439 1157416 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 13:49:23.259470 1157416 machine.go:97] duration metric: took 1.014725867s to provisionDockerMachine
	I0318 13:49:23.259483 1157416 start.go:293] postStartSetup for "no-preload-537236" (driver="kvm2")
	I0318 13:49:23.259518 1157416 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 13:49:23.259553 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:49:23.259937 1157416 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 13:49:23.259976 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:23.262875 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.263196 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:23.263228 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.263403 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:23.263684 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:23.263861 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:23.264029 1157416 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa Username:docker}
	I0318 13:49:23.348815 1157416 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 13:49:23.353550 1157416 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 13:49:23.353582 1157416 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/addons for local assets ...
	I0318 13:49:23.353659 1157416 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/files for local assets ...
	I0318 13:49:23.353759 1157416 filesync.go:149] local asset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> 11141362.pem in /etc/ssl/certs
	I0318 13:49:23.353885 1157416 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 13:49:23.364831 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:49:23.391345 1157416 start.go:296] duration metric: took 131.846395ms for postStartSetup
	I0318 13:49:23.391396 1157416 fix.go:56] duration metric: took 22.070135111s for fixHost
	I0318 13:49:23.391423 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:23.394229 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.394543 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:23.394583 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.394685 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:23.394937 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:23.395111 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:23.395266 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:23.395433 1157416 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:23.395619 1157416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0318 13:49:23.395631 1157416 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 13:49:23.501504 1157416 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710769763.449975975
	
	I0318 13:49:23.501532 1157416 fix.go:216] guest clock: 1710769763.449975975
	I0318 13:49:23.501542 1157416 fix.go:229] Guest: 2024-03-18 13:49:23.449975975 +0000 UTC Remote: 2024-03-18 13:49:23.39140181 +0000 UTC m=+283.498114537 (delta=58.574165ms)
	I0318 13:49:23.501564 1157416 fix.go:200] guest clock delta is within tolerance: 58.574165ms
	I0318 13:49:23.501584 1157416 start.go:83] releasing machines lock for "no-preload-537236", held for 22.180386627s
	I0318 13:49:23.501612 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:49:23.501900 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetIP
	I0318 13:49:23.504693 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.505130 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:23.505159 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.505331 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:49:23.505889 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:49:23.506092 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:49:23.506198 1157416 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 13:49:23.506252 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:23.506317 1157416 ssh_runner.go:195] Run: cat /version.json
	I0318 13:49:23.506351 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:23.509104 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.509414 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.509446 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:23.509465 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.509625 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:23.509819 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:23.509839 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.509853 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:23.510043 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:23.510103 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:23.510207 1157416 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa Username:docker}
	I0318 13:49:23.510261 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:23.510394 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:23.510541 1157416 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa Username:docker}
	I0318 13:49:23.616831 1157416 ssh_runner.go:195] Run: systemctl --version
	I0318 13:49:23.624184 1157416 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 13:49:23.779709 1157416 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 13:49:23.786535 1157416 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 13:49:23.786594 1157416 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 13:49:23.805716 1157416 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 13:49:23.805743 1157416 start.go:494] detecting cgroup driver to use...
	I0318 13:49:23.805850 1157416 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 13:49:23.825572 1157416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:49:23.842762 1157416 docker.go:217] disabling cri-docker service (if available) ...
	I0318 13:49:23.842817 1157416 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 13:49:23.859385 1157416 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 13:49:23.876416 1157416 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 13:49:24.005995 1157416 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 13:49:24.193107 1157416 docker.go:233] disabling docker service ...
	I0318 13:49:24.193173 1157416 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 13:49:24.212825 1157416 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 13:49:24.230448 1157416 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 13:49:24.385445 1157416 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 13:49:24.548640 1157416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 13:49:24.564678 1157416 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:49:24.592528 1157416 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 13:49:24.592601 1157416 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:49:24.604303 1157416 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 13:49:24.604394 1157416 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:49:24.616123 1157416 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:49:24.627956 1157416 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:49:24.639194 1157416 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 13:49:24.650789 1157416 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 13:49:24.661390 1157416 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 13:49:24.661443 1157416 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 13:49:24.677180 1157416 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 13:49:24.687973 1157416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:49:24.827386 1157416 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 13:49:24.978805 1157416 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 13:49:24.978898 1157416 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 13:49:24.985647 1157416 start.go:562] Will wait 60s for crictl version
	I0318 13:49:24.985735 1157416 ssh_runner.go:195] Run: which crictl
	I0318 13:49:24.990325 1157416 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 13:49:25.038948 1157416 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 13:49:25.039020 1157416 ssh_runner.go:195] Run: crio --version
	I0318 13:49:25.068855 1157416 ssh_runner.go:195] Run: crio --version
	I0318 13:49:25.107104 1157416 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0318 13:49:23.527811 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .Start
	I0318 13:49:23.528000 1157708 main.go:141] libmachine: (old-k8s-version-909137) Ensuring networks are active...
	I0318 13:49:23.528714 1157708 main.go:141] libmachine: (old-k8s-version-909137) Ensuring network default is active
	I0318 13:49:23.529036 1157708 main.go:141] libmachine: (old-k8s-version-909137) Ensuring network mk-old-k8s-version-909137 is active
	I0318 13:49:23.529491 1157708 main.go:141] libmachine: (old-k8s-version-909137) Getting domain xml...
	I0318 13:49:23.530324 1157708 main.go:141] libmachine: (old-k8s-version-909137) Creating domain...
	I0318 13:49:24.765648 1157708 main.go:141] libmachine: (old-k8s-version-909137) Waiting to get IP...
	I0318 13:49:24.766664 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:24.767122 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:24.767182 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:24.767081 1158507 retry.go:31] will retry after 250.785143ms: waiting for machine to come up
	I0318 13:49:25.019755 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:25.020238 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:25.020273 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:25.020185 1158507 retry.go:31] will retry after 346.894257ms: waiting for machine to come up
	I0318 13:49:25.368815 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:25.369335 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:25.369372 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:25.369268 1158507 retry.go:31] will retry after 367.316359ms: waiting for machine to come up
	I0318 13:49:25.737835 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:25.738404 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:25.738438 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:25.738337 1158507 retry.go:31] will retry after 479.291041ms: waiting for machine to come up
	I0318 13:49:26.219103 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:26.219568 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:26.219599 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:26.219523 1158507 retry.go:31] will retry after 552.309382ms: waiting for machine to come up
	I0318 13:49:26.773363 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:26.773905 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:26.773935 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:26.773857 1158507 retry.go:31] will retry after 703.087388ms: waiting for machine to come up
	I0318 13:49:27.478730 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:27.479330 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:27.479363 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:27.479270 1158507 retry.go:31] will retry after 1.136606935s: waiting for machine to come up
	I0318 13:49:25.108504 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetIP
	I0318 13:49:25.111416 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:25.111795 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:25.111827 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:25.112035 1157416 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0318 13:49:25.116688 1157416 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:49:25.131526 1157416 kubeadm.go:877] updating cluster {Name:no-preload-537236 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-537236 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 13:49:25.131663 1157416 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0318 13:49:25.131698 1157416 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:49:25.176340 1157416 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0318 13:49:25.176378 1157416 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 13:49:25.176474 1157416 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:49:25.176487 1157416 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 13:49:25.176524 1157416 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 13:49:25.176537 1157416 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 13:49:25.176592 1157416 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0318 13:49:25.176619 1157416 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 13:49:25.176773 1157416 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0318 13:49:25.176789 1157416 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 13:49:25.178479 1157416 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 13:49:25.178485 1157416 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 13:49:25.178486 1157416 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 13:49:25.178488 1157416 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 13:49:25.178480 1157416 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0318 13:49:25.178479 1157416 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:49:25.178540 1157416 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0318 13:49:25.178911 1157416 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 13:49:25.334172 1157416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 13:49:25.334873 1157416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0318 13:49:25.338330 1157416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 13:49:25.338825 1157416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0318 13:49:25.340192 1157416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 13:49:25.350053 1157416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0318 13:49:25.356621 1157416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 13:49:25.472528 1157416 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0318 13:49:25.472571 1157416 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 13:49:25.472627 1157416 ssh_runner.go:195] Run: which crictl
	I0318 13:49:25.630923 1157416 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0318 13:49:25.630996 1157416 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 13:49:25.631001 1157416 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0318 13:49:25.631042 1157416 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 13:49:25.630933 1157416 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0318 13:49:25.631089 1157416 ssh_runner.go:195] Run: which crictl
	I0318 13:49:25.631102 1157416 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0318 13:49:25.631134 1157416 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0318 13:49:25.631107 1157416 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 13:49:25.631169 1157416 ssh_runner.go:195] Run: which crictl
	I0318 13:49:25.631183 1157416 ssh_runner.go:195] Run: which crictl
	I0318 13:49:25.631052 1157416 ssh_runner.go:195] Run: which crictl
	I0318 13:49:25.631199 1157416 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0318 13:49:25.631220 1157416 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 13:49:25.631233 1157416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 13:49:25.631264 1157416 ssh_runner.go:195] Run: which crictl
	I0318 13:49:25.642598 1157416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 13:49:25.708001 1157416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 13:49:25.708026 1157416 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0318 13:49:25.708068 1157416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 13:49:25.708003 1157416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0318 13:49:25.708129 1157416 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 13:49:25.708162 1157416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0318 13:49:25.708225 1157416 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0318 13:49:25.708286 1157416 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 13:49:25.790492 1157416 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0318 13:49:25.790623 1157416 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 13:49:25.804436 1157416 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0318 13:49:25.804465 1157416 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 13:49:25.804503 1157416 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0318 13:49:25.804532 1157416 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 13:49:25.804583 1157416 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0318 13:49:25.804657 1157416 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0318 13:49:25.804684 1157416 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0318 13:49:25.804720 1157416 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0318 13:49:25.804768 1157416 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0318 13:49:25.804801 1157416 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0318 13:49:25.807681 1157416 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0318 13:49:26.162719 1157416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:49:27.887846 1157416 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.083277557s)
	I0318 13:49:27.887882 1157416 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0: (2.083274384s)
	I0318 13:49:27.887894 1157416 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0318 13:49:27.887916 1157416 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0318 13:49:27.887927 1157416 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 13:49:27.887944 1157416 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (2.083121634s)
	I0318 13:49:27.887971 1157416 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0318 13:49:27.887971 1157416 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.083181595s)
	I0318 13:49:27.887990 1157416 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0318 13:49:27.888003 1157416 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.725256044s)
	I0318 13:49:27.888008 1157416 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 13:49:27.888040 1157416 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0318 13:49:27.888080 1157416 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:49:27.888114 1157416 ssh_runner.go:195] Run: which crictl
	I0318 13:49:27.893415 1157416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
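The cache_images lines above show the pattern minikube uses to populate the crio runtime on the no-preload node: inspect the image ID already in the runtime, remove it with crictl rmi when it does not match the expected hash, stat the cached tarball under /var/lib/minikube/images, and only then podman load it. A minimal local sketch of that check-then-load decision in Go, assuming a hypothetical runCmd helper in place of minikube's ssh_runner (the image ID and paths are copied from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runCmd is a stand-in for minikube's ssh_runner: here it simply runs the
// command locally and returns trimmed stdout.
func runCmd(args ...string) (string, error) {
	out, err := exec.Command(args[0], args[1:]...).Output()
	return strings.TrimSpace(string(out)), err
}

// ensureImage mirrors the pattern in the log: inspect the image ID in the
// runtime, remove it if it does not match the expected hash, then load the
// cached tarball with podman.
func ensureImage(image, wantID, tarball string) error {
	gotID, _ := runCmd("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image)
	if gotID == wantID {
		return nil // already present at the right hash, nothing to transfer
	}
	if gotID != "" {
		// stale copy: remove it before loading the cached one
		if _, err := runCmd("sudo", "crictl", "rmi", image); err != nil {
			return fmt.Errorf("rmi %s: %w", image, err)
		}
	}
	// tarball is expected to have been copied to /var/lib/minikube/images already
	if _, err := runCmd("sudo", "podman", "load", "-i", tarball); err != nil {
		return fmt.Errorf("podman load %s: %w", tarball, err)
	}
	return nil
}

func main() {
	_ = ensureImage("gcr.io/k8s-minikube/storage-provisioner:v5",
		"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
		"/var/lib/minikube/images/storage-provisioner_v5")
}

The real code runs the same commands over SSH, which is why each step is followed by a Completed line with its duration.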
	I0318 13:49:28.617273 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:28.617711 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:28.617740 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:28.617665 1158507 retry.go:31] will retry after 947.818334ms: waiting for machine to come up
	I0318 13:49:29.566814 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:29.567157 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:29.567177 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:29.567121 1158507 retry.go:31] will retry after 1.328243934s: waiting for machine to come up
	I0318 13:49:30.897514 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:30.898041 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:30.898068 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:30.897988 1158507 retry.go:31] will retry after 2.213855703s: waiting for machine to come up
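The retry.go lines in the old-k8s-version-909137 block come from libmachine polling for the VM's DHCP lease and sleeping a little longer after each miss (947ms, 1.3s, 2.2s, ...). A rough sketch of that poll-with-growing-backoff loop; lookupIP is hypothetical and stands in for the kvm2 driver's lease lookup:

package main

import (
	"errors"
	"fmt"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// lookupIP is hypothetical; the real driver inspects the libvirt network's
// DHCP leases for the domain's MAC address.
func lookupIP(domain string) (string, error) {
	return "", errNoLease
}

// waitForIP polls until the machine has an address, waiting a bit longer
// between attempts, as in the "will retry after ..." log lines.
func waitForIP(domain string, attempts int) (string, error) {
	delay := 500 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookupIP(domain); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %s: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay += delay / 2 // grow the backoff on each miss
	}
	return "", fmt.Errorf("%s never acquired an IP", domain)
}

func main() {
	if _, err := waitForIP("old-k8s-version-909137", 3); err != nil {
		fmt.Println(err)
	}
}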
	I0318 13:49:30.272393 1157416 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.384351202s)
	I0318 13:49:30.272442 1157416 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0318 13:49:30.272459 1157416 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.379011748s)
	I0318 13:49:30.272477 1157416 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 13:49:30.272508 1157416 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0318 13:49:30.272589 1157416 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 13:49:30.272623 1157416 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0318 13:49:32.857821 1157416 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.585192694s)
	I0318 13:49:32.857907 1157416 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.585263486s)
	I0318 13:49:32.857990 1157416 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0318 13:49:32.857918 1157416 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0318 13:49:32.858038 1157416 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0318 13:49:32.858097 1157416 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0318 13:49:33.113781 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:33.114303 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:33.114332 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:33.114245 1158507 retry.go:31] will retry after 2.075415123s: waiting for machine to come up
	I0318 13:49:35.191096 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:35.191631 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:35.191665 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:35.191582 1158507 retry.go:31] will retry after 3.520577528s: waiting for machine to come up
	I0318 13:49:36.677356 1157416 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.8192286s)
	I0318 13:49:36.677398 1157416 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0318 13:49:36.677423 1157416 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0318 13:49:36.677464 1157416 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0318 13:49:38.844843 1157416 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.167353366s)
	I0318 13:49:38.844895 1157416 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0318 13:49:38.844933 1157416 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0318 13:49:38.845020 1157416 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0318 13:49:38.713777 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:38.714129 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:38.714242 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:38.714143 1158507 retry.go:31] will retry after 3.46520277s: waiting for machine to come up
	I0318 13:49:42.181399 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.181856 1157708 main.go:141] libmachine: (old-k8s-version-909137) Found IP for machine: 192.168.72.135
	I0318 13:49:42.181888 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has current primary IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.181897 1157708 main.go:141] libmachine: (old-k8s-version-909137) Reserving static IP address...
	I0318 13:49:42.182344 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "old-k8s-version-909137", mac: "52:54:00:58:c0:cb", ip: "192.168.72.135"} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.182387 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | skip adding static IP to network mk-old-k8s-version-909137 - found existing host DHCP lease matching {name: "old-k8s-version-909137", mac: "52:54:00:58:c0:cb", ip: "192.168.72.135"}
	I0318 13:49:42.182424 1157708 main.go:141] libmachine: (old-k8s-version-909137) Reserved static IP address: 192.168.72.135
	I0318 13:49:42.182453 1157708 main.go:141] libmachine: (old-k8s-version-909137) Waiting for SSH to be available...
	I0318 13:49:42.182470 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | Getting to WaitForSSH function...
	I0318 13:49:42.184589 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.184958 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.184999 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.185061 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | Using SSH client type: external
	I0318 13:49:42.185120 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | Using SSH private key: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/id_rsa (-rw-------)
	I0318 13:49:42.185162 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.135 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 13:49:42.185189 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | About to run SSH command:
	I0318 13:49:42.185204 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | exit 0
	I0318 13:49:42.312570 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | SSH cmd err, output: <nil>: 
	I0318 13:49:42.313005 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetConfigRaw
	I0318 13:49:42.313693 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetIP
	I0318 13:49:42.316497 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.316931 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.316965 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.317239 1157708 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/config.json ...
	I0318 13:49:42.317442 1157708 machine.go:94] provisionDockerMachine start ...
	I0318 13:49:42.317462 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:49:42.317688 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:42.320076 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.320444 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.320485 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.320655 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:42.320818 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:42.320980 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:42.321093 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:42.321257 1157708 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:42.321510 1157708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.135 22 <nil> <nil>}
	I0318 13:49:42.321528 1157708 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 13:49:42.433138 1157708 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 13:49:42.433186 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetMachineName
	I0318 13:49:42.433524 1157708 buildroot.go:166] provisioning hostname "old-k8s-version-909137"
	I0318 13:49:42.433558 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetMachineName
	I0318 13:49:42.433808 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:42.436869 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.437230 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.437264 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.437506 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:42.437739 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:42.437915 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:42.438092 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:42.438285 1157708 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:42.438513 1157708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.135 22 <nil> <nil>}
	I0318 13:49:42.438534 1157708 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-909137 && echo "old-k8s-version-909137" | sudo tee /etc/hostname
	I0318 13:49:42.560410 1157708 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-909137
	
	I0318 13:49:42.560439 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:42.563304 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.563637 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.563673 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.563837 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:42.564053 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:42.564236 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:42.564377 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:42.564581 1157708 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:42.564802 1157708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.135 22 <nil> <nil>}
	I0318 13:49:42.564820 1157708 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-909137' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-909137/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-909137' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 13:49:42.687138 1157708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
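Hostname provisioning above is done in two SSH commands: set the hostname and write /etc/hostname, then patch /etc/hosts so 127.0.1.1 maps to the new name only when no matching entry exists yet. A small sketch that just builds those two command strings (plain string formatting, not minikube's provisioner):

package main

import "fmt"

// hostnameCommands returns the two commands seen in the log: one to set the
// hostname, one to make sure /etc/hosts has a matching 127.0.1.1 entry.
func hostnameCommands(name string) (setHostname, patchHosts string) {
	setHostname = fmt.Sprintf(
		"sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name)
	patchHosts = fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, name)
	return setHostname, patchHosts
}

func main() {
	set, patch := hostnameCommands("old-k8s-version-909137")
	fmt.Println(set)
	fmt.Println(patch)
}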
	I0318 13:49:42.687173 1157708 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18429-1106816/.minikube CaCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18429-1106816/.minikube}
	I0318 13:49:42.687199 1157708 buildroot.go:174] setting up certificates
	I0318 13:49:42.687211 1157708 provision.go:84] configureAuth start
	I0318 13:49:42.687223 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetMachineName
	I0318 13:49:42.687600 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetIP
	I0318 13:49:42.690738 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.691148 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.691179 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.691316 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:42.693730 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.694070 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.694092 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.694255 1157708 provision.go:143] copyHostCerts
	I0318 13:49:42.694336 1157708 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem, removing ...
	I0318 13:49:42.694350 1157708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 13:49:42.694422 1157708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem (1078 bytes)
	I0318 13:49:42.694597 1157708 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem, removing ...
	I0318 13:49:42.694614 1157708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 13:49:42.694652 1157708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem (1123 bytes)
	I0318 13:49:42.694747 1157708 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem, removing ...
	I0318 13:49:42.694756 1157708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 13:49:42.694775 1157708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem (1679 bytes)
	I0318 13:49:42.694823 1157708 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-909137 san=[127.0.0.1 192.168.72.135 localhost minikube old-k8s-version-909137]
	I0318 13:49:42.920182 1157708 provision.go:177] copyRemoteCerts
	I0318 13:49:42.920255 1157708 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 13:49:42.920295 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:42.923074 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.923374 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.923408 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.923533 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:42.923755 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:42.923957 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:42.924095 1157708 sshutil.go:53] new ssh client: &{IP:192.168.72.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/id_rsa Username:docker}
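copyRemoteCerts pushes ca.pem, server.pem and server-key.pem into /etc/docker on the guest over the same key-authenticated SSH session logged above. A sketch of one such push using golang.org/x/crypto/ssh and a remote sudo tee; the ~/.minikube paths, the docker user and the 192.168.72.135 address come from the log, while the tee-based copy itself is an assumption about how such a transfer could be written, not minikube's actual sshutil code:

package main

import (
	"bytes"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// pushFile copies local bytes to a remote path by piping them into
// `sudo tee`, roughly what a provisioning cert copy needs to do.
func pushFile(client *ssh.Client, data []byte, remotePath string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	return sess.Run(fmt.Sprintf("sudo mkdir -p /etc/docker && sudo tee %s >/dev/null", remotePath))
}

func main() {
	key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/old-k8s-version-909137/id_rsa"))
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "192.168.72.135:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	ca, _ := os.ReadFile(os.ExpandEnv("$HOME/.minikube/certs/ca.pem"))
	if err := pushFile(client, ca, "/etc/docker/ca.pem"); err != nil {
		panic(err)
	}
}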
	I0318 13:49:43.649771 1157887 start.go:364] duration metric: took 4m1.881584436s to acquireMachinesLock for "default-k8s-diff-port-569210"
	I0318 13:49:43.649850 1157887 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:49:43.649868 1157887 fix.go:54] fixHost starting: 
	I0318 13:49:43.650335 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:49:43.650378 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:49:43.668606 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36261
	I0318 13:49:43.669107 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:49:43.669721 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:49:43.669755 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:49:43.670092 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:49:43.670269 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:49:43.670427 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetState
	I0318 13:49:43.671973 1157887 fix.go:112] recreateIfNeeded on default-k8s-diff-port-569210: state=Stopped err=<nil>
	I0318 13:49:43.672021 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	W0318 13:49:43.672150 1157887 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:49:43.673832 1157887 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-569210" ...
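The default-k8s-diff-port-569210 block shows fixHost: after waiting roughly four minutes on the machines lock, the existing VM is found in state Stopped and is restarted rather than recreated. A compact sketch of that decision, with a hypothetical Machine interface standing in for the libmachine driver:

package main

import "fmt"

// Machine is a stand-in for the libmachine driver API used in the log.
type Machine interface {
	State() (string, error)
	Start() error
	Remove() error
}

// fixHost restarts a stopped machine and only falls back to recreating it
// when the driver cannot report a usable state.
func fixHost(name string, m Machine) error {
	st, err := m.State()
	if err != nil {
		fmt.Printf("unexpected machine state for %q, recreating: %v\n", name, err)
		if err := m.Remove(); err != nil {
			return err
		}
		// a real implementation would create a fresh machine here
		return nil
	}
	if st == "Stopped" {
		fmt.Printf("* Restarting existing kvm2 VM for %q ...\n", name)
		return m.Start()
	}
	return nil // already running, nothing to fix
}

type fakeVM struct{ state string }

func (f *fakeVM) State() (string, error) { return f.state, nil }
func (f *fakeVM) Start() error           { f.state = "Running"; return nil }
func (f *fakeVM) Remove() error          { f.state = "Gone"; return nil }

func main() {
	_ = fixHost("default-k8s-diff-port-569210", &fakeVM{state: "Stopped"})
}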
	I0318 13:49:40.621208 1157416 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (1.776156882s)
	I0318 13:49:40.621252 1157416 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0318 13:49:40.621281 1157416 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0318 13:49:40.621322 1157416 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0318 13:49:41.582256 1157416 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0318 13:49:41.582316 1157416 cache_images.go:123] Successfully loaded all cached images
	I0318 13:49:41.582324 1157416 cache_images.go:92] duration metric: took 16.405930257s to LoadCachedImages
	I0318 13:49:41.582341 1157416 kubeadm.go:928] updating node { 192.168.39.7 8443 v1.29.0-rc.2 crio true true} ...
	I0318 13:49:41.582550 1157416 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-537236 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-537236 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 13:49:41.582663 1157416 ssh_runner.go:195] Run: crio config
	I0318 13:49:41.635043 1157416 cni.go:84] Creating CNI manager for ""
	I0318 13:49:41.635074 1157416 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:49:41.635093 1157416 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 13:49:41.635128 1157416 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.7 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-537236 NodeName:no-preload-537236 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 13:49:41.635322 1157416 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.7
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-537236"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.7
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.7"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 13:49:41.635446 1157416 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0318 13:49:41.647072 1157416 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 13:49:41.647148 1157416 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 13:49:41.657448 1157416 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0318 13:49:41.675819 1157416 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0318 13:49:41.693989 1157416 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
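The "scp memory -->" lines above are not file-to-file copies: the kubelet drop-in, the systemd unit and the rendered kubeadm.yaml are generated in memory and written straight to the node, with their byte counts logged. A sketch of the same idea as a plain local write (the SSH transport is omitted and the /tmp path is made up for the example):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// writeMemory writes generated configuration bytes to disk and reports the
// size, mirroring the "scp memory --> path (N bytes)" log lines.
func writeMemory(data []byte, path string) error {
	if err := os.MkdirAll(filepath.Dir(path), 0o755); err != nil {
		return err
	}
	if err := os.WriteFile(path, data, 0o644); err != nil {
		return err
	}
	fmt.Printf("scp memory --> %s (%d bytes)\n", path, len(data))
	return nil
}

func main() {
	kubeadmYAML := []byte("apiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\n")
	if err := writeMemory(kubeadmYAML, "/tmp/minikube-example/kubeadm.yaml.new"); err != nil {
		panic(err)
	}
}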
	I0318 13:49:41.714954 1157416 ssh_runner.go:195] Run: grep 192.168.39.7	control-plane.minikube.internal$ /etc/hosts
	I0318 13:49:41.719161 1157416 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.7	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:49:41.732228 1157416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:49:41.871286 1157416 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:49:41.892827 1157416 certs.go:68] Setting up /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236 for IP: 192.168.39.7
	I0318 13:49:41.892850 1157416 certs.go:194] generating shared ca certs ...
	I0318 13:49:41.892868 1157416 certs.go:226] acquiring lock for ca certs: {Name:mka24dbce78b8549c3bcfe7842863cce2dcda207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:49:41.893054 1157416 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key
	I0318 13:49:41.893110 1157416 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key
	I0318 13:49:41.893125 1157416 certs.go:256] generating profile certs ...
	I0318 13:49:41.893246 1157416 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236/client.key
	I0318 13:49:41.893317 1157416 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236/apiserver.key.844e83a6
	I0318 13:49:41.893366 1157416 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236/proxy-client.key
	I0318 13:49:41.893482 1157416 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem (1338 bytes)
	W0318 13:49:41.893518 1157416 certs.go:480] ignoring /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136_empty.pem, impossibly tiny 0 bytes
	I0318 13:49:41.893528 1157416 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 13:49:41.893552 1157416 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem (1078 bytes)
	I0318 13:49:41.893573 1157416 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem (1123 bytes)
	I0318 13:49:41.893594 1157416 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem (1679 bytes)
	I0318 13:49:41.893628 1157416 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:49:41.894503 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 13:49:41.942278 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 13:49:41.978436 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 13:49:42.007161 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 13:49:42.036410 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0318 13:49:42.073179 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 13:49:42.098201 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 13:49:42.131599 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 13:49:42.159159 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem --> /usr/share/ca-certificates/1114136.pem (1338 bytes)
	I0318 13:49:42.186290 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /usr/share/ca-certificates/11141362.pem (1708 bytes)
	I0318 13:49:42.214362 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 13:49:42.241240 1157416 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 13:49:42.260511 1157416 ssh_runner.go:195] Run: openssl version
	I0318 13:49:42.267047 1157416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1114136.pem && ln -fs /usr/share/ca-certificates/1114136.pem /etc/ssl/certs/1114136.pem"
	I0318 13:49:42.278582 1157416 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1114136.pem
	I0318 13:49:42.283566 1157416 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:26 /usr/share/ca-certificates/1114136.pem
	I0318 13:49:42.283609 1157416 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1114136.pem
	I0318 13:49:42.289658 1157416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1114136.pem /etc/ssl/certs/51391683.0"
	I0318 13:49:42.300954 1157416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11141362.pem && ln -fs /usr/share/ca-certificates/11141362.pem /etc/ssl/certs/11141362.pem"
	I0318 13:49:42.312828 1157416 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11141362.pem
	I0318 13:49:42.319182 1157416 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:26 /usr/share/ca-certificates/11141362.pem
	I0318 13:49:42.319251 1157416 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11141362.pem
	I0318 13:49:42.325767 1157416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11141362.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 13:49:42.337544 1157416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 13:49:42.349053 1157416 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:49:42.354197 1157416 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:49:42.354249 1157416 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:49:42.361200 1157416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
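Each CA above is installed into the system trust store under its OpenSSL subject hash (for example b5213941.0 for minikubeCA), which is the name OpenSSL-based clients use to find it. A sketch of deriving that link name by shelling out to openssl x509 -hash -noout, the same command the node runs in the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHashLink returns the /etc/ssl/certs symlink name for a CA file,
// using the same `openssl x509 -hash -noout` call seen in the log.
func subjectHashLink(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	return "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0", nil
}

func main() {
	link, err := subjectHashLink("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Println(err)
		return
	}
	// the node then runs: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem <link>
	fmt.Println(link)
}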
	I0318 13:49:42.374825 1157416 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:49:42.380098 1157416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 13:49:42.387161 1157416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 13:49:42.393702 1157416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 13:49:42.400193 1157416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 13:49:42.406243 1157416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 13:49:42.412423 1157416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
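The series of openssl x509 -noout -checkend 86400 calls asks whether each control-plane certificate expires within the next 24 hours (86400 seconds); a non-zero exit would force regeneration. The same check can be done with Go's crypto/x509; a sketch assuming PEM-encoded certificates at the logged paths:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the Go equivalent of `openssl x509 -noout -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		fmt.Println(p, soon, err)
	}
}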
	I0318 13:49:42.418599 1157416 kubeadm.go:391] StartCluster: {Name:no-preload-537236 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.0-rc.2 ClusterName:no-preload-537236 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:49:42.418747 1157416 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 13:49:42.418785 1157416 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:49:42.468980 1157416 cri.go:89] found id: ""
	I0318 13:49:42.469088 1157416 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 13:49:42.481101 1157416 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 13:49:42.481130 1157416 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 13:49:42.481137 1157416 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 13:49:42.481190 1157416 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 13:49:42.493014 1157416 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:49:42.494041 1157416 kubeconfig.go:125] found "no-preload-537236" server: "https://192.168.39.7:8443"
	I0318 13:49:42.496519 1157416 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 13:49:42.507415 1157416 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.7
	I0318 13:49:42.507448 1157416 kubeadm.go:1154] stopping kube-system containers ...
	I0318 13:49:42.507460 1157416 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 13:49:42.507513 1157416 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:49:42.554791 1157416 cri.go:89] found id: ""
	I0318 13:49:42.554859 1157416 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 13:49:42.574054 1157416 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:49:42.584928 1157416 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:49:42.584955 1157416 kubeadm.go:156] found existing configuration files:
	
	I0318 13:49:42.585009 1157416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:49:42.594987 1157416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:49:42.595045 1157416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:49:42.605058 1157416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:49:42.614968 1157416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:49:42.615042 1157416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:49:42.625169 1157416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:49:42.634838 1157416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:49:42.634905 1157416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:49:42.644785 1157416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:49:42.654196 1157416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:49:42.654254 1157416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:49:42.663757 1157416 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
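The block above is the restart path's stale-config sweep: each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf is grepped for the expected https://control-plane.minikube.internal:8443 server line and removed when the endpoint is missing (here the greps fail simply because the files do not exist yet). A sketch of the same sweep over local files, assuming direct file access instead of the SSH runner:

package main

import (
	"bytes"
	"fmt"
	"os"
	"path/filepath"
)

// cleanStaleKubeconfigs removes any kubeconfig that does not point at the
// expected control-plane endpoint, matching the grep-then-rm pattern above.
func cleanStaleKubeconfigs(dir, endpoint string) {
	for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := filepath.Join(dir, name)
		data, err := os.ReadFile(path)
		if err != nil {
			continue // missing file: nothing to clean, kubeadm will regenerate it
		}
		if !bytes.Contains(data, []byte(endpoint)) {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
			_ = os.Remove(path)
		}
	}
}

func main() {
	cleanStaleKubeconfigs("/etc/kubernetes", "https://control-plane.minikube.internal:8443")
}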
	I0318 13:49:42.673956 1157416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:42.792913 1157416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:43.799012 1157416 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.006050828s)
	I0318 13:49:43.799075 1157416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:44.061808 1157416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:44.189349 1157416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:44.329800 1157416 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:49:44.329897 1157416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:44.829990 1157416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
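After the kubeadm init phases, the restart path polls sudo pgrep -xnf kube-apiserver.*minikube.* until an apiserver process appears. A sketch of that wait loop with a deadline; the timeout and poll interval are assumptions for the example, not minikube's values:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until a kube-apiserver process for this
// minikube profile appears, or the deadline passes.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// -x: match the whole command line, -n: newest match, -f: full args
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil // pgrep exits 0 once a matching process exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServerProcess(30 * time.Second); err != nil {
		fmt.Println(err)
	}
}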
	I0318 13:49:43.007024 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 13:49:43.033952 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0318 13:49:43.060218 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 13:49:43.086087 1157708 provision.go:87] duration metric: took 398.861833ms to configureAuth
	I0318 13:49:43.086116 1157708 buildroot.go:189] setting minikube options for container-runtime
	I0318 13:49:43.086326 1157708 config.go:182] Loaded profile config "old-k8s-version-909137": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0318 13:49:43.086442 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:43.089200 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.089534 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:43.089562 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.089758 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:43.089965 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:43.090134 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:43.090286 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:43.090501 1157708 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:43.090718 1157708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.135 22 <nil> <nil>}
	I0318 13:49:43.090744 1157708 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 13:49:43.401681 1157708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 13:49:43.401715 1157708 machine.go:97] duration metric: took 1.084258164s to provisionDockerMachine
	I0318 13:49:43.401728 1157708 start.go:293] postStartSetup for "old-k8s-version-909137" (driver="kvm2")
	I0318 13:49:43.401739 1157708 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 13:49:43.401759 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:49:43.402073 1157708 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 13:49:43.402116 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:43.404775 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.405164 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:43.405192 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.405335 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:43.405525 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:43.405740 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:43.405884 1157708 sshutil.go:53] new ssh client: &{IP:192.168.72.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/id_rsa Username:docker}
	I0318 13:49:43.493000 1157708 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 13:49:43.497705 1157708 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 13:49:43.497740 1157708 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/addons for local assets ...
	I0318 13:49:43.497818 1157708 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/files for local assets ...
	I0318 13:49:43.497931 1157708 filesync.go:149] local asset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> 11141362.pem in /etc/ssl/certs
	I0318 13:49:43.498058 1157708 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 13:49:43.509185 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:49:43.535401 1157708 start.go:296] duration metric: took 133.657179ms for postStartSetup
	I0318 13:49:43.535454 1157708 fix.go:56] duration metric: took 20.033670705s for fixHost
	I0318 13:49:43.535482 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:43.538464 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.538964 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:43.538998 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.539178 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:43.539386 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:43.539528 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:43.539702 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:43.539899 1157708 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:43.540120 1157708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.135 22 <nil> <nil>}
	I0318 13:49:43.540133 1157708 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 13:49:43.649578 1157708 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710769783.596310102
	
	I0318 13:49:43.649610 1157708 fix.go:216] guest clock: 1710769783.596310102
	I0318 13:49:43.649621 1157708 fix.go:229] Guest: 2024-03-18 13:49:43.596310102 +0000 UTC Remote: 2024-03-18 13:49:43.535459129 +0000 UTC m=+270.592972067 (delta=60.850973ms)
	I0318 13:49:43.649656 1157708 fix.go:200] guest clock delta is within tolerance: 60.850973ms
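The fix step reads the guest clock over SSH with date +%s.%N, compares it to the host clock, and accepts the 60.850973ms delta as within tolerance. The same comparison can be reproduced by hand roughly like this (profile name taken from this run; the awk arithmetic is only illustrative):

	# compare host and guest clocks the way the guest-clock check does (illustrative sketch)
	host=$(date +%s.%N)
	guest=$(minikube ssh -p old-k8s-version-909137 -- date +%s.%N)
	awk -v h="$host" -v g="$guest" 'BEGIN { printf "delta: %.3f ms\n", (g - h) * 1000 }'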
	I0318 13:49:43.649663 1157708 start.go:83] releasing machines lock for "old-k8s-version-909137", held for 20.147918331s
	I0318 13:49:43.649689 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:49:43.650002 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetIP
	I0318 13:49:43.652712 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.653114 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:43.653148 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.653278 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:49:43.653873 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:49:43.654112 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:49:43.654198 1157708 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 13:49:43.654264 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:43.654333 1157708 ssh_runner.go:195] Run: cat /version.json
	I0318 13:49:43.654369 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:43.657281 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.657390 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.657741 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:43.657811 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:43.657830 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.657855 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:43.657918 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.658016 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:43.658065 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:43.658199 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:43.658245 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:43.658326 1157708 sshutil.go:53] new ssh client: &{IP:192.168.72.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/id_rsa Username:docker}
	I0318 13:49:43.658411 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:43.658574 1157708 sshutil.go:53] new ssh client: &{IP:192.168.72.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/id_rsa Username:docker}
	I0318 13:49:43.737787 1157708 ssh_runner.go:195] Run: systemctl --version
	I0318 13:49:43.769157 1157708 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 13:49:43.920376 1157708 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 13:49:43.928165 1157708 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 13:49:43.928253 1157708 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 13:49:43.946102 1157708 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 13:49:43.946133 1157708 start.go:494] detecting cgroup driver to use...
	I0318 13:49:43.946210 1157708 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 13:49:43.963482 1157708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:49:43.978540 1157708 docker.go:217] disabling cri-docker service (if available) ...
	I0318 13:49:43.978613 1157708 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 13:49:43.999525 1157708 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 13:49:44.021242 1157708 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 13:49:44.198165 1157708 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 13:49:44.363408 1157708 docker.go:233] disabling docker service ...
	I0318 13:49:44.363474 1157708 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 13:49:44.383527 1157708 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 13:49:44.398888 1157708 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 13:49:44.547711 1157708 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 13:49:44.662762 1157708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 13:49:44.678786 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:49:44.702931 1157708 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0318 13:49:44.703004 1157708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:49:44.721453 1157708 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 13:49:44.721519 1157708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:49:44.739487 1157708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:49:44.757379 1157708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:49:44.777508 1157708 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 13:49:44.798788 1157708 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 13:49:44.814280 1157708 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 13:49:44.814383 1157708 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 13:49:44.836507 1157708 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 13:49:44.852614 1157708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:49:44.994352 1157708 ssh_runner.go:195] Run: sudo systemctl restart crio
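The sed edits above pin the pause image, switch CRI-O to the cgroupfs cgroup manager, and set conmon_cgroup to "pod", then br_netfilter and ip_forward are enabled before CRI-O is restarted. A quick post-restart spot check on the guest could look like this (expected values follow directly from the commands above):

	# spot-check the CRI-O drop-in and kernel prerequisites after the restart (illustrative sketch)
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	#   pause_image = "registry.k8s.io/pause:3.2"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	lsmod | grep br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward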
	I0318 13:49:45.184815 1157708 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 13:49:45.184907 1157708 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 13:49:45.190649 1157708 start.go:562] Will wait 60s for crictl version
	I0318 13:49:45.190724 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:45.195265 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 13:49:45.242737 1157708 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 13:49:45.242850 1157708 ssh_runner.go:195] Run: crio --version
	I0318 13:49:45.288154 1157708 ssh_runner.go:195] Run: crio --version
	I0318 13:49:45.331441 1157708 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0318 13:49:43.675531 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .Start
	I0318 13:49:43.675763 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Ensuring networks are active...
	I0318 13:49:43.676642 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Ensuring network default is active
	I0318 13:49:43.677014 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Ensuring network mk-default-k8s-diff-port-569210 is active
	I0318 13:49:43.677510 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Getting domain xml...
	I0318 13:49:43.678319 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Creating domain...
	I0318 13:49:45.002977 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting to get IP...
	I0318 13:49:45.003870 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:45.004406 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:45.004499 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:45.004392 1158648 retry.go:31] will retry after 294.950888ms: waiting for machine to come up
	I0318 13:49:45.301264 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:45.301835 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:45.301863 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:45.301747 1158648 retry.go:31] will retry after 291.810051ms: waiting for machine to come up
	I0318 13:49:45.595571 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:45.596720 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:45.596832 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:45.596786 1158648 retry.go:31] will retry after 390.232445ms: waiting for machine to come up
	I0318 13:49:45.988661 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:45.989506 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:45.989534 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:45.989393 1158648 retry.go:31] will retry after 487.148784ms: waiting for machine to come up
	I0318 13:49:46.477982 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:46.478667 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:46.478701 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:46.478600 1158648 retry.go:31] will retry after 474.795485ms: waiting for machine to come up
	I0318 13:49:45.332975 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetIP
	I0318 13:49:45.336274 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:45.336701 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:45.336753 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:45.336985 1157708 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0318 13:49:45.343147 1157708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:49:45.361840 1157708 kubeadm.go:877] updating cluster {Name:old-k8s-version-909137 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-909137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.135 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 13:49:45.361982 1157708 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0318 13:49:45.362040 1157708 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:49:45.419490 1157708 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 13:49:45.419587 1157708 ssh_runner.go:195] Run: which lz4
	I0318 13:49:45.424689 1157708 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 13:49:45.431110 1157708 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 13:49:45.431155 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0318 13:49:47.510385 1157708 crio.go:444] duration metric: took 2.085724633s to copy over tarball
	I0318 13:49:47.510483 1157708 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
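Since no preloaded v1.20.0 images were found in the runtime, the preload tarball is copied over from the local cache and unpacked into /var. Condensed from the commands above (and the cleanup shown a little further down), the sequence is roughly:

	# how the preload tarball is applied on the guest (illustrative sketch of the commands above)
	stat -c "%s %y" /preloaded.tar.lz4 || true   # absent, so the tarball has to be transferred
	# scp .../preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4
	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm -f /preloaded.tar.lz4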
	I0318 13:49:45.330925 1157416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:45.364854 1157416 api_server.go:72] duration metric: took 1.035057096s to wait for apiserver process to appear ...
	I0318 13:49:45.364883 1157416 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:49:45.364927 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:49:45.365577 1157416 api_server.go:269] stopped: https://192.168.39.7:8443/healthz: Get "https://192.168.39.7:8443/healthz": dial tcp 192.168.39.7:8443: connect: connection refused
	I0318 13:49:45.865126 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:49:49.135799 1157416 api_server.go:279] https://192.168.39.7:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 13:49:49.135840 1157416 api_server.go:103] status: https://192.168.39.7:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 13:49:49.135862 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:49:49.154112 1157416 api_server.go:279] https://192.168.39.7:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 13:49:49.154142 1157416 api_server.go:103] status: https://192.168.39.7:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 13:49:49.365566 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:49:49.375812 1157416 api_server.go:279] https://192.168.39.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:49:49.375862 1157416 api_server.go:103] status: https://192.168.39.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:49:49.865027 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:49:49.873132 1157416 api_server.go:279] https://192.168.39.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:49:49.873176 1157416 api_server.go:103] status: https://192.168.39.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:49:50.365178 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:49:50.371461 1157416 api_server.go:279] https://192.168.39.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:49:50.371506 1157416 api_server.go:103] status: https://192.168.39.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:49:50.865038 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:49:50.870329 1157416 api_server.go:279] https://192.168.39.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:49:50.870383 1157416 api_server.go:103] status: https://192.168.39.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:49:51.365030 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:49:51.370284 1157416 api_server.go:279] https://192.168.39.7:8443/healthz returned 200:
	ok
	I0318 13:49:51.379599 1157416 api_server.go:141] control plane version: v1.29.0-rc.2
	I0318 13:49:51.379633 1157416 api_server.go:131] duration metric: took 6.014741397s to wait for apiserver health ...
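The 403 responses above come from the unauthenticated probe (system:anonymous) before the RBAC bootstrap roles exist, and the 500s list which post-start hooks are still pending; the wait loop simply re-polls /healthz until it returns 200 "ok". A bash equivalent of that polling would be:

	# poll the apiserver healthz endpoint until it reports ok (illustrative sketch)
	until [ "$(curl -sk -o /dev/null -w '%{http_code}' https://192.168.39.7:8443/healthz)" = "200" ]; do
	  curl -sk https://192.168.39.7:8443/healthz | grep '^\[-\]' || true   # hooks still failing
	  sleep 0.5
	done
	echo "apiserver healthy"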
	I0318 13:49:51.379645 1157416 cni.go:84] Creating CNI manager for ""
	I0318 13:49:51.379654 1157416 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:49:51.582399 1157416 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 13:49:46.955128 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:46.955620 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:46.955649 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:46.955579 1158648 retry.go:31] will retry after 817.278037ms: waiting for machine to come up
	I0318 13:49:47.774954 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:47.775449 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:47.775480 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:47.775391 1158648 retry.go:31] will retry after 1.032655883s: waiting for machine to come up
	I0318 13:49:48.810156 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:48.810699 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:48.810730 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:48.810644 1158648 retry.go:31] will retry after 1.1441145s: waiting for machine to come up
	I0318 13:49:49.956702 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:49.957179 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:49.957214 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:49.957105 1158648 retry.go:31] will retry after 1.428592019s: waiting for machine to come up
	I0318 13:49:51.387025 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:51.387627 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:51.387660 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:51.387555 1158648 retry.go:31] will retry after 2.266795202s: waiting for machine to come up
	I0318 13:49:50.947045 1157708 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.436514023s)
	I0318 13:49:50.947084 1157708 crio.go:451] duration metric: took 3.436661543s to extract the tarball
	I0318 13:49:50.947095 1157708 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 13:49:51.007406 1157708 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:49:51.048060 1157708 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 13:49:51.048091 1157708 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 13:49:51.048181 1157708 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:49:51.048228 1157708 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 13:49:51.048287 1157708 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0318 13:49:51.048346 1157708 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0318 13:49:51.048398 1157708 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 13:49:51.048432 1157708 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0318 13:49:51.048232 1157708 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 13:49:51.048183 1157708 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 13:49:51.049960 1157708 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0318 13:49:51.050268 1157708 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:49:51.050288 1157708 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0318 13:49:51.050355 1157708 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 13:49:51.050594 1157708 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 13:49:51.050627 1157708 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0318 13:49:51.050584 1157708 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 13:49:51.051230 1157708 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 13:49:51.219906 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0318 13:49:51.220734 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 13:49:51.235283 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0318 13:49:51.236445 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0318 13:49:51.246700 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0318 13:49:51.251299 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0318 13:49:51.311054 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0318 13:49:51.311292 1157708 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0318 13:49:51.311336 1157708 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0318 13:49:51.311389 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:51.343594 1157708 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0318 13:49:51.343649 1157708 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 13:49:51.343739 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:51.391608 1157708 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0318 13:49:51.391657 1157708 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 13:49:51.391706 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:51.448987 1157708 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0318 13:49:51.449029 1157708 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0318 13:49:51.449058 1157708 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 13:49:51.449061 1157708 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0318 13:49:51.449088 1157708 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0318 13:49:51.449103 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:51.449035 1157708 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0318 13:49:51.449135 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0318 13:49:51.449178 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:51.449207 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 13:49:51.449245 1157708 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0318 13:49:51.449267 1157708 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 13:49:51.449317 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:51.449210 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:51.449223 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0318 13:49:51.469614 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0318 13:49:51.469613 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0318 13:49:51.562455 1157708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0318 13:49:51.562506 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0318 13:49:51.564170 1157708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0318 13:49:51.564269 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0318 13:49:51.578471 1157708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0318 13:49:51.615689 1157708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0318 13:49:51.615708 1157708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0318 13:49:51.657287 1157708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0318 13:49:51.657361 1157708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0318 13:49:51.956746 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:49:52.106933 1157708 cache_images.go:92] duration metric: took 1.058823514s to LoadCachedImages
	W0318 13:49:52.107046 1157708 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
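LoadCachedImages checks each required image in the CRI-O store with podman image inspect; everything missing (all of the v1.20.0 images here) is queued to be loaded from .minikube/cache/images, and because those cache files do not exist either, the step ends with the warning above. The per-image presence check reduces to:

	# check whether an image already exists in the CRI-O image store (illustrative sketch)
	img=registry.k8s.io/kube-apiserver:v1.20.0
	if sudo podman image inspect --format '{{.Id}}' "$img" >/dev/null 2>&1; then
	  echo "$img present, no transfer needed"
	else
	  echo "$img missing, would be loaded from the local image cache"
	fi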
	I0318 13:49:52.107064 1157708 kubeadm.go:928] updating node { 192.168.72.135 8443 v1.20.0 crio true true} ...
	I0318 13:49:52.107259 1157708 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-909137 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.135
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-909137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
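The generated [Unit]/[Service] drop-in above is what later gets written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 430-byte scp further down in this log). One way to confirm systemd picked it up after the daemon-reload would be:

	# confirm systemd sees the generated kubelet drop-in (illustrative sketch)
	systemctl cat kubelet | grep -A3 10-kubeadm.conf
	systemctl show kubelet -p ExecStart --no-pager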
	I0318 13:49:52.107348 1157708 ssh_runner.go:195] Run: crio config
	I0318 13:49:52.163493 1157708 cni.go:84] Creating CNI manager for ""
	I0318 13:49:52.163526 1157708 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:49:52.163546 1157708 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 13:49:52.163572 1157708 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.135 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-909137 NodeName:old-k8s-version-909137 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.135"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.135 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0318 13:49:52.163740 1157708 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.135
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-909137"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.135
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.135"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 13:49:52.163818 1157708 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0318 13:49:52.175668 1157708 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 13:49:52.175740 1157708 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 13:49:52.186745 1157708 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0318 13:49:52.209877 1157708 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 13:49:52.232921 1157708 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0318 13:49:52.256571 1157708 ssh_runner.go:195] Run: grep 192.168.72.135	control-plane.minikube.internal$ /etc/hosts
	I0318 13:49:52.262776 1157708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.135	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
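
The two Run lines above make the control-plane hostname resolvable inside the guest: the grep checks whether /etc/hosts already carries the mapping, and the bash one-liner strips any stale control-plane.minikube.internal entry before appending the current IP. A small Go sketch that builds the same one-liner; hostsUpdateCmd is an invented helper name for illustration:

    package main

    import "fmt"

    // hostsUpdateCmd builds the same bash one-liner seen in the log: drop any
    // stale control-plane.minikube.internal line from /etc/hosts, append a
    // fresh "<ip><TAB>control-plane.minikube.internal" mapping, and copy the
    // result back over /etc/hosts via a temp file.
    func hostsUpdateCmd(ip string) string {
        const host = "control-plane.minikube.internal"
        return fmt.Sprintf(
            `{ grep -v $'\t%s$' "/etc/hosts"; echo "%s%s%s"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"`,
            host, ip, "\t", host)
    }

    func main() {
        fmt.Println(hostsUpdateCmd("192.168.72.135"))
    }
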
	I0318 13:49:52.278435 1157708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:49:52.422705 1157708 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:49:52.443710 1157708 certs.go:68] Setting up /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137 for IP: 192.168.72.135
	I0318 13:49:52.443740 1157708 certs.go:194] generating shared ca certs ...
	I0318 13:49:52.443760 1157708 certs.go:226] acquiring lock for ca certs: {Name:mka24dbce78b8549c3bcfe7842863cce2dcda207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:49:52.443951 1157708 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key
	I0318 13:49:52.444009 1157708 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key
	I0318 13:49:52.444023 1157708 certs.go:256] generating profile certs ...
	I0318 13:49:52.444155 1157708 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/client.key
	I0318 13:49:52.444239 1157708 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/apiserver.key.e9806bd6
	I0318 13:49:52.444303 1157708 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/proxy-client.key
	I0318 13:49:52.444492 1157708 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem (1338 bytes)
	W0318 13:49:52.444532 1157708 certs.go:480] ignoring /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136_empty.pem, impossibly tiny 0 bytes
	I0318 13:49:52.444548 1157708 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 13:49:52.444585 1157708 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem (1078 bytes)
	I0318 13:49:52.444633 1157708 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem (1123 bytes)
	I0318 13:49:52.444672 1157708 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem (1679 bytes)
	I0318 13:49:52.444729 1157708 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:49:52.445363 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 13:49:52.506720 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 13:49:52.550057 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 13:49:52.586845 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 13:49:52.627933 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0318 13:49:52.681479 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 13:49:52.722052 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 13:49:52.755021 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 13:49:52.782181 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 13:49:52.808269 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem --> /usr/share/ca-certificates/1114136.pem (1338 bytes)
	I0318 13:49:52.835041 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /usr/share/ca-certificates/11141362.pem (1708 bytes)
	I0318 13:49:52.863776 1157708 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 13:49:52.883579 1157708 ssh_runner.go:195] Run: openssl version
	I0318 13:49:52.889846 1157708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 13:49:52.902288 1157708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:49:52.908241 1157708 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:49:52.908302 1157708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:49:52.915392 1157708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 13:49:52.928374 1157708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1114136.pem && ln -fs /usr/share/ca-certificates/1114136.pem /etc/ssl/certs/1114136.pem"
	I0318 13:49:52.941444 1157708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1114136.pem
	I0318 13:49:52.946463 1157708 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:26 /usr/share/ca-certificates/1114136.pem
	I0318 13:49:52.946514 1157708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1114136.pem
	I0318 13:49:52.953447 1157708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1114136.pem /etc/ssl/certs/51391683.0"
	I0318 13:49:52.966231 1157708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11141362.pem && ln -fs /usr/share/ca-certificates/11141362.pem /etc/ssl/certs/11141362.pem"
	I0318 13:49:52.977986 1157708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11141362.pem
	I0318 13:49:52.982748 1157708 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:26 /usr/share/ca-certificates/11141362.pem
	I0318 13:49:52.982809 1157708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11141362.pem
	I0318 13:49:52.988715 1157708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11141362.pem /etc/ssl/certs/3ec20f2e.0"
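
The ln -fs commands above expose each CA bundle under its OpenSSL subject-hash name in /etc/ssl/certs (for example b5213941.0), which is how OpenSSL-based clients locate trust anchors in a hashed directory. A minimal Go sketch of the same hash-and-symlink step, run locally instead of through the SSH runner; linkCertByHash is a hypothetical helper:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCertByHash computes the OpenSSL subject hash of certPath and creates
    // /etc/ssl/certs/<hash>.0 pointing at it, mirroring the logged ln -fs step.
    func linkCertByHash(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", certPath, err)
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // replace any stale link, like ln -fs
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
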
	I0318 13:49:51.626774 1157416 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 13:49:51.642685 1157416 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 13:49:51.669902 1157416 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 13:49:51.759474 1157416 system_pods.go:59] 8 kube-system pods found
	I0318 13:49:51.759519 1157416 system_pods.go:61] "coredns-76f75df574-kxzfm" [d0aad76d-f135-4d4a-a2f5-117707b4b2f4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 13:49:51.759530 1157416 system_pods.go:61] "etcd-no-preload-537236" [d02ad01c-1b16-4b97-be18-237b1cbfe3aa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 13:49:51.759539 1157416 system_pods.go:61] "kube-apiserver-no-preload-537236" [00b05050-229b-47f4-9af2-12be1711200a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 13:49:51.759548 1157416 system_pods.go:61] "kube-controller-manager-no-preload-537236" [3e7b86df-4111-4bd9-8925-a22cf12e10ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 13:49:51.759552 1157416 system_pods.go:61] "kube-proxy-5dspp" [adee19a0-eeb6-438f-a55d-30f1e1d87ef6] Running
	I0318 13:49:51.759557 1157416 system_pods.go:61] "kube-scheduler-no-preload-537236" [17628d51-80f5-4985-8ddb-151cab8f8c5d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 13:49:51.759562 1157416 system_pods.go:61] "metrics-server-57f55c9bc5-hhh5m" [282de489-beee-47a9-bd29-5da43cf70146] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:49:51.759565 1157416 system_pods.go:61] "storage-provisioner" [97d3de68-0863-4bba-9cb1-2ce98d791935] Running
	I0318 13:49:51.759578 1157416 system_pods.go:74] duration metric: took 89.654007ms to wait for pod list to return data ...
	I0318 13:49:51.759591 1157416 node_conditions.go:102] verifying NodePressure condition ...
	I0318 13:49:51.764164 1157416 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:49:51.764191 1157416 node_conditions.go:123] node cpu capacity is 2
	I0318 13:49:51.764204 1157416 node_conditions.go:105] duration metric: took 4.607295ms to run NodePressure ...
	I0318 13:49:51.764227 1157416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:52.645812 1157416 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 13:49:52.653573 1157416 kubeadm.go:733] kubelet initialised
	I0318 13:49:52.653602 1157416 kubeadm.go:734] duration metric: took 7.75557ms waiting for restarted kubelet to initialise ...
	I0318 13:49:52.653614 1157416 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:49:52.662179 1157416 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-kxzfm" in "kube-system" namespace to be "Ready" ...
	I0318 13:49:54.678656 1157416 pod_ready.go:102] pod "coredns-76f75df574-kxzfm" in "kube-system" namespace has status "Ready":"False"
	I0318 13:49:53.656476 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:53.656913 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:53.656943 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:53.656870 1158648 retry.go:31] will retry after 2.341702781s: waiting for machine to come up
	I0318 13:49:56.001662 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:56.002163 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:56.002188 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:56.002106 1158648 retry.go:31] will retry after 2.885262489s: waiting for machine to come up
	I0318 13:49:53.000141 1157708 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:49:53.005021 1157708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 13:49:53.011156 1157708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 13:49:53.018329 1157708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 13:49:53.025687 1157708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 13:49:53.032199 1157708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 13:49:53.039048 1157708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 13:49:53.045789 1157708 kubeadm.go:391] StartCluster: {Name:old-k8s-version-909137 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-909137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.135 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:49:53.045882 1157708 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 13:49:53.045931 1157708 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:49:53.085682 1157708 cri.go:89] found id: ""
	I0318 13:49:53.085788 1157708 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 13:49:53.098063 1157708 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 13:49:53.098091 1157708 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 13:49:53.098098 1157708 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 13:49:53.098153 1157708 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 13:49:53.109692 1157708 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:49:53.110853 1157708 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-909137" does not appear in /home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 13:49:53.111862 1157708 kubeconfig.go:62] /home/jenkins/minikube-integration/18429-1106816/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-909137" cluster setting kubeconfig missing "old-k8s-version-909137" context setting]
	I0318 13:49:53.113334 1157708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/kubeconfig: {Name:mk9c139f2702214315ee08dd7c5d02f739047458 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:49:53.115135 1157708 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 13:49:53.125910 1157708 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.135
	I0318 13:49:53.125949 1157708 kubeadm.go:1154] stopping kube-system containers ...
	I0318 13:49:53.125965 1157708 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 13:49:53.126029 1157708 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:49:53.172181 1157708 cri.go:89] found id: ""
	I0318 13:49:53.172268 1157708 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 13:49:53.189585 1157708 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:49:53.200744 1157708 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:49:53.200768 1157708 kubeadm.go:156] found existing configuration files:
	
	I0318 13:49:53.200811 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:49:53.211176 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:49:53.211250 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:49:53.221744 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:49:53.231342 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:49:53.231404 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:49:53.242162 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:49:53.252408 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:49:53.252480 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:49:53.262690 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:49:53.272829 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:49:53.272903 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:49:53.283287 1157708 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:49:53.294124 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:53.437482 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:54.297415 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:54.588919 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:54.758204 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
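
Rather than a full kubeadm init, the restart path above replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) with the version-pinned binaries prepended to PATH. A hedged Go sketch of issuing one phase at a time; invokePhase is a made-up helper and the real flow executes these through its SSH runner:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // invokePhase runs a single "kubeadm init phase" against the staged config,
    // putting the version-pinned binaries first on PATH, as in the logged commands.
    func invokePhase(version, phase string) error {
        cmd := fmt.Sprintf(
            `sudo env PATH="/var/lib/minikube/binaries/%s:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
            version, phase)
        out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
        if err != nil {
            return fmt.Errorf("phase %q failed: %v\n%s", phase, err, out)
        }
        return nil
    }

    func main() {
        phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
        for _, phase := range phases {
            if err := invokePhase("v1.20.0", phase); err != nil {
                fmt.Println(err)
                return
            }
        }
    }
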
	I0318 13:49:54.863030 1157708 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:49:54.863140 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:55.363708 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:55.863301 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:56.364064 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:56.863896 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:57.363240 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:57.863621 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:57.212652 1157416 pod_ready.go:102] pod "coredns-76f75df574-kxzfm" in "kube-system" namespace has status "Ready":"False"
	I0318 13:49:57.669562 1157416 pod_ready.go:92] pod "coredns-76f75df574-kxzfm" in "kube-system" namespace has status "Ready":"True"
	I0318 13:49:57.669584 1157416 pod_ready.go:81] duration metric: took 5.007366512s for pod "coredns-76f75df574-kxzfm" in "kube-system" namespace to be "Ready" ...
	I0318 13:49:57.669597 1157416 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:49:58.176528 1157416 pod_ready.go:92] pod "etcd-no-preload-537236" in "kube-system" namespace has status "Ready":"True"
	I0318 13:49:58.176557 1157416 pod_ready.go:81] duration metric: took 506.95201ms for pod "etcd-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:49:58.176570 1157416 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:49:58.888400 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:58.888706 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:58.888742 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:58.888681 1158648 retry.go:31] will retry after 4.094701536s: waiting for machine to come up
	I0318 13:49:58.363294 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:58.864051 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:59.363586 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:59.863802 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:00.363862 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:00.864277 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:01.363381 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:01.864307 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:02.363278 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:02.863315 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
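
The repeated pgrep lines above are a simple poll: roughly every half second the runner checks for a kube-apiserver process whose command line mentions minikube, until one appears or a deadline is hit. A rough local Go equivalent of that loop (the timeout value is an assumption):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServerProcess polls `sudo pgrep -xnf kube-apiserver.*minikube.*`
    // until it exits 0 (a matching process exists) or the timeout elapses.
    func waitForAPIServerProcess(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
                return nil // pgrep found a matching process
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
    }

    func main() {
        if err := waitForAPIServerProcess(4 * time.Minute); err != nil {
            fmt.Println(err)
        }
    }
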
	I0318 13:50:04.309987 1157263 start.go:364] duration metric: took 57.988518292s to acquireMachinesLock for "embed-certs-173036"
	I0318 13:50:04.310046 1157263 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:50:04.310062 1157263 fix.go:54] fixHost starting: 
	I0318 13:50:04.310469 1157263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:50:04.310506 1157263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:50:04.330585 1157263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41957
	I0318 13:50:04.331049 1157263 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:50:04.331648 1157263 main.go:141] libmachine: Using API Version  1
	I0318 13:50:04.331684 1157263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:50:04.332066 1157263 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:50:04.332316 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:50:04.332513 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetState
	I0318 13:50:04.334091 1157263 fix.go:112] recreateIfNeeded on embed-certs-173036: state=Stopped err=<nil>
	I0318 13:50:04.334117 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	W0318 13:50:04.334299 1157263 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:50:04.336146 1157263 out.go:177] * Restarting existing kvm2 VM for "embed-certs-173036" ...
	I0318 13:50:00.184168 1157416 pod_ready.go:102] pod "kube-apiserver-no-preload-537236" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:01.183846 1157416 pod_ready.go:92] pod "kube-apiserver-no-preload-537236" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:01.183872 1157416 pod_ready.go:81] duration metric: took 3.007292631s for pod "kube-apiserver-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:01.183884 1157416 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:03.206725 1157416 pod_ready.go:102] pod "kube-controller-manager-no-preload-537236" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:04.691357 1157416 pod_ready.go:92] pod "kube-controller-manager-no-preload-537236" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:04.691391 1157416 pod_ready.go:81] duration metric: took 3.507497259s for pod "kube-controller-manager-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:04.691410 1157416 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5dspp" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:04.696593 1157416 pod_ready.go:92] pod "kube-proxy-5dspp" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:04.696618 1157416 pod_ready.go:81] duration metric: took 5.198628ms for pod "kube-proxy-5dspp" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:04.696627 1157416 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:04.700977 1157416 pod_ready.go:92] pod "kube-scheduler-no-preload-537236" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:04.700995 1157416 pod_ready.go:81] duration metric: took 4.36095ms for pod "kube-scheduler-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:04.701006 1157416 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace to be "Ready" ...
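
Each pod_ready line above is one iteration of a per-pod wait: the helper polls the API server until the pod's Ready condition flips to True (status "Ready":"True"), with a 4m0s budget per pod. A condensed client-go sketch of that readiness check; the function name, polling interval, and kubeconfig handling are assumptions rather than the test helper's actual code:

    package main

    import (
        "context"
        "fmt"
        "os"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls until the named pod reports condition Ready=True.
    func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // treat lookup errors as transient and keep polling
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := waitPodReady(cs, "kube-system", "coredns-76f75df574-kxzfm", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
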
	I0318 13:50:02.985340 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:02.985804 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has current primary IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:02.985818 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Found IP for machine: 192.168.61.3
	I0318 13:50:02.985828 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Reserving static IP address...
	I0318 13:50:02.986233 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-569210", mac: "52:54:00:4d:48:26", ip: "192.168.61.3"} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:02.986292 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | skip adding static IP to network mk-default-k8s-diff-port-569210 - found existing host DHCP lease matching {name: "default-k8s-diff-port-569210", mac: "52:54:00:4d:48:26", ip: "192.168.61.3"}
	I0318 13:50:02.986307 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Reserved static IP address: 192.168.61.3
	I0318 13:50:02.986321 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for SSH to be available...
	I0318 13:50:02.986337 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | Getting to WaitForSSH function...
	I0318 13:50:02.988609 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:02.988962 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:02.988995 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:02.989209 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | Using SSH client type: external
	I0318 13:50:02.989235 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | Using SSH private key: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa (-rw-------)
	I0318 13:50:02.989272 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 13:50:02.989293 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | About to run SSH command:
	I0318 13:50:02.989306 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | exit 0
	I0318 13:50:03.112557 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | SSH cmd err, output: <nil>: 
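
Because no control connection exists yet, the probe above shells out to the system ssh client with host-key checking disabled and the per-machine identity file, running exit 0 purely to confirm the guest accepts SSH. A rough Go sketch of the same probe; sshReachable is an invented helper and only a subset of the logged options is shown:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // sshReachable runs "exit 0" on the guest through the system ssh client,
    // mirroring a subset of the options shown in the log above.
    func sshReachable(ip, keyPath string) bool {
        args := []string{
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "ConnectTimeout=10",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "docker@" + ip,
            "exit 0",
        }
        return exec.Command("ssh", args...).Run() == nil
    }

    func main() {
        key := "/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa"
        fmt.Println("ssh reachable:", sshReachable("192.168.61.3", key))
    }
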
	I0318 13:50:03.112907 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetConfigRaw
	I0318 13:50:03.113605 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetIP
	I0318 13:50:03.116140 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.116569 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:03.116599 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.116858 1157887 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/config.json ...
	I0318 13:50:03.117065 1157887 machine.go:94] provisionDockerMachine start ...
	I0318 13:50:03.117091 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:50:03.117296 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:03.119506 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.119861 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:03.119891 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.120015 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:03.120212 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.120429 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.120608 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:03.120798 1157887 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:03.120995 1157887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0318 13:50:03.121010 1157887 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 13:50:03.221645 1157887 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 13:50:03.221693 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetMachineName
	I0318 13:50:03.221990 1157887 buildroot.go:166] provisioning hostname "default-k8s-diff-port-569210"
	I0318 13:50:03.222027 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetMachineName
	I0318 13:50:03.222257 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:03.225134 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.225543 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:03.225568 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.225714 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:03.226022 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.226225 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.226400 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:03.226595 1157887 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:03.226870 1157887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0318 13:50:03.226893 1157887 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-569210 && echo "default-k8s-diff-port-569210" | sudo tee /etc/hostname
	I0318 13:50:03.350362 1157887 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-569210
	
	I0318 13:50:03.350398 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:03.353307 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.353700 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:03.353737 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.353911 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:03.354111 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.354283 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.354413 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:03.354600 1157887 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:03.354805 1157887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0318 13:50:03.354824 1157887 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-569210' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-569210/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-569210' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 13:50:03.471084 1157887 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:50:03.471120 1157887 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18429-1106816/.minikube CaCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18429-1106816/.minikube}
	I0318 13:50:03.471159 1157887 buildroot.go:174] setting up certificates
	I0318 13:50:03.471229 1157887 provision.go:84] configureAuth start
	I0318 13:50:03.471247 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetMachineName
	I0318 13:50:03.471576 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetIP
	I0318 13:50:03.474528 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.474918 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:03.474957 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.475210 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:03.477624 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.477910 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:03.477936 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.478118 1157887 provision.go:143] copyHostCerts
	I0318 13:50:03.478196 1157887 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem, removing ...
	I0318 13:50:03.478213 1157887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 13:50:03.478281 1157887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem (1078 bytes)
	I0318 13:50:03.478424 1157887 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem, removing ...
	I0318 13:50:03.478437 1157887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 13:50:03.478466 1157887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem (1123 bytes)
	I0318 13:50:03.478537 1157887 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem, removing ...
	I0318 13:50:03.478548 1157887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 13:50:03.478571 1157887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem (1679 bytes)
	I0318 13:50:03.478640 1157887 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-569210 san=[127.0.0.1 192.168.61.3 default-k8s-diff-port-569210 localhost minikube]
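
provision.go then regenerates the machine's server certificate, signed by the local CA and carrying the SANs listed in the log line (127.0.0.1, the guest IP, the machine name, localhost, minikube). The sketch below shows the core of such a step with crypto/x509 under simplifying assumptions: the CA key is PKCS#1 RSA, file names are placeholders, and error handling is compressed:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func must(err error) {
        if err != nil {
            panic(err)
        }
    }

    func main() {
        // Load the signing CA; the file names are placeholders for the
        // ca.pem / ca-key.pem pair kept under .minikube/certs.
        caPEM, err := os.ReadFile("ca.pem")
        must(err)
        caKeyPEM, err := os.ReadFile("ca-key.pem")
        must(err)
        caBlock, _ := pem.Decode(caPEM)
        keyBlock, _ := pem.Decode(caKeyPEM)
        if caBlock == nil || keyBlock == nil {
            panic("could not decode CA PEM input")
        }
        caCert, err := x509.ParseCertificate(caBlock.Bytes)
        must(err)
        caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
        must(err)

        // New server key plus a template carrying the SANs from the log line.
        serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
        must(err)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-569210"}},
            NotBefore:    time.Now().Add(-time.Hour),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"default-k8s-diff-port-569210", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.3")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
        must(err)

        // Write the PEM-encoded server cert and key as server.pem / server-key.pem.
        must(os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644))
        must(os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0o600))
    }
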
	I0318 13:50:03.600956 1157887 provision.go:177] copyRemoteCerts
	I0318 13:50:03.601028 1157887 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 13:50:03.601058 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:03.603986 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.604437 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:03.604468 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.604659 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:03.604922 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.605086 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:03.605260 1157887 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa Username:docker}
	I0318 13:50:03.688256 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0318 13:50:03.716748 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 13:50:03.744848 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 13:50:03.771601 1157887 provision.go:87] duration metric: took 300.358039ms to configureAuth
	I0318 13:50:03.771631 1157887 buildroot.go:189] setting minikube options for container-runtime
	I0318 13:50:03.771893 1157887 config.go:182] Loaded profile config "default-k8s-diff-port-569210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:50:03.771992 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:03.774410 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.774725 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:03.774760 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.774926 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:03.775099 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.775292 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.775456 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:03.775642 1157887 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:03.775872 1157887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0318 13:50:03.775901 1157887 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 13:50:04.068202 1157887 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 13:50:04.068242 1157887 machine.go:97] duration metric: took 951.160051ms to provisionDockerMachine
	I0318 13:50:04.068259 1157887 start.go:293] postStartSetup for "default-k8s-diff-port-569210" (driver="kvm2")
	I0318 13:50:04.068277 1157887 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 13:50:04.068303 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:50:04.068677 1157887 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 13:50:04.068712 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:04.071619 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.071974 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:04.072002 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.072148 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:04.072354 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:04.072519 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:04.072639 1157887 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa Username:docker}
	I0318 13:50:04.157469 1157887 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 13:50:04.162629 1157887 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 13:50:04.162655 1157887 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/addons for local assets ...
	I0318 13:50:04.162719 1157887 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/files for local assets ...
	I0318 13:50:04.162810 1157887 filesync.go:149] local asset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> 11141362.pem in /etc/ssl/certs
	I0318 13:50:04.162911 1157887 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 13:50:04.173898 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:50:04.204771 1157887 start.go:296] duration metric: took 136.495479ms for postStartSetup
	I0318 13:50:04.204814 1157887 fix.go:56] duration metric: took 20.554947186s for fixHost
	I0318 13:50:04.204839 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:04.207619 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.207923 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:04.207951 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.208088 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:04.208296 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:04.208509 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:04.208657 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:04.208801 1157887 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:04.208975 1157887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0318 13:50:04.208988 1157887 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 13:50:04.309828 1157887 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710769804.283357411
	
	I0318 13:50:04.309861 1157887 fix.go:216] guest clock: 1710769804.283357411
	I0318 13:50:04.309871 1157887 fix.go:229] Guest: 2024-03-18 13:50:04.283357411 +0000 UTC Remote: 2024-03-18 13:50:04.204818975 +0000 UTC m=+262.583280441 (delta=78.538436ms)
	I0318 13:50:04.309898 1157887 fix.go:200] guest clock delta is within tolerance: 78.538436ms
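
The fix.go lines above compare the guest clock (read over SSH with date +%s.%N) against the host wall clock and only trigger a time resync when the delta exceeds a tolerance; here the delta was about 78ms and no adjustment was needed. A small Go sketch of parsing that output and checking the delta; the 2s tolerance below is an assumption for illustration:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock turns "1710769804.283357411" (seconds.nanoseconds) into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            nsec, err = strconv.ParseInt(parts[1], 10, 64)
            if err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1710769804.283357411")
        if err != nil {
            panic(err)
        }
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        // Assumed tolerance for illustration; within it, no clock adjustment is needed.
        const tolerance = 2 * time.Second
        fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta <= tolerance)
    }
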
	I0318 13:50:04.309904 1157887 start.go:83] releasing machines lock for "default-k8s-diff-port-569210", held for 20.660081187s
	I0318 13:50:04.309933 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:50:04.310247 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetIP
	I0318 13:50:04.313302 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.313747 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:04.313777 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.313956 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:50:04.314591 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:50:04.314792 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:50:04.314878 1157887 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 13:50:04.314934 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:04.315014 1157887 ssh_runner.go:195] Run: cat /version.json
	I0318 13:50:04.315059 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:04.318021 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.318056 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.318438 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:04.318474 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.318500 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:04.318518 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.318661 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:04.318763 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:04.318879 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:04.318962 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:04.319052 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:04.319110 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:04.319229 1157887 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa Username:docker}
	I0318 13:50:04.319286 1157887 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa Username:docker}
	I0318 13:50:04.426710 1157887 ssh_runner.go:195] Run: systemctl --version
	I0318 13:50:04.433482 1157887 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 13:50:04.590331 1157887 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 13:50:04.598896 1157887 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 13:50:04.598974 1157887 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 13:50:04.617060 1157887 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 13:50:04.617095 1157887 start.go:494] detecting cgroup driver to use...
	I0318 13:50:04.617190 1157887 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 13:50:04.633902 1157887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:50:04.648705 1157887 docker.go:217] disabling cri-docker service (if available) ...
	I0318 13:50:04.648759 1157887 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 13:50:04.665516 1157887 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 13:50:04.681326 1157887 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 13:50:04.798310 1157887 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 13:50:04.972066 1157887 docker.go:233] disabling docker service ...
	I0318 13:50:04.972133 1157887 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 13:50:04.995498 1157887 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 13:50:05.014901 1157887 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 13:50:05.158158 1157887 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 13:50:05.309791 1157887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 13:50:05.324965 1157887 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:50:05.346489 1157887 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 13:50:05.346595 1157887 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:50:05.358753 1157887 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 13:50:05.358823 1157887 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:50:05.374416 1157887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:50:05.394228 1157887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
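	The three `sed` edits above pin CRI-O's pause image to registry.k8s.io/pause:3.9 and switch its cgroup manager to cgroupfs (re-adding conmon_cgroup = "pod" after it). A hedged Go sketch of the same rewrite applied to an in-memory copy of 02-crio.conf; the sample file contents and helper name are illustrative, not minikube's implementation:

```go
package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf mirrors the sed edits in the log: force pause_image and
// cgroup_manager, drop any existing conmon_cgroup line, then re-add it as "pod".
func rewriteCrioConf(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n`).
		ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	return conf
}

func main() {
	// Illustrative snippet of /etc/crio/crio.conf.d/02-crio.conf.
	in := `[crio.image]
pause_image = "registry.k8s.io/pause:3.8"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	fmt.Println(rewriteCrioConf(in))
}
```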
	I0318 13:50:05.406975 1157887 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 13:50:05.420201 1157887 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 13:50:05.432405 1157887 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 13:50:05.432479 1157887 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 13:50:05.449386 1157887 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 13:50:05.461081 1157887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:50:05.607102 1157887 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 13:50:05.776152 1157887 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 13:50:05.776267 1157887 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 13:50:05.782168 1157887 start.go:562] Will wait 60s for crictl version
	I0318 13:50:05.782247 1157887 ssh_runner.go:195] Run: which crictl
	I0318 13:50:05.787932 1157887 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 13:50:05.831304 1157887 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 13:50:05.831399 1157887 ssh_runner.go:195] Run: crio --version
	I0318 13:50:05.865410 1157887 ssh_runner.go:195] Run: crio --version
	I0318 13:50:05.908406 1157887 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 13:50:05.909651 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetIP
	I0318 13:50:05.912855 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:05.913213 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:05.913256 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:05.913470 1157887 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0318 13:50:05.918362 1157887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:50:05.933755 1157887 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-569210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.28.4 ClusterName:default-k8s-diff-port-569210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.3 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiratio
n:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 13:50:05.933926 1157887 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 13:50:05.934002 1157887 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:50:05.978920 1157887 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0318 13:50:05.978998 1157887 ssh_runner.go:195] Run: which lz4
	I0318 13:50:05.983751 1157887 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 13:50:05.988862 1157887 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 13:50:05.988895 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0318 13:50:03.363591 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:03.864049 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:04.363310 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:04.863306 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:05.363706 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:05.863618 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:06.364183 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:06.863776 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:07.363832 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:07.863261 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:04.337631 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .Start
	I0318 13:50:04.337838 1157263 main.go:141] libmachine: (embed-certs-173036) Ensuring networks are active...
	I0318 13:50:04.338615 1157263 main.go:141] libmachine: (embed-certs-173036) Ensuring network default is active
	I0318 13:50:04.338978 1157263 main.go:141] libmachine: (embed-certs-173036) Ensuring network mk-embed-certs-173036 is active
	I0318 13:50:04.339444 1157263 main.go:141] libmachine: (embed-certs-173036) Getting domain xml...
	I0318 13:50:04.340295 1157263 main.go:141] libmachine: (embed-certs-173036) Creating domain...
	I0318 13:50:05.616437 1157263 main.go:141] libmachine: (embed-certs-173036) Waiting to get IP...
	I0318 13:50:05.617646 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:05.618096 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:05.618168 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:05.618075 1158806 retry.go:31] will retry after 234.69885ms: waiting for machine to come up
	I0318 13:50:05.854749 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:05.855365 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:05.855401 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:05.855310 1158806 retry.go:31] will retry after 324.015594ms: waiting for machine to come up
	I0318 13:50:06.181178 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:06.182089 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:06.182123 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:06.182038 1158806 retry.go:31] will retry after 456.172304ms: waiting for machine to come up
	I0318 13:50:06.639827 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:06.640288 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:06.640349 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:06.640244 1158806 retry.go:31] will retry after 561.082549ms: waiting for machine to come up
	I0318 13:50:07.203208 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:07.203798 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:07.203825 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:07.203696 1158806 retry.go:31] will retry after 633.905437ms: waiting for machine to come up
	I0318 13:50:07.839205 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:07.839760 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:07.839792 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:07.839698 1158806 retry.go:31] will retry after 629.254629ms: waiting for machine to come up
	I0318 13:50:08.470625 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:08.471073 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:08.471105 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:08.471021 1158806 retry.go:31] will retry after 771.526268ms: waiting for machine to come up
	I0318 13:50:06.709604 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:09.208197 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:08.056220 1157887 crio.go:444] duration metric: took 2.072501191s to copy over tarball
	I0318 13:50:08.056361 1157887 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 13:50:10.763501 1157887 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.707101479s)
	I0318 13:50:10.763560 1157887 crio.go:451] duration metric: took 2.707303654s to extract the tarball
	I0318 13:50:10.763570 1157887 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 13:50:10.808643 1157887 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:50:10.860178 1157887 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 13:50:10.860218 1157887 cache_images.go:84] Images are preloaded, skipping loading
	I0318 13:50:10.860229 1157887 kubeadm.go:928] updating node { 192.168.61.3 8444 v1.28.4 crio true true} ...
	I0318 13:50:10.860381 1157887 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-569210 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-569210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 13:50:10.860455 1157887 ssh_runner.go:195] Run: crio config
	I0318 13:50:10.918077 1157887 cni.go:84] Creating CNI manager for ""
	I0318 13:50:10.918109 1157887 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:50:10.918124 1157887 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 13:50:10.918154 1157887 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.3 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-569210 NodeName:default-k8s-diff-port-569210 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 13:50:10.918362 1157887 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.3
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-569210"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 13:50:10.918457 1157887 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 13:50:10.930573 1157887 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 13:50:10.930639 1157887 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 13:50:10.941181 1157887 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I0318 13:50:10.960048 1157887 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 13:50:10.980367 1157887 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
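	The kubeadm/kubelet configuration rendered above is what gets copied to /var/tmp/minikube/kubeadm.yaml.new. One property it must encode is that the kubelet's cgroupDriver matches the cgroup_manager just written into the CRI-O config ("cgroupfs"). A hedged Go sketch of that sanity check, parsing only the KubeletConfiguration document; the gopkg.in/yaml.v3 dependency and the field subset are assumptions for illustration:

```go
package main

import (
	"fmt"
	"log"

	"gopkg.in/yaml.v3" // third-party module; not part of the standard library
)

// kubeletConfig captures only the fields we care about from the
// KubeletConfiguration document shown in the log above.
type kubeletConfig struct {
	CgroupDriver             string `yaml:"cgroupDriver"`
	ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
	FailSwapOn               bool   `yaml:"failSwapOn"`
}

const doc = `
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
failSwapOn: false
`

func main() {
	var kc kubeletConfig
	if err := yaml.Unmarshal([]byte(doc), &kc); err != nil {
		log.Fatal(err)
	}
	// The kubelet cgroup driver must match the cgroup_manager set in
	// /etc/crio/crio.conf.d/02-crio.conf earlier in the log ("cgroupfs").
	if kc.CgroupDriver != "cgroupfs" {
		log.Fatalf("cgroupDriver %q does not match CRI-O cgroup_manager", kc.CgroupDriver)
	}
	fmt.Println("kubelet and CRI-O agree on cgroup driver:", kc.CgroupDriver)
}
```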
	I0318 13:50:11.001607 1157887 ssh_runner.go:195] Run: grep 192.168.61.3	control-plane.minikube.internal$ /etc/hosts
	I0318 13:50:11.006363 1157887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:50:11.020871 1157887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:50:11.164152 1157887 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:50:11.185025 1157887 certs.go:68] Setting up /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210 for IP: 192.168.61.3
	I0318 13:50:11.185060 1157887 certs.go:194] generating shared ca certs ...
	I0318 13:50:11.185096 1157887 certs.go:226] acquiring lock for ca certs: {Name:mka24dbce78b8549c3bcfe7842863cce2dcda207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:50:11.185277 1157887 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key
	I0318 13:50:11.185342 1157887 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key
	I0318 13:50:11.185356 1157887 certs.go:256] generating profile certs ...
	I0318 13:50:11.185464 1157887 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/client.key
	I0318 13:50:11.185541 1157887 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/apiserver.key.e15332a5
	I0318 13:50:11.185590 1157887 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/proxy-client.key
	I0318 13:50:11.185757 1157887 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem (1338 bytes)
	W0318 13:50:11.185799 1157887 certs.go:480] ignoring /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136_empty.pem, impossibly tiny 0 bytes
	I0318 13:50:11.185812 1157887 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 13:50:11.185841 1157887 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem (1078 bytes)
	I0318 13:50:11.185899 1157887 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem (1123 bytes)
	I0318 13:50:11.185945 1157887 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem (1679 bytes)
	I0318 13:50:11.185999 1157887 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:50:11.186853 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 13:50:11.221967 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 13:50:11.250180 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 13:50:11.287449 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 13:50:11.323521 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0318 13:50:11.360286 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 13:50:11.396947 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 13:50:11.426116 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 13:50:11.455183 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /usr/share/ca-certificates/11141362.pem (1708 bytes)
	I0318 13:50:11.483479 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 13:50:11.512975 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem --> /usr/share/ca-certificates/1114136.pem (1338 bytes)
	I0318 13:50:11.548393 1157887 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 13:50:11.569155 1157887 ssh_runner.go:195] Run: openssl version
	I0318 13:50:11.576084 1157887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1114136.pem && ln -fs /usr/share/ca-certificates/1114136.pem /etc/ssl/certs/1114136.pem"
	I0318 13:50:11.589110 1157887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1114136.pem
	I0318 13:50:11.594640 1157887 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:26 /usr/share/ca-certificates/1114136.pem
	I0318 13:50:11.594736 1157887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1114136.pem
	I0318 13:50:11.601473 1157887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1114136.pem /etc/ssl/certs/51391683.0"
	I0318 13:50:11.615874 1157887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11141362.pem && ln -fs /usr/share/ca-certificates/11141362.pem /etc/ssl/certs/11141362.pem"
	I0318 13:50:11.630380 1157887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11141362.pem
	I0318 13:50:11.635808 1157887 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:26 /usr/share/ca-certificates/11141362.pem
	I0318 13:50:11.635886 1157887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11141362.pem
	I0318 13:50:11.644465 1157887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11141362.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 13:50:11.661509 1157887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 13:50:08.364243 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:08.863539 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:09.364037 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:09.863621 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:10.363425 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:10.863422 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:11.363353 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:11.863485 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:12.363548 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:12.864070 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:09.243731 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:09.244146 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:09.244180 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:09.244104 1158806 retry.go:31] will retry after 1.160252016s: waiting for machine to come up
	I0318 13:50:10.405805 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:10.406270 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:10.406310 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:10.406201 1158806 retry.go:31] will retry after 1.625913099s: waiting for machine to come up
	I0318 13:50:12.033202 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:12.033674 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:12.033712 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:12.033589 1158806 retry.go:31] will retry after 1.835793865s: waiting for machine to come up
	I0318 13:50:11.211241 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:13.710211 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:11.675340 1157887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:50:11.938009 1157887 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:50:11.938089 1157887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:50:11.944766 1157887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 13:50:11.957959 1157887 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:50:11.963524 1157887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 13:50:11.971678 1157887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 13:50:11.978601 1157887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 13:50:11.985403 1157887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 13:50:11.992159 1157887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 13:50:11.998620 1157887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
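	Each `openssl x509 -checkend 86400` call above verifies that a control-plane certificate remains valid for at least another 24 hours before the cluster is restarted on top of it. The equivalent check in Go with crypto/x509 for a single PEM file; the path in main is illustrative:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file
// expires within the given window (the `openssl x509 -checkend` equivalent).
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Illustrative path; the log checks the apiserver, etcd and front-proxy certs.
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", expiring)
}
```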
	I0318 13:50:12.005209 1157887 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-569210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.28.4 ClusterName:default-k8s-diff-port-569210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.3 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2
6280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:50:12.005300 1157887 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 13:50:12.005350 1157887 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:50:12.074518 1157887 cri.go:89] found id: ""
	I0318 13:50:12.074603 1157887 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 13:50:12.099031 1157887 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 13:50:12.099062 1157887 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 13:50:12.099070 1157887 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 13:50:12.099147 1157887 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 13:50:12.111133 1157887 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:50:12.112779 1157887 kubeconfig.go:125] found "default-k8s-diff-port-569210" server: "https://192.168.61.3:8444"
	I0318 13:50:12.116521 1157887 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 13:50:12.134902 1157887 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.3
	I0318 13:50:12.134964 1157887 kubeadm.go:1154] stopping kube-system containers ...
	I0318 13:50:12.135005 1157887 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 13:50:12.135086 1157887 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:50:12.190100 1157887 cri.go:89] found id: ""
	I0318 13:50:12.190182 1157887 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 13:50:12.211556 1157887 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:50:12.223095 1157887 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:50:12.223120 1157887 kubeadm.go:156] found existing configuration files:
	
	I0318 13:50:12.223173 1157887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0318 13:50:12.235709 1157887 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:50:12.235780 1157887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:50:12.248896 1157887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0318 13:50:12.260212 1157887 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:50:12.260285 1157887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:50:12.271424 1157887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0318 13:50:12.283083 1157887 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:50:12.283143 1157887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:50:12.294877 1157887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0318 13:50:12.305629 1157887 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:50:12.305692 1157887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:50:12.317395 1157887 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:50:12.328943 1157887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:12.471901 1157887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:13.400723 1157887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:13.601149 1157887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:13.677768 1157887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:13.796413 1157887 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:50:13.796558 1157887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:14.297639 1157887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:14.797236 1157887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:14.885767 1157887 api_server.go:72] duration metric: took 1.089353166s to wait for apiserver process to appear ...
	I0318 13:50:14.885801 1157887 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:50:14.885827 1157887 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0318 13:50:14.886464 1157887 api_server.go:269] stopped: https://192.168.61.3:8444/healthz: Get "https://192.168.61.3:8444/healthz": dial tcp 192.168.61.3:8444: connect: connection refused
	I0318 13:50:15.386913 1157887 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0318 13:50:13.364111 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:13.863871 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:14.363958 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:14.863570 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:15.364185 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:15.863974 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:16.364010 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:16.863484 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:17.363832 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:17.864149 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:13.871003 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:13.871443 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:13.871475 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:13.871398 1158806 retry.go:31] will retry after 2.53403994s: waiting for machine to come up
	I0318 13:50:16.407271 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:16.407728 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:16.407775 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:16.407708 1158806 retry.go:31] will retry after 2.371916928s: waiting for machine to come up
	I0318 13:50:18.781468 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:18.781866 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:18.781898 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:18.781809 1158806 retry.go:31] will retry after 3.250042198s: waiting for machine to come up
	I0318 13:50:17.204788 1157887 api_server.go:279] https://192.168.61.3:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 13:50:17.204828 1157887 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 13:50:17.204848 1157887 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0318 13:50:17.235957 1157887 api_server.go:279] https://192.168.61.3:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 13:50:17.236000 1157887 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 13:50:17.386349 1157887 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0318 13:50:17.393185 1157887 api_server.go:279] https://192.168.61.3:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:50:17.393220 1157887 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
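	The healthz polling above first returns 403 (the anonymous probe is rejected before RBAC bootstrap completes) and then 500 while the post-start hooks finish; minikube simply keeps re-checking until the endpoint answers 200. A minimal sketch of that poll loop, assuming certificate verification is skipped for the probe (the real client trusts the cluster CA); the URL and interval are illustrative:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns
// HTTP 200 or the deadline passes. 403 and 500 responses, like the ones
// in the log, simply trigger another attempt.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption: skip verification of the apiserver's self-signed cert.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		} else {
			fmt.Println("healthz not reachable yet:", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s never became healthy", url)
}

func main() {
	if err := waitForHealthz("https://192.168.61.3:8444/healthz", 2*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("apiserver is healthy")
}
```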
	I0318 13:50:17.886583 1157887 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0318 13:50:17.892087 1157887 api_server.go:279] https://192.168.61.3:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:50:17.892122 1157887 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:50:18.386820 1157887 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0318 13:50:18.406609 1157887 api_server.go:279] https://192.168.61.3:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:50:18.406658 1157887 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:50:18.886458 1157887 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0318 13:50:18.896097 1157887 api_server.go:279] https://192.168.61.3:8444/healthz returned 200:
	ok
	I0318 13:50:18.905565 1157887 api_server.go:141] control plane version: v1.28.4
	I0318 13:50:18.905603 1157887 api_server.go:131] duration metric: took 4.019792975s to wait for apiserver health ...
	I0318 13:50:18.905615 1157887 cni.go:84] Creating CNI manager for ""
	I0318 13:50:18.905624 1157887 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:50:18.907258 1157887 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 13:50:15.711910 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:18.209648 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:18.909133 1157887 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 13:50:18.944457 1157887 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 13:50:18.973831 1157887 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 13:50:18.984400 1157887 system_pods.go:59] 8 kube-system pods found
	I0318 13:50:18.984436 1157887 system_pods.go:61] "coredns-5dd5756b68-hwsz5" [0a91f20c-3d3b-415c-b709-7898c606d830] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 13:50:18.984447 1157887 system_pods.go:61] "etcd-default-k8s-diff-port-569210" [64925324-9666-49ab-b849-ad9b7ce54891] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 13:50:18.984456 1157887 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-569210" [8409a63f-fbac-4bf9-b54b-5ac267a58206] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 13:50:18.984465 1157887 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-569210" [a2d7b983-c4aa-4c32-9391-babe90b0f102] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 13:50:18.984470 1157887 system_pods.go:61] "kube-proxy-v59ks" [39a4e73c-319d-4093-8781-ca7a1a48e005] Running
	I0318 13:50:18.984477 1157887 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-569210" [f24baa89-e33d-42ca-8f83-17c76a4cedcb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 13:50:18.984488 1157887 system_pods.go:61] "metrics-server-57f55c9bc5-2sb4m" [f3e533a7-9666-4b79-b9a9-26222422f242] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:50:18.984496 1157887 system_pods.go:61] "storage-provisioner" [864d0bb2-cbca-41ae-b9ec-89aced62dd08] Running
	I0318 13:50:18.984505 1157887 system_pods.go:74] duration metric: took 10.646849ms to wait for pod list to return data ...
	I0318 13:50:18.984519 1157887 node_conditions.go:102] verifying NodePressure condition ...
	I0318 13:50:18.989173 1157887 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:50:18.989201 1157887 node_conditions.go:123] node cpu capacity is 2
	I0318 13:50:18.989213 1157887 node_conditions.go:105] duration metric: took 4.685756ms to run NodePressure ...
	I0318 13:50:18.989231 1157887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:19.229166 1157887 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 13:50:19.237757 1157887 kubeadm.go:733] kubelet initialised
	I0318 13:50:19.237787 1157887 kubeadm.go:734] duration metric: took 8.591388ms waiting for restarted kubelet to initialise ...
	I0318 13:50:19.237797 1157887 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:50:19.243530 1157887 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-hwsz5" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:19.253925 1157887 pod_ready.go:97] node "default-k8s-diff-port-569210" hosting pod "coredns-5dd5756b68-hwsz5" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-569210" has status "Ready":"False"
	I0318 13:50:19.253957 1157887 pod_ready.go:81] duration metric: took 10.403116ms for pod "coredns-5dd5756b68-hwsz5" in "kube-system" namespace to be "Ready" ...
	E0318 13:50:19.253969 1157887 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-569210" hosting pod "coredns-5dd5756b68-hwsz5" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-569210" has status "Ready":"False"
	I0318 13:50:19.253978 1157887 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:19.265167 1157887 pod_ready.go:97] node "default-k8s-diff-port-569210" hosting pod "etcd-default-k8s-diff-port-569210" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-569210" has status "Ready":"False"
	I0318 13:50:19.265189 1157887 pod_ready.go:81] duration metric: took 11.202545ms for pod "etcd-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	E0318 13:50:19.265200 1157887 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-569210" hosting pod "etcd-default-k8s-diff-port-569210" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-569210" has status "Ready":"False"
	I0318 13:50:19.265206 1157887 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:19.273558 1157887 pod_ready.go:97] node "default-k8s-diff-port-569210" hosting pod "kube-apiserver-default-k8s-diff-port-569210" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-569210" has status "Ready":"False"
	I0318 13:50:19.273589 1157887 pod_ready.go:81] duration metric: took 8.37478ms for pod "kube-apiserver-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	E0318 13:50:19.273603 1157887 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-569210" hosting pod "kube-apiserver-default-k8s-diff-port-569210" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-569210" has status "Ready":"False"
	I0318 13:50:19.273615 1157887 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:21.280970 1157887 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:18.363366 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:18.863782 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:19.363987 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:19.863437 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:20.364050 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:20.863961 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:21.364126 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:21.863264 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:22.363519 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:22.863814 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:22.033540 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:22.034056 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:22.034084 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:22.034001 1158806 retry.go:31] will retry after 5.297432528s: waiting for machine to come up
	I0318 13:50:20.708189 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:22.708573 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:24.708632 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:23.281625 1157887 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:25.780754 1157887 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:23.364019 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:23.864134 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:24.363510 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:24.863263 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:25.364027 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:25.863203 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:26.364219 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:26.863262 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:27.363889 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:27.864113 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:27.335390 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.335875 1157263 main.go:141] libmachine: (embed-certs-173036) Found IP for machine: 192.168.50.191
	I0318 13:50:27.335908 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has current primary IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.335918 1157263 main.go:141] libmachine: (embed-certs-173036) Reserving static IP address...
	I0318 13:50:27.336311 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "embed-certs-173036", mac: "52:54:00:e1:4f:b1", ip: "192.168.50.191"} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:27.336360 1157263 main.go:141] libmachine: (embed-certs-173036) Reserved static IP address: 192.168.50.191
	I0318 13:50:27.336380 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | skip adding static IP to network mk-embed-certs-173036 - found existing host DHCP lease matching {name: "embed-certs-173036", mac: "52:54:00:e1:4f:b1", ip: "192.168.50.191"}
	I0318 13:50:27.336394 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | Getting to WaitForSSH function...
	I0318 13:50:27.336406 1157263 main.go:141] libmachine: (embed-certs-173036) Waiting for SSH to be available...
	I0318 13:50:27.338627 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.338948 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:27.338983 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.339087 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | Using SSH client type: external
	I0318 13:50:27.339177 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | Using SSH private key: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa (-rw-------)
	I0318 13:50:27.339212 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.191 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 13:50:27.339227 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | About to run SSH command:
	I0318 13:50:27.339244 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | exit 0
	I0318 13:50:27.468468 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | SSH cmd err, output: <nil>: 
	I0318 13:50:27.468936 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetConfigRaw
	I0318 13:50:27.469699 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetIP
	I0318 13:50:27.472098 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.472422 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:27.472446 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.472714 1157263 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036/config.json ...
	I0318 13:50:27.472955 1157263 machine.go:94] provisionDockerMachine start ...
	I0318 13:50:27.472982 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:50:27.473196 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:27.475516 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.475808 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:27.475831 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.476041 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:27.476252 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:27.476414 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:27.476537 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:27.476719 1157263 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:27.476899 1157263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.191 22 <nil> <nil>}
	I0318 13:50:27.476909 1157263 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 13:50:27.589501 1157263 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 13:50:27.589532 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetMachineName
	I0318 13:50:27.589828 1157263 buildroot.go:166] provisioning hostname "embed-certs-173036"
	I0318 13:50:27.589862 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetMachineName
	I0318 13:50:27.590068 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:27.592650 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.593005 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:27.593035 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.593186 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:27.593375 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:27.593546 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:27.593713 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:27.593883 1157263 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:27.594058 1157263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.191 22 <nil> <nil>}
	I0318 13:50:27.594073 1157263 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-173036 && echo "embed-certs-173036" | sudo tee /etc/hostname
	I0318 13:50:27.730406 1157263 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-173036
	
	I0318 13:50:27.730437 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:27.733420 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.733857 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:27.733890 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.734058 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:27.734271 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:27.734475 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:27.734609 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:27.734764 1157263 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:27.734943 1157263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.191 22 <nil> <nil>}
	I0318 13:50:27.734960 1157263 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-173036' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-173036/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-173036' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 13:50:27.860625 1157263 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:50:27.860679 1157263 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18429-1106816/.minikube CaCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18429-1106816/.minikube}
	I0318 13:50:27.860777 1157263 buildroot.go:174] setting up certificates
	I0318 13:50:27.860790 1157263 provision.go:84] configureAuth start
	I0318 13:50:27.860810 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetMachineName
	I0318 13:50:27.861112 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetIP
	I0318 13:50:27.864215 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.864667 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:27.864703 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.864956 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:27.867381 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.867690 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:27.867730 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.867893 1157263 provision.go:143] copyHostCerts
	I0318 13:50:27.867963 1157263 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem, removing ...
	I0318 13:50:27.867977 1157263 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 13:50:27.868048 1157263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem (1078 bytes)
	I0318 13:50:27.868183 1157263 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem, removing ...
	I0318 13:50:27.868198 1157263 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 13:50:27.868231 1157263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem (1123 bytes)
	I0318 13:50:27.868307 1157263 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem, removing ...
	I0318 13:50:27.868318 1157263 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 13:50:27.868372 1157263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem (1679 bytes)
	I0318 13:50:27.868451 1157263 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem org=jenkins.embed-certs-173036 san=[127.0.0.1 192.168.50.191 embed-certs-173036 localhost minikube]
	I0318 13:50:28.001671 1157263 provision.go:177] copyRemoteCerts
	I0318 13:50:28.001742 1157263 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 13:50:28.001773 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:28.004389 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.004746 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:28.004777 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.005021 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:28.005214 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:28.005393 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:28.005558 1157263 sshutil.go:53] new ssh client: &{IP:192.168.50.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa Username:docker}
	I0318 13:50:28.095871 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0318 13:50:28.127356 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 13:50:28.157301 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 13:50:28.186185 1157263 provision.go:87] duration metric: took 325.374328ms to configureAuth
	I0318 13:50:28.186217 1157263 buildroot.go:189] setting minikube options for container-runtime
	I0318 13:50:28.186424 1157263 config.go:182] Loaded profile config "embed-certs-173036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:50:28.186529 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:28.189135 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.189532 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:28.189564 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.189719 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:28.189933 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:28.190127 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:28.190335 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:28.190492 1157263 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:28.190654 1157263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.191 22 <nil> <nil>}
	I0318 13:50:28.190668 1157263 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 13:50:28.473836 1157263 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 13:50:28.473875 1157263 machine.go:97] duration metric: took 1.000902962s to provisionDockerMachine
	I0318 13:50:28.473887 1157263 start.go:293] postStartSetup for "embed-certs-173036" (driver="kvm2")
	I0318 13:50:28.473898 1157263 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 13:50:28.473914 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:50:28.474270 1157263 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 13:50:28.474307 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:28.477165 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.477571 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:28.477619 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.477756 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:28.477966 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:28.478135 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:28.478296 1157263 sshutil.go:53] new ssh client: &{IP:192.168.50.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa Username:docker}
	I0318 13:50:28.568988 1157263 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 13:50:28.573759 1157263 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 13:50:28.573782 1157263 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/addons for local assets ...
	I0318 13:50:28.573839 1157263 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/files for local assets ...
	I0318 13:50:28.573909 1157263 filesync.go:149] local asset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> 11141362.pem in /etc/ssl/certs
	I0318 13:50:28.573989 1157263 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 13:50:28.584049 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:50:28.610999 1157263 start.go:296] duration metric: took 137.09711ms for postStartSetup
	I0318 13:50:28.611043 1157263 fix.go:56] duration metric: took 24.300980779s for fixHost
	I0318 13:50:28.611066 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:28.614123 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.614582 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:28.614628 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.614795 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:28.614999 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:28.615124 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:28.615255 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:28.615427 1157263 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:28.615617 1157263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.191 22 <nil> <nil>}
	I0318 13:50:28.615631 1157263 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 13:50:28.729856 1157263 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710769828.678644307
	
	I0318 13:50:28.729894 1157263 fix.go:216] guest clock: 1710769828.678644307
	I0318 13:50:28.729913 1157263 fix.go:229] Guest: 2024-03-18 13:50:28.678644307 +0000 UTC Remote: 2024-03-18 13:50:28.611048079 +0000 UTC m=+364.845703282 (delta=67.596228ms)
	I0318 13:50:28.729932 1157263 fix.go:200] guest clock delta is within tolerance: 67.596228ms
	I0318 13:50:28.729937 1157263 start.go:83] releasing machines lock for "embed-certs-173036", held for 24.419922158s
	I0318 13:50:28.729958 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:50:28.730241 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetIP
	I0318 13:50:28.732831 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.733196 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:28.733249 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.733406 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:50:28.733875 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:50:28.734066 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:50:28.734172 1157263 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 13:50:28.734248 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:28.734330 1157263 ssh_runner.go:195] Run: cat /version.json
	I0318 13:50:28.734376 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:28.737014 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.737200 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.737444 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:28.737470 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.737611 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:28.737694 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:28.737721 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.737918 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:28.737926 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:28.738117 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:28.738195 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:28.738292 1157263 sshutil.go:53] new ssh client: &{IP:192.168.50.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa Username:docker}
	I0318 13:50:28.738357 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:28.738466 1157263 sshutil.go:53] new ssh client: &{IP:192.168.50.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa Username:docker}
	I0318 13:50:26.708824 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:29.209974 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:28.818695 1157263 ssh_runner.go:195] Run: systemctl --version
	I0318 13:50:28.844173 1157263 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 13:50:28.995017 1157263 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 13:50:29.002150 1157263 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 13:50:29.002251 1157263 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 13:50:29.021165 1157263 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 13:50:29.021200 1157263 start.go:494] detecting cgroup driver to use...
	I0318 13:50:29.021286 1157263 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 13:50:29.039060 1157263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:50:29.053451 1157263 docker.go:217] disabling cri-docker service (if available) ...
	I0318 13:50:29.053521 1157263 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 13:50:29.069721 1157263 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 13:50:29.085285 1157263 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 13:50:29.201273 1157263 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 13:50:29.356314 1157263 docker.go:233] disabling docker service ...
	I0318 13:50:29.356406 1157263 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 13:50:29.374159 1157263 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 13:50:29.390280 1157263 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 13:50:29.542126 1157263 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 13:50:29.692068 1157263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 13:50:29.707760 1157263 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:50:29.735684 1157263 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 13:50:29.735753 1157263 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:50:29.751291 1157263 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 13:50:29.751365 1157263 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:50:29.763159 1157263 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:50:29.774837 1157263 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:50:29.787142 1157263 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 13:50:29.799773 1157263 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 13:50:29.810620 1157263 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 13:50:29.810691 1157263 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 13:50:29.826816 1157263 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 13:50:29.842059 1157263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:50:29.985531 1157263 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 13:50:30.147122 1157263 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 13:50:30.147191 1157263 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 13:50:30.152406 1157263 start.go:562] Will wait 60s for crictl version
	I0318 13:50:30.152468 1157263 ssh_runner.go:195] Run: which crictl
	I0318 13:50:30.157019 1157263 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 13:50:30.199810 1157263 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 13:50:30.199889 1157263 ssh_runner.go:195] Run: crio --version
	I0318 13:50:30.232028 1157263 ssh_runner.go:195] Run: crio --version
	I0318 13:50:30.270484 1157263 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 13:50:27.781584 1157887 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:29.795969 1157887 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:31.282868 1157887 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:31.282899 1157887 pod_ready.go:81] duration metric: took 12.009270978s for pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:31.282910 1157887 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-v59ks" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:31.290886 1157887 pod_ready.go:92] pod "kube-proxy-v59ks" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:31.290917 1157887 pod_ready.go:81] duration metric: took 7.99936ms for pod "kube-proxy-v59ks" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:31.290931 1157887 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:31.300197 1157887 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:31.300235 1157887 pod_ready.go:81] duration metric: took 9.294232ms for pod "kube-scheduler-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:31.300254 1157887 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:28.364069 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:28.863405 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:29.363996 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:29.863574 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:30.363749 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:30.863564 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:31.363250 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:31.863320 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:32.363894 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:32.864166 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:30.271939 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetIP
	I0318 13:50:30.275084 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:30.275682 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:30.275728 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:30.276045 1157263 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0318 13:50:30.282421 1157263 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:50:30.299013 1157263 kubeadm.go:877] updating cluster {Name:embed-certs-173036 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.28.4 ClusterName:embed-certs-173036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.191 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 13:50:30.299280 1157263 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 13:50:30.299364 1157263 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:50:30.349617 1157263 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0318 13:50:30.349720 1157263 ssh_runner.go:195] Run: which lz4
	I0318 13:50:30.354659 1157263 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 13:50:30.359861 1157263 ssh_runner.go:362] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 13:50:30.359903 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0318 13:50:32.362707 1157263 crio.go:444] duration metric: took 2.008087158s to copy over tarball
	I0318 13:50:32.362796 1157263 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 13:50:31.210766 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:33.709661 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:33.308081 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:35.309291 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:33.363425 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:33.864021 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:34.363963 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:34.864011 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:35.364122 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:35.863559 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:36.364154 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:36.863814 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:37.364232 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:37.863934 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:35.265803 1157263 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.902966349s)
	I0318 13:50:35.265827 1157263 crio.go:451] duration metric: took 2.903086385s to extract the tarball
	I0318 13:50:35.265835 1157263 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 13:50:35.313875 1157263 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:50:35.378361 1157263 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 13:50:35.378392 1157263 cache_images.go:84] Images are preloaded, skipping loading
	I0318 13:50:35.378408 1157263 kubeadm.go:928] updating node { 192.168.50.191 8443 v1.28.4 crio true true} ...
	I0318 13:50:35.378551 1157263 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-173036 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.191
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-173036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 13:50:35.378648 1157263 ssh_runner.go:195] Run: crio config
	I0318 13:50:35.443472 1157263 cni.go:84] Creating CNI manager for ""
	I0318 13:50:35.443501 1157263 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:50:35.443520 1157263 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 13:50:35.443551 1157263 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.191 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-173036 NodeName:embed-certs-173036 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.191"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.191 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 13:50:35.443730 1157263 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.191
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-173036"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.191
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.191"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 13:50:35.443809 1157263 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 13:50:35.455284 1157263 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 13:50:35.455352 1157263 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 13:50:35.465886 1157263 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0318 13:50:35.487345 1157263 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 13:50:35.507361 1157263 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0318 13:50:35.528055 1157263 ssh_runner.go:195] Run: grep 192.168.50.191	control-plane.minikube.internal$ /etc/hosts
	I0318 13:50:35.533287 1157263 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.191	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:50:35.548295 1157263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:50:35.684165 1157263 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:50:35.703884 1157263 certs.go:68] Setting up /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036 for IP: 192.168.50.191
	I0318 13:50:35.703910 1157263 certs.go:194] generating shared ca certs ...
	I0318 13:50:35.703927 1157263 certs.go:226] acquiring lock for ca certs: {Name:mka24dbce78b8549c3bcfe7842863cce2dcda207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:50:35.704117 1157263 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key
	I0318 13:50:35.704186 1157263 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key
	I0318 13:50:35.704200 1157263 certs.go:256] generating profile certs ...
	I0318 13:50:35.704292 1157263 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036/client.key
	I0318 13:50:35.704406 1157263 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036/apiserver.key.527b6b30
	I0318 13:50:35.704472 1157263 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036/proxy-client.key
	I0318 13:50:35.704637 1157263 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem (1338 bytes)
	W0318 13:50:35.704680 1157263 certs.go:480] ignoring /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136_empty.pem, impossibly tiny 0 bytes
	I0318 13:50:35.704694 1157263 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 13:50:35.704729 1157263 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem (1078 bytes)
	I0318 13:50:35.704763 1157263 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem (1123 bytes)
	I0318 13:50:35.704796 1157263 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem (1679 bytes)
	I0318 13:50:35.704857 1157263 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:50:35.705836 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 13:50:35.768912 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 13:50:35.830564 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 13:50:35.877813 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 13:50:35.916756 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0318 13:50:35.948397 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 13:50:35.980450 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 13:50:36.009626 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 13:50:36.040155 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 13:50:36.068885 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem --> /usr/share/ca-certificates/1114136.pem (1338 bytes)
	I0318 13:50:36.098638 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /usr/share/ca-certificates/11141362.pem (1708 bytes)
	I0318 13:50:36.128423 1157263 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 13:50:36.149584 1157263 ssh_runner.go:195] Run: openssl version
	I0318 13:50:36.156347 1157263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 13:50:36.169729 1157263 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:50:36.175367 1157263 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:50:36.175438 1157263 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:50:36.181995 1157263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 13:50:36.193987 1157263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1114136.pem && ln -fs /usr/share/ca-certificates/1114136.pem /etc/ssl/certs/1114136.pem"
	I0318 13:50:36.206444 1157263 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1114136.pem
	I0318 13:50:36.212355 1157263 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:26 /usr/share/ca-certificates/1114136.pem
	I0318 13:50:36.212442 1157263 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1114136.pem
	I0318 13:50:36.219042 1157263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1114136.pem /etc/ssl/certs/51391683.0"
	I0318 13:50:36.231882 1157263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11141362.pem && ln -fs /usr/share/ca-certificates/11141362.pem /etc/ssl/certs/11141362.pem"
	I0318 13:50:36.244590 1157263 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11141362.pem
	I0318 13:50:36.250443 1157263 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:26 /usr/share/ca-certificates/11141362.pem
	I0318 13:50:36.250511 1157263 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11141362.pem
	I0318 13:50:36.257713 1157263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11141362.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 13:50:36.271026 1157263 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:50:36.276902 1157263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 13:50:36.285465 1157263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 13:50:36.294274 1157263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 13:50:36.302415 1157263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 13:50:36.310867 1157263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 13:50:36.318931 1157263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 13:50:36.327627 1157263 kubeadm.go:391] StartCluster: {Name:embed-certs-173036 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28
.4 ClusterName:embed-certs-173036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.191 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:50:36.327781 1157263 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 13:50:36.327843 1157263 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:50:36.376644 1157263 cri.go:89] found id: ""
	I0318 13:50:36.376741 1157263 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 13:50:36.389506 1157263 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 13:50:36.389528 1157263 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 13:50:36.389533 1157263 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 13:50:36.389640 1157263 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 13:50:36.401386 1157263 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:50:36.402631 1157263 kubeconfig.go:125] found "embed-certs-173036" server: "https://192.168.50.191:8443"
	I0318 13:50:36.404833 1157263 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 13:50:36.416975 1157263 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.191
	I0318 13:50:36.417026 1157263 kubeadm.go:1154] stopping kube-system containers ...
	I0318 13:50:36.417041 1157263 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 13:50:36.417106 1157263 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:50:36.458072 1157263 cri.go:89] found id: ""
	I0318 13:50:36.458162 1157263 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 13:50:36.476557 1157263 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:50:36.487765 1157263 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:50:36.487791 1157263 kubeadm.go:156] found existing configuration files:
	
	I0318 13:50:36.487857 1157263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:50:36.498903 1157263 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:50:36.498982 1157263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:50:36.510205 1157263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:50:36.520423 1157263 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:50:36.520476 1157263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:50:36.531864 1157263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:50:36.542058 1157263 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:50:36.542131 1157263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:50:36.552807 1157263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:50:36.562840 1157263 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:50:36.562915 1157263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:50:36.573581 1157263 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:50:36.583760 1157263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:36.719884 1157263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:37.681007 1157263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:37.914386 1157263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:37.993967 1157263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:38.101144 1157263 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:50:38.101261 1157263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:38.602138 1157263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:35.711725 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:38.207993 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:37.807508 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:39.809153 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:38.363994 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:38.863278 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:39.363665 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:39.863948 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:40.364081 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:40.864124 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:41.363964 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:41.863593 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:42.363750 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:42.864002 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:39.102040 1157263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:39.212769 1157263 api_server.go:72] duration metric: took 1.111626123s to wait for apiserver process to appear ...
	I0318 13:50:39.212807 1157263 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:50:39.212840 1157263 api_server.go:253] Checking apiserver healthz at https://192.168.50.191:8443/healthz ...
	I0318 13:50:39.213446 1157263 api_server.go:269] stopped: https://192.168.50.191:8443/healthz: Get "https://192.168.50.191:8443/healthz": dial tcp 192.168.50.191:8443: connect: connection refused
	I0318 13:50:39.713482 1157263 api_server.go:253] Checking apiserver healthz at https://192.168.50.191:8443/healthz ...
	I0318 13:50:42.646306 1157263 api_server.go:279] https://192.168.50.191:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 13:50:42.646352 1157263 api_server.go:103] status: https://192.168.50.191:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 13:50:42.646370 1157263 api_server.go:253] Checking apiserver healthz at https://192.168.50.191:8443/healthz ...
	I0318 13:50:42.691920 1157263 api_server.go:279] https://192.168.50.191:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 13:50:42.691953 1157263 api_server.go:103] status: https://192.168.50.191:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 13:50:42.713082 1157263 api_server.go:253] Checking apiserver healthz at https://192.168.50.191:8443/healthz ...
	I0318 13:50:42.770065 1157263 api_server.go:279] https://192.168.50.191:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:50:42.770101 1157263 api_server.go:103] status: https://192.168.50.191:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:50:43.213524 1157263 api_server.go:253] Checking apiserver healthz at https://192.168.50.191:8443/healthz ...
	I0318 13:50:43.224669 1157263 api_server.go:279] https://192.168.50.191:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:50:43.224710 1157263 api_server.go:103] status: https://192.168.50.191:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:50:43.712987 1157263 api_server.go:253] Checking apiserver healthz at https://192.168.50.191:8443/healthz ...
	I0318 13:50:43.718490 1157263 api_server.go:279] https://192.168.50.191:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:50:43.718533 1157263 api_server.go:103] status: https://192.168.50.191:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:50:44.213026 1157263 api_server.go:253] Checking apiserver healthz at https://192.168.50.191:8443/healthz ...
	I0318 13:50:44.217876 1157263 api_server.go:279] https://192.168.50.191:8443/healthz returned 200:
	ok
	I0318 13:50:44.225562 1157263 api_server.go:141] control plane version: v1.28.4
	I0318 13:50:44.225588 1157263 api_server.go:131] duration metric: took 5.012774227s to wait for apiserver health ...
	I0318 13:50:44.225610 1157263 cni.go:84] Creating CNI manager for ""
	I0318 13:50:44.225618 1157263 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:50:44.227565 1157263 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 13:50:40.210029 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:42.210435 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:44.710674 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:41.811414 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:43.818645 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:46.308757 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:43.364189 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:43.863868 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:44.363454 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:44.863940 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:45.363913 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:45.863288 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:46.363884 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:46.863361 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:47.363383 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:47.864064 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:44.229055 1157263 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 13:50:44.260389 1157263 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 13:50:44.310001 1157263 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 13:50:44.327281 1157263 system_pods.go:59] 8 kube-system pods found
	I0318 13:50:44.327330 1157263 system_pods.go:61] "coredns-5dd5756b68-zsfvm" [1404c3fe-6538-4aaf-80f5-599275240731] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 13:50:44.327342 1157263 system_pods.go:61] "etcd-embed-certs-173036" [254a577c-bd3b-4645-9c92-1479b0c6d0c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 13:50:44.327354 1157263 system_pods.go:61] "kube-apiserver-embed-certs-173036" [5a738280-05ba-413e-a288-4c4d07ddbd7d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 13:50:44.327362 1157263 system_pods.go:61] "kube-controller-manager-embed-certs-173036" [f48cfb7f-1efe-4941-b328-2358c7a5cced] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 13:50:44.327369 1157263 system_pods.go:61] "kube-proxy-xqf68" [969de4e5-fc60-4d46-b336-49f22a9b6c38] Running
	I0318 13:50:44.327376 1157263 system_pods.go:61] "kube-scheduler-embed-certs-173036" [e0579c16-de3e-4915-9ed2-f69b53f6f884] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 13:50:44.327385 1157263 system_pods.go:61] "metrics-server-57f55c9bc5-5cv2z" [85649bfb-f91f-4bfe-9356-d540ac3d6a68] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:50:44.327392 1157263 system_pods.go:61] "storage-provisioner" [0c1ec131-0f6c-4e01-aaec-5011f1a4fe75] Running
	I0318 13:50:44.327410 1157263 system_pods.go:74] duration metric: took 17.376754ms to wait for pod list to return data ...
	I0318 13:50:44.327423 1157263 node_conditions.go:102] verifying NodePressure condition ...
	I0318 13:50:44.332965 1157263 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:50:44.332997 1157263 node_conditions.go:123] node cpu capacity is 2
	I0318 13:50:44.333008 1157263 node_conditions.go:105] duration metric: took 5.580934ms to run NodePressure ...
	I0318 13:50:44.333027 1157263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:44.573923 1157263 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 13:50:44.578504 1157263 kubeadm.go:733] kubelet initialised
	I0318 13:50:44.578526 1157263 kubeadm.go:734] duration metric: took 4.577181ms waiting for restarted kubelet to initialise ...
	I0318 13:50:44.578534 1157263 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:50:44.584361 1157263 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-zsfvm" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:44.591714 1157263 pod_ready.go:97] node "embed-certs-173036" hosting pod "coredns-5dd5756b68-zsfvm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-173036" has status "Ready":"False"
	I0318 13:50:44.591739 1157263 pod_ready.go:81] duration metric: took 7.35191ms for pod "coredns-5dd5756b68-zsfvm" in "kube-system" namespace to be "Ready" ...
	E0318 13:50:44.591746 1157263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-173036" hosting pod "coredns-5dd5756b68-zsfvm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-173036" has status "Ready":"False"
	I0318 13:50:44.591753 1157263 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:44.597618 1157263 pod_ready.go:97] node "embed-certs-173036" hosting pod "etcd-embed-certs-173036" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-173036" has status "Ready":"False"
	I0318 13:50:44.597641 1157263 pod_ready.go:81] duration metric: took 5.880276ms for pod "etcd-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	E0318 13:50:44.597649 1157263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-173036" hosting pod "etcd-embed-certs-173036" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-173036" has status "Ready":"False"
	I0318 13:50:44.597655 1157263 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:44.604124 1157263 pod_ready.go:97] node "embed-certs-173036" hosting pod "kube-apiserver-embed-certs-173036" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-173036" has status "Ready":"False"
	I0318 13:50:44.604148 1157263 pod_ready.go:81] duration metric: took 6.484251ms for pod "kube-apiserver-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	E0318 13:50:44.604157 1157263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-173036" hosting pod "kube-apiserver-embed-certs-173036" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-173036" has status "Ready":"False"
	I0318 13:50:44.604164 1157263 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:46.611326 1157263 pod_ready.go:102] pod "kube-controller-manager-embed-certs-173036" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:47.209538 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:49.708718 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:48.309157 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:50.808340 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:48.363218 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:48.864086 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:49.363457 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:49.863292 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:50.363308 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:50.863428 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:51.363583 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:51.863562 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:52.363995 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:52.863463 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:49.111834 1157263 pod_ready.go:102] pod "kube-controller-manager-embed-certs-173036" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:50.114329 1157263 pod_ready.go:92] pod "kube-controller-manager-embed-certs-173036" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:50.114356 1157263 pod_ready.go:81] duration metric: took 5.510175425s for pod "kube-controller-manager-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:50.114369 1157263 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xqf68" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:50.133169 1157263 pod_ready.go:92] pod "kube-proxy-xqf68" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:50.133196 1157263 pod_ready.go:81] duration metric: took 18.819059ms for pod "kube-proxy-xqf68" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:50.133208 1157263 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:52.144639 1157263 pod_ready.go:102] pod "kube-scheduler-embed-certs-173036" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:51.709823 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:54.207738 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:53.311033 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:55.311439 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:53.363919 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:53.863936 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:54.363671 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:54.863567 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:50:54.863709 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:50:54.911905 1157708 cri.go:89] found id: ""
	I0318 13:50:54.911942 1157708 logs.go:276] 0 containers: []
	W0318 13:50:54.911954 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:50:54.911962 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:50:54.912031 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:50:54.962141 1157708 cri.go:89] found id: ""
	I0318 13:50:54.962170 1157708 logs.go:276] 0 containers: []
	W0318 13:50:54.962182 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:50:54.962188 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:50:54.962269 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:50:55.001597 1157708 cri.go:89] found id: ""
	I0318 13:50:55.001639 1157708 logs.go:276] 0 containers: []
	W0318 13:50:55.001652 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:50:55.001660 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:50:55.001725 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:50:55.042660 1157708 cri.go:89] found id: ""
	I0318 13:50:55.042695 1157708 logs.go:276] 0 containers: []
	W0318 13:50:55.042708 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:50:55.042716 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:50:55.042775 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:50:55.082095 1157708 cri.go:89] found id: ""
	I0318 13:50:55.082128 1157708 logs.go:276] 0 containers: []
	W0318 13:50:55.082139 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:50:55.082146 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:50:55.082211 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:50:55.120938 1157708 cri.go:89] found id: ""
	I0318 13:50:55.120969 1157708 logs.go:276] 0 containers: []
	W0318 13:50:55.121000 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:50:55.121008 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:50:55.121081 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:50:55.159247 1157708 cri.go:89] found id: ""
	I0318 13:50:55.159280 1157708 logs.go:276] 0 containers: []
	W0318 13:50:55.159292 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:50:55.159300 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:50:55.159366 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:50:55.200130 1157708 cri.go:89] found id: ""
	I0318 13:50:55.200161 1157708 logs.go:276] 0 containers: []
	W0318 13:50:55.200170 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:50:55.200180 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:50:55.200193 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:50:55.254113 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:50:55.254154 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:50:55.268984 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:50:55.269027 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:50:55.402079 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:50:55.402106 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:50:55.402123 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:50:55.468627 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:50:55.468674 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
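	The block above (PID 1157708) is one pass of minikube's apiserver wait-and-diagnose loop for the v1.20.0 cluster being started here: it probes for a kube-apiserver process, asks the CRI runtime for each control-plane container (every query returns an empty ID list), then collects kubelet, dmesg, "describe nodes" and CRI-O logs, with the kubectl call failing because nothing is serving on localhost:8443. The same pass repeats below roughly every three seconds. For reference, a manual re-run of those probes from inside the node is sketched here; the individual commands are taken verbatim from the log, while invoking them via "minikube ssh" and wrapping the CRI queries in a shell loop are assumptions of the sketch, not part of the test run:

	#!/usr/bin/env bash
	# Sketch: repeat the probes minikube performs in the loop above (run inside the node, e.g. via `minikube ssh`).
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'    # is an apiserver process up at all?

	# CRI queries for each control-plane component; in the log every one of these returns no container IDs.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	  echo "== ${name} =="
	  sudo crictl ps -a --quiet --name="${name}"
	done

	# Log collection performed after the empty probes, as in the log above.
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo journalctl -u crio -n 400
	sudo crictl ps -a

	# The step that fails on every pass: the API server on localhost:8443 is not listening.
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig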
	I0318 13:50:54.143220 1157263 pod_ready.go:92] pod "kube-scheduler-embed-certs-173036" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:54.143247 1157263 pod_ready.go:81] duration metric: took 4.010031997s for pod "kube-scheduler-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:54.143258 1157263 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:56.151615 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:58.650293 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:56.208339 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:58.209144 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:57.810894 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:00.308972 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
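	The interleaved pod_ready lines come from three other cluster starts running in parallel (PIDs 1157263, 1157416, 1157887), each polling a metrics-server pod in kube-system that keeps reporting Ready=False throughout this stretch of the log. An equivalent manual check for one of them is sketched below; the embed-certs-173036 context name is inferred from the kube-scheduler pod name logged by the same PID, so treat it as an assumption:

	kubectl --context embed-certs-173036 -n kube-system get pod metrics-server-57f55c9bc5-5cv2z \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # prints "False" while the pod is unready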
	I0318 13:50:58.016860 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:58.031684 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:50:58.031747 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:50:58.073389 1157708 cri.go:89] found id: ""
	I0318 13:50:58.073415 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.073427 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:50:58.073434 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:50:58.073497 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:50:58.114439 1157708 cri.go:89] found id: ""
	I0318 13:50:58.114471 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.114483 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:50:58.114490 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:50:58.114553 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:50:58.165440 1157708 cri.go:89] found id: ""
	I0318 13:50:58.165466 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.165476 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:50:58.165484 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:50:58.165569 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:50:58.207083 1157708 cri.go:89] found id: ""
	I0318 13:50:58.207117 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.207129 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:50:58.207137 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:50:58.207227 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:50:58.252945 1157708 cri.go:89] found id: ""
	I0318 13:50:58.252973 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.252985 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:50:58.252993 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:50:58.253055 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:50:58.292437 1157708 cri.go:89] found id: ""
	I0318 13:50:58.292464 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.292474 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:50:58.292480 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:50:58.292530 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:50:58.335359 1157708 cri.go:89] found id: ""
	I0318 13:50:58.335403 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.335415 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:50:58.335423 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:50:58.335511 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:50:58.381434 1157708 cri.go:89] found id: ""
	I0318 13:50:58.381473 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.381484 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:50:58.381494 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:50:58.381511 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:50:58.432270 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:50:58.432319 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:50:58.447658 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:50:58.447686 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:50:58.523163 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:50:58.523186 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:50:58.523207 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:50:58.599544 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:50:58.599586 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:01.141653 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:01.156996 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:01.157070 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:01.192720 1157708 cri.go:89] found id: ""
	I0318 13:51:01.192762 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.192775 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:01.192785 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:01.192866 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:01.232678 1157708 cri.go:89] found id: ""
	I0318 13:51:01.232705 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.232716 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:01.232723 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:01.232795 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:01.270637 1157708 cri.go:89] found id: ""
	I0318 13:51:01.270666 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.270676 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:01.270684 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:01.270746 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:01.308891 1157708 cri.go:89] found id: ""
	I0318 13:51:01.308921 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.308931 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:01.308939 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:01.309003 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:01.349301 1157708 cri.go:89] found id: ""
	I0318 13:51:01.349334 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.349346 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:01.349354 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:01.349420 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:01.394010 1157708 cri.go:89] found id: ""
	I0318 13:51:01.394039 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.394047 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:01.394053 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:01.394103 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:01.432778 1157708 cri.go:89] found id: ""
	I0318 13:51:01.432804 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.432815 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:01.432823 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:01.432886 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:01.471974 1157708 cri.go:89] found id: ""
	I0318 13:51:01.472002 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.472011 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:01.472022 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:01.472040 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:01.524855 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:01.524893 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:01.540939 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:01.540967 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:01.618318 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:01.618350 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:01.618367 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:01.695717 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:01.695755 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:00.650906 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:02.651512 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:00.211620 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:02.708336 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:02.312320 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:04.808301 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:04.241781 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:04.256276 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:04.256373 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:04.297129 1157708 cri.go:89] found id: ""
	I0318 13:51:04.297158 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.297170 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:04.297179 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:04.297247 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:04.341743 1157708 cri.go:89] found id: ""
	I0318 13:51:04.341774 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.341786 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:04.341793 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:04.341858 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:04.384400 1157708 cri.go:89] found id: ""
	I0318 13:51:04.384434 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.384445 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:04.384453 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:04.384510 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:04.425459 1157708 cri.go:89] found id: ""
	I0318 13:51:04.425487 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.425500 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:04.425510 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:04.425563 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:04.463091 1157708 cri.go:89] found id: ""
	I0318 13:51:04.463125 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.463137 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:04.463145 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:04.463210 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:04.503023 1157708 cri.go:89] found id: ""
	I0318 13:51:04.503057 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.503069 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:04.503077 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:04.503141 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:04.542083 1157708 cri.go:89] found id: ""
	I0318 13:51:04.542116 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.542127 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:04.542136 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:04.542207 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:04.583097 1157708 cri.go:89] found id: ""
	I0318 13:51:04.583128 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.583137 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:04.583146 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:04.583161 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:04.650476 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:04.650518 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:04.706073 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:04.706111 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:04.723595 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:04.723628 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:04.800278 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:04.800301 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:04.800316 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:07.388144 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:07.403636 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:07.403711 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:07.443337 1157708 cri.go:89] found id: ""
	I0318 13:51:07.443365 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.443379 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:07.443386 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:07.443442 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:07.482417 1157708 cri.go:89] found id: ""
	I0318 13:51:07.482453 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.482462 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:07.482469 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:07.482521 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:07.518445 1157708 cri.go:89] found id: ""
	I0318 13:51:07.518474 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.518485 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:07.518493 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:07.518563 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:07.555628 1157708 cri.go:89] found id: ""
	I0318 13:51:07.555661 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.555673 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:07.555681 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:07.555760 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:07.593805 1157708 cri.go:89] found id: ""
	I0318 13:51:07.593842 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.593856 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:07.593873 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:07.593936 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:07.638206 1157708 cri.go:89] found id: ""
	I0318 13:51:07.638234 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.638242 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:07.638249 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:07.638313 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:07.679526 1157708 cri.go:89] found id: ""
	I0318 13:51:07.679561 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.679573 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:07.679581 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:07.679635 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:07.724468 1157708 cri.go:89] found id: ""
	I0318 13:51:07.724494 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.724504 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:07.724516 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:07.724533 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:07.766491 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:07.766522 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:07.823782 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:07.823833 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:07.839316 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:07.839342 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:07.924790 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:07.924821 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:07.924841 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:05.151629 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:07.651485 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:05.210455 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:07.709381 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:07.310000 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:09.808337 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:10.513618 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:10.528711 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:10.528790 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:10.571217 1157708 cri.go:89] found id: ""
	I0318 13:51:10.571254 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.571267 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:10.571275 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:10.571335 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:10.608096 1157708 cri.go:89] found id: ""
	I0318 13:51:10.608129 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.608140 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:10.608149 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:10.608217 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:10.649245 1157708 cri.go:89] found id: ""
	I0318 13:51:10.649274 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.649283 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:10.649290 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:10.649365 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:10.693462 1157708 cri.go:89] found id: ""
	I0318 13:51:10.693495 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.693506 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:10.693515 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:10.693589 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:10.740434 1157708 cri.go:89] found id: ""
	I0318 13:51:10.740464 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.740474 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:10.740480 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:10.740543 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:10.781062 1157708 cri.go:89] found id: ""
	I0318 13:51:10.781099 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.781108 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:10.781114 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:10.781167 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:10.828480 1157708 cri.go:89] found id: ""
	I0318 13:51:10.828513 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.828524 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:10.828532 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:10.828605 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:10.868508 1157708 cri.go:89] found id: ""
	I0318 13:51:10.868535 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.868543 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:10.868553 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:10.868565 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:10.923925 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:10.923961 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:10.939254 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:10.939283 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:11.031307 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:11.031334 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:11.031351 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:11.121563 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:11.121618 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:10.151278 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:12.650083 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:10.209877 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:12.709070 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:12.308084 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:14.309651 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:16.312985 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:13.681147 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:13.696705 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:13.696812 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:13.740904 1157708 cri.go:89] found id: ""
	I0318 13:51:13.740937 1157708 logs.go:276] 0 containers: []
	W0318 13:51:13.740949 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:13.740957 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:13.741038 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:13.779625 1157708 cri.go:89] found id: ""
	I0318 13:51:13.779659 1157708 logs.go:276] 0 containers: []
	W0318 13:51:13.779672 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:13.779681 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:13.779762 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:13.822183 1157708 cri.go:89] found id: ""
	I0318 13:51:13.822218 1157708 logs.go:276] 0 containers: []
	W0318 13:51:13.822231 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:13.822239 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:13.822302 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:13.873686 1157708 cri.go:89] found id: ""
	I0318 13:51:13.873728 1157708 logs.go:276] 0 containers: []
	W0318 13:51:13.873741 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:13.873749 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:13.873821 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:13.919772 1157708 cri.go:89] found id: ""
	I0318 13:51:13.919802 1157708 logs.go:276] 0 containers: []
	W0318 13:51:13.919811 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:13.919817 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:13.919874 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:13.958809 1157708 cri.go:89] found id: ""
	I0318 13:51:13.958837 1157708 logs.go:276] 0 containers: []
	W0318 13:51:13.958846 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:13.958852 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:13.958928 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:14.000537 1157708 cri.go:89] found id: ""
	I0318 13:51:14.000568 1157708 logs.go:276] 0 containers: []
	W0318 13:51:14.000580 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:14.000588 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:14.000638 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:14.041234 1157708 cri.go:89] found id: ""
	I0318 13:51:14.041265 1157708 logs.go:276] 0 containers: []
	W0318 13:51:14.041275 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:14.041285 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:14.041299 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:14.085435 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:14.085462 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:14.144336 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:14.144374 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:14.159972 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:14.160000 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:14.242027 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:14.242048 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:14.242061 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:16.821805 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:16.840202 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:16.840272 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:16.898088 1157708 cri.go:89] found id: ""
	I0318 13:51:16.898120 1157708 logs.go:276] 0 containers: []
	W0318 13:51:16.898129 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:16.898135 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:16.898203 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:16.953180 1157708 cri.go:89] found id: ""
	I0318 13:51:16.953209 1157708 logs.go:276] 0 containers: []
	W0318 13:51:16.953221 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:16.953229 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:16.953288 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:17.006995 1157708 cri.go:89] found id: ""
	I0318 13:51:17.007048 1157708 logs.go:276] 0 containers: []
	W0318 13:51:17.007062 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:17.007070 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:17.007136 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:17.049756 1157708 cri.go:89] found id: ""
	I0318 13:51:17.049798 1157708 logs.go:276] 0 containers: []
	W0318 13:51:17.049809 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:17.049817 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:17.049885 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:17.092026 1157708 cri.go:89] found id: ""
	I0318 13:51:17.092055 1157708 logs.go:276] 0 containers: []
	W0318 13:51:17.092066 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:17.092074 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:17.092144 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:17.137722 1157708 cri.go:89] found id: ""
	I0318 13:51:17.137756 1157708 logs.go:276] 0 containers: []
	W0318 13:51:17.137769 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:17.137778 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:17.137875 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:17.180778 1157708 cri.go:89] found id: ""
	I0318 13:51:17.180808 1157708 logs.go:276] 0 containers: []
	W0318 13:51:17.180816 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:17.180822 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:17.180885 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:17.227629 1157708 cri.go:89] found id: ""
	I0318 13:51:17.227664 1157708 logs.go:276] 0 containers: []
	W0318 13:51:17.227675 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:17.227688 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:17.227706 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:17.272559 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:17.272588 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:17.333953 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:17.333994 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:17.349765 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:17.349793 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:17.434436 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:17.434465 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:17.434483 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:14.650201 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:17.151069 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:15.208570 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:17.210168 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:19.707753 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:18.808252 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:21.309389 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:20.014314 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:20.031106 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:20.031172 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:20.067727 1157708 cri.go:89] found id: ""
	I0318 13:51:20.067753 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.067765 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:20.067773 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:20.067844 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:20.108455 1157708 cri.go:89] found id: ""
	I0318 13:51:20.108482 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.108491 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:20.108497 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:20.108563 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:20.152257 1157708 cri.go:89] found id: ""
	I0318 13:51:20.152285 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.152310 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:20.152317 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:20.152394 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:20.191480 1157708 cri.go:89] found id: ""
	I0318 13:51:20.191509 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.191520 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:20.191529 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:20.191599 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:20.235677 1157708 cri.go:89] found id: ""
	I0318 13:51:20.235705 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.235716 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:20.235723 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:20.235796 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:20.274794 1157708 cri.go:89] found id: ""
	I0318 13:51:20.274822 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.274833 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:20.274842 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:20.274907 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:20.321987 1157708 cri.go:89] found id: ""
	I0318 13:51:20.322019 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.322031 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:20.322040 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:20.322097 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:20.361292 1157708 cri.go:89] found id: ""
	I0318 13:51:20.361319 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.361328 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:20.361338 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:20.361360 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:20.434481 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:20.434509 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:20.434527 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:20.518203 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:20.518244 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:20.560241 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:20.560271 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:20.615489 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:20.615526 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:19.151244 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:21.151320 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:23.651849 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:21.708423 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:24.207976 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:23.310491 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:25.808443 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:23.132509 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:23.146447 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:23.146559 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:23.189576 1157708 cri.go:89] found id: ""
	I0318 13:51:23.189613 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.189625 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:23.189634 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:23.189688 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:23.229700 1157708 cri.go:89] found id: ""
	I0318 13:51:23.229731 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.229740 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:23.229747 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:23.229812 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:23.272713 1157708 cri.go:89] found id: ""
	I0318 13:51:23.272747 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.272759 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:23.272768 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:23.272834 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:23.313988 1157708 cri.go:89] found id: ""
	I0318 13:51:23.314014 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.314022 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:23.314028 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:23.314087 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:23.360195 1157708 cri.go:89] found id: ""
	I0318 13:51:23.360230 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.360243 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:23.360251 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:23.360321 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:23.400657 1157708 cri.go:89] found id: ""
	I0318 13:51:23.400685 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.400694 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:23.400707 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:23.400760 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:23.442841 1157708 cri.go:89] found id: ""
	I0318 13:51:23.442873 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.442893 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:23.442900 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:23.442970 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:23.483467 1157708 cri.go:89] found id: ""
	I0318 13:51:23.483504 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.483516 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:23.483528 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:23.483545 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:23.538581 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:23.538616 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:23.555392 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:23.555421 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:23.634919 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:23.634945 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:23.634970 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:23.718098 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:23.718144 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:26.270369 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:26.287165 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:26.287232 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:26.331773 1157708 cri.go:89] found id: ""
	I0318 13:51:26.331807 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.331832 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:26.331850 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:26.331923 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:26.372067 1157708 cri.go:89] found id: ""
	I0318 13:51:26.372095 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.372102 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:26.372109 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:26.372182 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:26.411883 1157708 cri.go:89] found id: ""
	I0318 13:51:26.411910 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.411919 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:26.411924 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:26.411980 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:26.449087 1157708 cri.go:89] found id: ""
	I0318 13:51:26.449122 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.449131 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:26.449137 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:26.449188 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:26.492126 1157708 cri.go:89] found id: ""
	I0318 13:51:26.492162 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.492174 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:26.492182 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:26.492251 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:26.529621 1157708 cri.go:89] found id: ""
	I0318 13:51:26.529656 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.529668 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:26.529677 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:26.529764 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:26.568853 1157708 cri.go:89] found id: ""
	I0318 13:51:26.568888 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.568899 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:26.568907 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:26.568979 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:26.607882 1157708 cri.go:89] found id: ""
	I0318 13:51:26.607917 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.607929 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:26.607942 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:26.607959 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:26.648736 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:26.648768 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:26.704641 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:26.704684 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:26.720681 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:26.720715 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:26.799577 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:26.799608 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:26.799627 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:26.152083 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:28.651445 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:26.208160 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:28.708468 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:28.309859 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:30.806690 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:29.389391 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:29.404122 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:29.404195 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:29.446761 1157708 cri.go:89] found id: ""
	I0318 13:51:29.446787 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.446796 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:29.446803 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:29.446857 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:29.483974 1157708 cri.go:89] found id: ""
	I0318 13:51:29.484007 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.484020 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:29.484028 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:29.484099 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:29.521894 1157708 cri.go:89] found id: ""
	I0318 13:51:29.521922 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.521931 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:29.521937 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:29.521993 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:29.562918 1157708 cri.go:89] found id: ""
	I0318 13:51:29.562948 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.562957 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:29.562963 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:29.563017 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:29.600372 1157708 cri.go:89] found id: ""
	I0318 13:51:29.600412 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.600424 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:29.600432 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:29.600500 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:29.638902 1157708 cri.go:89] found id: ""
	I0318 13:51:29.638933 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.638945 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:29.638953 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:29.639019 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:29.679041 1157708 cri.go:89] found id: ""
	I0318 13:51:29.679071 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.679079 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:29.679085 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:29.679142 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:29.719168 1157708 cri.go:89] found id: ""
	I0318 13:51:29.719201 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.719213 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:29.719224 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:29.719244 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:29.764050 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:29.764077 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:29.822136 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:29.822174 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:29.839485 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:29.839515 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:29.914984 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:29.915006 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:29.915023 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:32.497388 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:32.512151 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:32.512215 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:32.549566 1157708 cri.go:89] found id: ""
	I0318 13:51:32.549602 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.549614 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:32.549623 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:32.549693 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:32.588516 1157708 cri.go:89] found id: ""
	I0318 13:51:32.588546 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.588555 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:32.588562 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:32.588615 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:32.628425 1157708 cri.go:89] found id: ""
	I0318 13:51:32.628453 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.628462 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:32.628470 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:32.628546 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:32.670851 1157708 cri.go:89] found id: ""
	I0318 13:51:32.670874 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.670888 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:32.670895 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:32.670944 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:32.709614 1157708 cri.go:89] found id: ""
	I0318 13:51:32.709642 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.709656 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:32.709666 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:32.709738 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:32.749774 1157708 cri.go:89] found id: ""
	I0318 13:51:32.749808 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.749819 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:32.749828 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:32.749896 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:32.789502 1157708 cri.go:89] found id: ""
	I0318 13:51:32.789525 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.789534 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:32.789540 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:32.789589 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:32.834926 1157708 cri.go:89] found id: ""
	I0318 13:51:32.834948 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.834956 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:32.834965 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:32.834980 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:32.887365 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:32.887404 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:32.903584 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:32.903610 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:32.978924 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:32.978958 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:32.978988 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:31.151276 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:33.651395 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:30.709136 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:32.709549 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:32.808076 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:35.308827 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:33.055386 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:33.055424 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:35.603881 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:35.618083 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:35.618167 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:35.659760 1157708 cri.go:89] found id: ""
	I0318 13:51:35.659802 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.659814 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:35.659820 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:35.659881 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:35.703521 1157708 cri.go:89] found id: ""
	I0318 13:51:35.703570 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.703582 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:35.703589 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:35.703651 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:35.744411 1157708 cri.go:89] found id: ""
	I0318 13:51:35.744444 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.744455 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:35.744463 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:35.744548 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:35.783704 1157708 cri.go:89] found id: ""
	I0318 13:51:35.783735 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.783746 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:35.783754 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:35.783819 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:35.824000 1157708 cri.go:89] found id: ""
	I0318 13:51:35.824031 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.824042 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:35.824049 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:35.824117 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:35.860260 1157708 cri.go:89] found id: ""
	I0318 13:51:35.860289 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.860299 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:35.860308 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:35.860388 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:35.895154 1157708 cri.go:89] found id: ""
	I0318 13:51:35.895189 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.895201 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:35.895209 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:35.895276 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:35.936916 1157708 cri.go:89] found id: ""
	I0318 13:51:35.936942 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.936951 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:35.936961 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:35.936977 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:35.951715 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:35.951745 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:36.027431 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:36.027457 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:36.027474 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:36.113339 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:36.113386 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:36.160132 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:36.160170 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:36.151331 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:38.650891 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:35.208500 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:37.209692 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:39.709776 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:37.807423 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:39.809226 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:38.711710 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:38.726104 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:38.726162 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:38.763251 1157708 cri.go:89] found id: ""
	I0318 13:51:38.763281 1157708 logs.go:276] 0 containers: []
	W0318 13:51:38.763291 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:38.763300 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:38.763364 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:38.802521 1157708 cri.go:89] found id: ""
	I0318 13:51:38.802548 1157708 logs.go:276] 0 containers: []
	W0318 13:51:38.802556 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:38.802562 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:38.802616 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:38.843778 1157708 cri.go:89] found id: ""
	I0318 13:51:38.843817 1157708 logs.go:276] 0 containers: []
	W0318 13:51:38.843831 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:38.843839 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:38.843909 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:38.884966 1157708 cri.go:89] found id: ""
	I0318 13:51:38.885003 1157708 logs.go:276] 0 containers: []
	W0318 13:51:38.885015 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:38.885024 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:38.885090 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:38.925653 1157708 cri.go:89] found id: ""
	I0318 13:51:38.925681 1157708 logs.go:276] 0 containers: []
	W0318 13:51:38.925690 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:38.925696 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:38.925757 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:38.964126 1157708 cri.go:89] found id: ""
	I0318 13:51:38.964156 1157708 logs.go:276] 0 containers: []
	W0318 13:51:38.964169 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:38.964177 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:38.964228 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:39.004864 1157708 cri.go:89] found id: ""
	I0318 13:51:39.004898 1157708 logs.go:276] 0 containers: []
	W0318 13:51:39.004910 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:39.004919 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:39.004991 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:39.041555 1157708 cri.go:89] found id: ""
	I0318 13:51:39.041588 1157708 logs.go:276] 0 containers: []
	W0318 13:51:39.041600 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:39.041611 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:39.041626 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:39.092984 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:39.093019 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:39.110492 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:39.110526 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:39.186785 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:39.186848 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:39.186872 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:39.272847 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:39.272891 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:41.829404 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:41.843407 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:41.843479 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:41.883129 1157708 cri.go:89] found id: ""
	I0318 13:51:41.883164 1157708 logs.go:276] 0 containers: []
	W0318 13:51:41.883175 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:41.883184 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:41.883246 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:41.924083 1157708 cri.go:89] found id: ""
	I0318 13:51:41.924123 1157708 logs.go:276] 0 containers: []
	W0318 13:51:41.924136 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:41.924144 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:41.924209 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:41.963029 1157708 cri.go:89] found id: ""
	I0318 13:51:41.963058 1157708 logs.go:276] 0 containers: []
	W0318 13:51:41.963069 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:41.963084 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:41.963155 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:42.003393 1157708 cri.go:89] found id: ""
	I0318 13:51:42.003430 1157708 logs.go:276] 0 containers: []
	W0318 13:51:42.003442 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:42.003450 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:42.003511 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:42.041938 1157708 cri.go:89] found id: ""
	I0318 13:51:42.041968 1157708 logs.go:276] 0 containers: []
	W0318 13:51:42.041977 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:42.041983 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:42.042044 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:42.079685 1157708 cri.go:89] found id: ""
	I0318 13:51:42.079718 1157708 logs.go:276] 0 containers: []
	W0318 13:51:42.079731 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:42.079740 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:42.079805 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:42.118112 1157708 cri.go:89] found id: ""
	I0318 13:51:42.118144 1157708 logs.go:276] 0 containers: []
	W0318 13:51:42.118156 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:42.118164 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:42.118230 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:42.157287 1157708 cri.go:89] found id: ""
	I0318 13:51:42.157319 1157708 logs.go:276] 0 containers: []
	W0318 13:51:42.157331 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:42.157343 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:42.157360 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:42.213006 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:42.213038 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:42.228452 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:42.228481 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:42.302523 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:42.302545 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:42.302558 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:42.387994 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:42.388062 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:40.651272 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:43.151009 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:42.208825 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:44.211676 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:42.310765 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:44.313778 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:44.934501 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:44.949163 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:44.949245 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:44.991885 1157708 cri.go:89] found id: ""
	I0318 13:51:44.991914 1157708 logs.go:276] 0 containers: []
	W0318 13:51:44.991924 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:44.991931 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:44.992008 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:45.029868 1157708 cri.go:89] found id: ""
	I0318 13:51:45.029904 1157708 logs.go:276] 0 containers: []
	W0318 13:51:45.029915 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:45.029922 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:45.030017 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:45.067755 1157708 cri.go:89] found id: ""
	I0318 13:51:45.067785 1157708 logs.go:276] 0 containers: []
	W0318 13:51:45.067794 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:45.067803 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:45.067857 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:45.106296 1157708 cri.go:89] found id: ""
	I0318 13:51:45.106323 1157708 logs.go:276] 0 containers: []
	W0318 13:51:45.106333 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:45.106339 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:45.106405 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:45.145746 1157708 cri.go:89] found id: ""
	I0318 13:51:45.145784 1157708 logs.go:276] 0 containers: []
	W0318 13:51:45.145797 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:45.145805 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:45.145868 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:45.191960 1157708 cri.go:89] found id: ""
	I0318 13:51:45.191998 1157708 logs.go:276] 0 containers: []
	W0318 13:51:45.192010 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:45.192019 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:45.192089 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:45.231436 1157708 cri.go:89] found id: ""
	I0318 13:51:45.231470 1157708 logs.go:276] 0 containers: []
	W0318 13:51:45.231483 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:45.231491 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:45.231559 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:45.274521 1157708 cri.go:89] found id: ""
	I0318 13:51:45.274554 1157708 logs.go:276] 0 containers: []
	W0318 13:51:45.274565 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:45.274577 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:45.274595 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:45.338539 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:45.338580 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:45.353917 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:45.353947 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:45.447734 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:45.447755 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:45.447768 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:45.530098 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:45.530140 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:45.653161 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:48.150841 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:46.708808 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:49.209076 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:46.808315 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:49.311406 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:48.077992 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:48.092203 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:48.092273 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:48.133136 1157708 cri.go:89] found id: ""
	I0318 13:51:48.133172 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.133183 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:48.133191 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:48.133259 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:48.177727 1157708 cri.go:89] found id: ""
	I0318 13:51:48.177756 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.177768 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:48.177775 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:48.177843 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:48.217574 1157708 cri.go:89] found id: ""
	I0318 13:51:48.217600 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.217608 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:48.217614 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:48.217676 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:48.258900 1157708 cri.go:89] found id: ""
	I0318 13:51:48.258933 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.258947 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:48.258955 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:48.259046 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:48.299527 1157708 cri.go:89] found id: ""
	I0318 13:51:48.299562 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.299573 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:48.299581 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:48.299650 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:48.339692 1157708 cri.go:89] found id: ""
	I0318 13:51:48.339723 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.339732 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:48.339740 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:48.339791 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:48.378737 1157708 cri.go:89] found id: ""
	I0318 13:51:48.378764 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.378773 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:48.378779 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:48.378841 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:48.414593 1157708 cri.go:89] found id: ""
	I0318 13:51:48.414621 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.414629 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:48.414639 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:48.414654 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:48.430232 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:48.430264 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:48.513313 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:48.513335 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:48.513353 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:48.594681 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:48.594721 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:48.638681 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:48.638720 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:51.189510 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:51.204296 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:51.204383 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:51.248285 1157708 cri.go:89] found id: ""
	I0318 13:51:51.248311 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.248331 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:51.248340 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:51.248414 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:51.289022 1157708 cri.go:89] found id: ""
	I0318 13:51:51.289055 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.289068 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:51.289077 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:51.289144 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:51.329367 1157708 cri.go:89] found id: ""
	I0318 13:51:51.329405 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.329414 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:51.329420 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:51.329477 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:51.370909 1157708 cri.go:89] found id: ""
	I0318 13:51:51.370948 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.370960 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:51.370970 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:51.371043 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:51.419447 1157708 cri.go:89] found id: ""
	I0318 13:51:51.419486 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.419498 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:51.419506 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:51.419573 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:51.466302 1157708 cri.go:89] found id: ""
	I0318 13:51:51.466336 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.466348 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:51.466356 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:51.466441 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:51.505593 1157708 cri.go:89] found id: ""
	I0318 13:51:51.505631 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.505644 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:51.505652 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:51.505724 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:51.543815 1157708 cri.go:89] found id: ""
	I0318 13:51:51.543843 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.543852 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:51.543863 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:51.543885 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:51.596271 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:51.596305 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:51.612441 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:51.612477 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:51.690591 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:51.690614 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:51.690631 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:51.771781 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:51.771821 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:50.650088 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:52.650307 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:51.710583 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:54.208629 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:51.808743 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:54.309915 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:54.319626 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:54.334041 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:54.334113 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:54.372090 1157708 cri.go:89] found id: ""
	I0318 13:51:54.372120 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.372132 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:54.372139 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:54.372196 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:54.412513 1157708 cri.go:89] found id: ""
	I0318 13:51:54.412567 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.412580 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:54.412588 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:54.412662 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:54.453143 1157708 cri.go:89] found id: ""
	I0318 13:51:54.453176 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.453188 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:54.453196 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:54.453262 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:54.497908 1157708 cri.go:89] found id: ""
	I0318 13:51:54.497940 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.497949 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:54.497957 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:54.498025 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:54.539044 1157708 cri.go:89] found id: ""
	I0318 13:51:54.539072 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.539081 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:54.539086 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:54.539151 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:54.578916 1157708 cri.go:89] found id: ""
	I0318 13:51:54.578944 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.578951 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:54.578958 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:54.579027 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:54.617339 1157708 cri.go:89] found id: ""
	I0318 13:51:54.617366 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.617375 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:54.617380 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:54.617436 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:54.661288 1157708 cri.go:89] found id: ""
	I0318 13:51:54.661309 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.661318 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:54.661328 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:54.661344 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:54.740710 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:54.740751 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:54.789136 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:54.789176 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:54.844585 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:54.844627 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:54.860304 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:54.860351 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:54.945305 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:57.445800 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:57.459294 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:57.459368 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:57.497411 1157708 cri.go:89] found id: ""
	I0318 13:51:57.497441 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.497449 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:57.497456 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:57.497521 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:57.535629 1157708 cri.go:89] found id: ""
	I0318 13:51:57.535663 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.535675 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:57.535684 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:57.535749 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:57.572980 1157708 cri.go:89] found id: ""
	I0318 13:51:57.573008 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.573017 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:57.573023 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:57.573071 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:57.622949 1157708 cri.go:89] found id: ""
	I0318 13:51:57.622984 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.622997 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:57.623005 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:57.623070 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:57.659877 1157708 cri.go:89] found id: ""
	I0318 13:51:57.659910 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.659921 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:57.659928 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:57.659991 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:57.705399 1157708 cri.go:89] found id: ""
	I0318 13:51:57.705481 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.705495 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:57.705504 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:57.705566 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:57.748035 1157708 cri.go:89] found id: ""
	I0318 13:51:57.748062 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.748073 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:57.748084 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:57.748144 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:57.801942 1157708 cri.go:89] found id: ""
	I0318 13:51:57.801976 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.801987 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:57.801999 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:57.802017 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:57.900157 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:57.900204 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:57.946179 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:57.946219 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:54.651363 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:57.151268 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:56.208925 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:58.708089 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:56.807605 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:58.808479 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:01.307740 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:58.000369 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:58.000412 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:58.016179 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:58.016211 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:58.101766 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:00.602151 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:00.617466 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:00.617531 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:00.661294 1157708 cri.go:89] found id: ""
	I0318 13:52:00.661328 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.661336 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:00.661342 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:00.661400 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:00.706227 1157708 cri.go:89] found id: ""
	I0318 13:52:00.706257 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.706267 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:00.706275 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:00.706342 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:00.746482 1157708 cri.go:89] found id: ""
	I0318 13:52:00.746515 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.746528 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:00.746536 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:00.746600 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:00.789242 1157708 cri.go:89] found id: ""
	I0318 13:52:00.789272 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.789281 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:00.789287 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:00.789348 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:00.832463 1157708 cri.go:89] found id: ""
	I0318 13:52:00.832503 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.832514 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:00.832522 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:00.832581 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:00.869790 1157708 cri.go:89] found id: ""
	I0318 13:52:00.869819 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.869830 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:00.869839 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:00.869904 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:00.909656 1157708 cri.go:89] found id: ""
	I0318 13:52:00.909685 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.909693 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:00.909700 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:00.909754 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:00.953818 1157708 cri.go:89] found id: ""
	I0318 13:52:00.953856 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.953868 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:00.953882 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:00.953898 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:01.032822 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:01.032848 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:01.032865 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:01.111701 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:01.111747 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:01.168270 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:01.168300 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:01.220376 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:01.220408 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:59.650359 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:01.650627 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:03.651830 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:00.709561 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:03.207829 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:03.808915 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:06.307915 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:03.737354 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:03.756282 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:03.756382 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:03.804716 1157708 cri.go:89] found id: ""
	I0318 13:52:03.804757 1157708 logs.go:276] 0 containers: []
	W0318 13:52:03.804768 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:03.804777 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:03.804838 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:03.864559 1157708 cri.go:89] found id: ""
	I0318 13:52:03.864596 1157708 logs.go:276] 0 containers: []
	W0318 13:52:03.864609 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:03.864617 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:03.864687 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:03.918397 1157708 cri.go:89] found id: ""
	I0318 13:52:03.918425 1157708 logs.go:276] 0 containers: []
	W0318 13:52:03.918433 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:03.918439 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:03.918504 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:03.961729 1157708 cri.go:89] found id: ""
	I0318 13:52:03.961762 1157708 logs.go:276] 0 containers: []
	W0318 13:52:03.961773 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:03.961780 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:03.961856 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:04.006261 1157708 cri.go:89] found id: ""
	I0318 13:52:04.006299 1157708 logs.go:276] 0 containers: []
	W0318 13:52:04.006311 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:04.006319 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:04.006404 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:04.050284 1157708 cri.go:89] found id: ""
	I0318 13:52:04.050313 1157708 logs.go:276] 0 containers: []
	W0318 13:52:04.050321 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:04.050327 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:04.050384 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:04.093789 1157708 cri.go:89] found id: ""
	I0318 13:52:04.093827 1157708 logs.go:276] 0 containers: []
	W0318 13:52:04.093839 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:04.093847 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:04.093916 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:04.135047 1157708 cri.go:89] found id: ""
	I0318 13:52:04.135091 1157708 logs.go:276] 0 containers: []
	W0318 13:52:04.135110 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:04.135124 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:04.135142 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:04.192899 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:04.192937 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:04.209080 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:04.209130 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:04.286388 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:04.286413 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:04.286428 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:04.371836 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:04.371877 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:06.923039 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:06.938743 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:06.938826 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:06.984600 1157708 cri.go:89] found id: ""
	I0318 13:52:06.984634 1157708 logs.go:276] 0 containers: []
	W0318 13:52:06.984646 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:06.984655 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:06.984721 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:07.023849 1157708 cri.go:89] found id: ""
	I0318 13:52:07.023891 1157708 logs.go:276] 0 containers: []
	W0318 13:52:07.023914 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:07.023922 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:07.023984 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:07.071972 1157708 cri.go:89] found id: ""
	I0318 13:52:07.072002 1157708 logs.go:276] 0 containers: []
	W0318 13:52:07.072015 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:07.072022 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:07.072087 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:07.109070 1157708 cri.go:89] found id: ""
	I0318 13:52:07.109105 1157708 logs.go:276] 0 containers: []
	W0318 13:52:07.109118 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:07.109126 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:07.109183 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:07.149879 1157708 cri.go:89] found id: ""
	I0318 13:52:07.149910 1157708 logs.go:276] 0 containers: []
	W0318 13:52:07.149918 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:07.149925 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:07.149990 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:07.195946 1157708 cri.go:89] found id: ""
	I0318 13:52:07.195976 1157708 logs.go:276] 0 containers: []
	W0318 13:52:07.195987 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:07.195995 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:07.196062 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:07.238126 1157708 cri.go:89] found id: ""
	I0318 13:52:07.238152 1157708 logs.go:276] 0 containers: []
	W0318 13:52:07.238162 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:07.238168 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:07.238233 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:07.278218 1157708 cri.go:89] found id: ""
	I0318 13:52:07.278255 1157708 logs.go:276] 0 containers: []
	W0318 13:52:07.278268 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:07.278282 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:07.278300 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:07.294926 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:07.294955 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:07.383431 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:07.383455 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:07.383468 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:07.467306 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:07.467348 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:07.515996 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:07.516028 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:06.151546 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:08.162392 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:05.208765 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:07.210243 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:09.708076 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:08.309045 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:10.807773 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:10.071945 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:10.088587 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:10.088654 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:10.130528 1157708 cri.go:89] found id: ""
	I0318 13:52:10.130566 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.130579 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:10.130588 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:10.130663 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:10.173113 1157708 cri.go:89] found id: ""
	I0318 13:52:10.173150 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.173168 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:10.173178 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:10.173243 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:10.218941 1157708 cri.go:89] found id: ""
	I0318 13:52:10.218976 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.218987 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:10.218996 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:10.219068 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:10.262331 1157708 cri.go:89] found id: ""
	I0318 13:52:10.262368 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.262381 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:10.262389 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:10.262460 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:10.303329 1157708 cri.go:89] found id: ""
	I0318 13:52:10.303363 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.303378 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:10.303386 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:10.303457 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:10.344458 1157708 cri.go:89] found id: ""
	I0318 13:52:10.344486 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.344497 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:10.344505 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:10.344567 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:10.386753 1157708 cri.go:89] found id: ""
	I0318 13:52:10.386786 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.386797 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:10.386806 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:10.386876 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:10.425922 1157708 cri.go:89] found id: ""
	I0318 13:52:10.425954 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.425965 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:10.425978 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:10.426000 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:10.441134 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:10.441168 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:10.514865 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:10.514899 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:10.514916 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:10.592061 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:10.592105 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:10.642900 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:10.642935 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:10.651432 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:13.150537 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:12.208498 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:14.209684 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:12.808250 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:15.308639 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:13.199176 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:13.215155 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:13.215232 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:13.256107 1157708 cri.go:89] found id: ""
	I0318 13:52:13.256139 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.256151 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:13.256160 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:13.256231 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:13.296562 1157708 cri.go:89] found id: ""
	I0318 13:52:13.296597 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.296608 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:13.296615 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:13.296667 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:13.336633 1157708 cri.go:89] found id: ""
	I0318 13:52:13.336662 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.336672 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:13.336678 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:13.336737 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:13.382597 1157708 cri.go:89] found id: ""
	I0318 13:52:13.382639 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.382654 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:13.382663 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:13.382733 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:13.430257 1157708 cri.go:89] found id: ""
	I0318 13:52:13.430292 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.430304 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:13.430312 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:13.430373 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:13.466854 1157708 cri.go:89] found id: ""
	I0318 13:52:13.466881 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.466889 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:13.466896 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:13.466945 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:13.510297 1157708 cri.go:89] found id: ""
	I0318 13:52:13.510333 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.510344 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:13.510352 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:13.510420 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:13.551476 1157708 cri.go:89] found id: ""
	I0318 13:52:13.551508 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.551517 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:13.551528 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:13.551542 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:13.634561 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:13.634585 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:13.634598 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:13.720088 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:13.720129 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:13.760621 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:13.760659 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:13.817311 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:13.817350 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:16.334094 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:16.349779 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:16.349866 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:16.394131 1157708 cri.go:89] found id: ""
	I0318 13:52:16.394157 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.394167 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:16.394175 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:16.394239 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:16.438185 1157708 cri.go:89] found id: ""
	I0318 13:52:16.438232 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.438245 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:16.438264 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:16.438335 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:16.476872 1157708 cri.go:89] found id: ""
	I0318 13:52:16.476920 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.476932 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:16.476939 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:16.477007 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:16.518226 1157708 cri.go:89] found id: ""
	I0318 13:52:16.518253 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.518262 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:16.518269 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:16.518327 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:16.559119 1157708 cri.go:89] found id: ""
	I0318 13:52:16.559160 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.559174 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:16.559182 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:16.559260 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:16.600050 1157708 cri.go:89] found id: ""
	I0318 13:52:16.600079 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.600088 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:16.600094 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:16.600160 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:16.640621 1157708 cri.go:89] found id: ""
	I0318 13:52:16.640649 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.640660 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:16.640668 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:16.640733 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:16.680541 1157708 cri.go:89] found id: ""
	I0318 13:52:16.680571 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.680580 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:16.680590 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:16.680602 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:16.766378 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:16.766415 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:16.811846 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:16.811883 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:16.871940 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:16.871981 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:16.887494 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:16.887521 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:16.961924 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:15.650599 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:17.650902 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:16.710336 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:19.207426 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:17.807338 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:19.809418 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:19.462316 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:19.478819 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:19.478885 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:19.523280 1157708 cri.go:89] found id: ""
	I0318 13:52:19.523314 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.523334 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:19.523342 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:19.523417 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:19.560675 1157708 cri.go:89] found id: ""
	I0318 13:52:19.560708 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.560717 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:19.560725 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:19.560790 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:19.598739 1157708 cri.go:89] found id: ""
	I0318 13:52:19.598766 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.598773 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:19.598781 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:19.598846 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:19.639928 1157708 cri.go:89] found id: ""
	I0318 13:52:19.639960 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.639969 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:19.639975 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:19.640030 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:19.686084 1157708 cri.go:89] found id: ""
	I0318 13:52:19.686134 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.686153 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:19.686160 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:19.686231 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:19.725449 1157708 cri.go:89] found id: ""
	I0318 13:52:19.725481 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.725491 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:19.725497 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:19.725559 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:19.763855 1157708 cri.go:89] found id: ""
	I0318 13:52:19.763886 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.763897 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:19.763905 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:19.763976 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:19.805783 1157708 cri.go:89] found id: ""
	I0318 13:52:19.805813 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.805824 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:19.805836 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:19.805852 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:19.883873 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:19.883914 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:19.926368 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:19.926406 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:19.981137 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:19.981181 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:19.996242 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:19.996269 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:20.077880 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:22.578045 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:22.594170 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:22.594247 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:22.637241 1157708 cri.go:89] found id: ""
	I0318 13:52:22.637276 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.637289 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:22.637298 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:22.637363 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:22.679877 1157708 cri.go:89] found id: ""
	I0318 13:52:22.679904 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.679912 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:22.679918 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:22.679981 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:22.721865 1157708 cri.go:89] found id: ""
	I0318 13:52:22.721890 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.721903 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:22.721912 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:22.721982 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:22.763208 1157708 cri.go:89] found id: ""
	I0318 13:52:22.763242 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.763255 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:22.763264 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:22.763329 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:22.802038 1157708 cri.go:89] found id: ""
	I0318 13:52:22.802071 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.802081 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:22.802089 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:22.802170 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:22.841206 1157708 cri.go:89] found id: ""
	I0318 13:52:22.841242 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.841254 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:22.841263 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:22.841328 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:22.885159 1157708 cri.go:89] found id: ""
	I0318 13:52:22.885197 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.885209 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:22.885218 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:22.885289 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:22.925346 1157708 cri.go:89] found id: ""
	I0318 13:52:22.925373 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.925382 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:22.925391 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:22.925407 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:19.654611 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:22.152365 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:21.208979 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:23.210660 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:22.308290 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:24.310006 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:23.006158 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:23.006193 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:23.053932 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:23.053961 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:23.107728 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:23.107768 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:23.125708 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:23.125740 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:23.202609 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:25.703096 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:25.718617 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:25.718689 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:25.756504 1157708 cri.go:89] found id: ""
	I0318 13:52:25.756530 1157708 logs.go:276] 0 containers: []
	W0318 13:52:25.756538 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:25.756544 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:25.756608 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:25.795103 1157708 cri.go:89] found id: ""
	I0318 13:52:25.795140 1157708 logs.go:276] 0 containers: []
	W0318 13:52:25.795152 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:25.795160 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:25.795240 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:25.839908 1157708 cri.go:89] found id: ""
	I0318 13:52:25.839945 1157708 logs.go:276] 0 containers: []
	W0318 13:52:25.839957 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:25.839971 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:25.840038 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:25.881677 1157708 cri.go:89] found id: ""
	I0318 13:52:25.881711 1157708 logs.go:276] 0 containers: []
	W0318 13:52:25.881723 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:25.881732 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:25.881802 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:25.923356 1157708 cri.go:89] found id: ""
	I0318 13:52:25.923386 1157708 logs.go:276] 0 containers: []
	W0318 13:52:25.923397 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:25.923410 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:25.923469 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:25.961661 1157708 cri.go:89] found id: ""
	I0318 13:52:25.961693 1157708 logs.go:276] 0 containers: []
	W0318 13:52:25.961705 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:25.961713 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:25.961785 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:26.003198 1157708 cri.go:89] found id: ""
	I0318 13:52:26.003236 1157708 logs.go:276] 0 containers: []
	W0318 13:52:26.003248 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:26.003256 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:26.003319 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:26.041436 1157708 cri.go:89] found id: ""
	I0318 13:52:26.041471 1157708 logs.go:276] 0 containers: []
	W0318 13:52:26.041483 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:26.041496 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:26.041515 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:26.056679 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:26.056716 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:26.143900 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:26.143926 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:26.143946 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:26.226929 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:26.226964 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:26.288519 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:26.288560 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:24.652661 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:27.152317 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:25.708488 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:27.708931 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:26.807624 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:28.809030 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:31.308980 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:28.846205 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:28.861117 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:28.861190 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:28.906990 1157708 cri.go:89] found id: ""
	I0318 13:52:28.907022 1157708 logs.go:276] 0 containers: []
	W0318 13:52:28.907030 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:28.907036 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:28.907099 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:28.946271 1157708 cri.go:89] found id: ""
	I0318 13:52:28.946309 1157708 logs.go:276] 0 containers: []
	W0318 13:52:28.946322 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:28.946332 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:28.946403 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:28.990158 1157708 cri.go:89] found id: ""
	I0318 13:52:28.990185 1157708 logs.go:276] 0 containers: []
	W0318 13:52:28.990193 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:28.990199 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:28.990251 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:29.035089 1157708 cri.go:89] found id: ""
	I0318 13:52:29.035123 1157708 logs.go:276] 0 containers: []
	W0318 13:52:29.035134 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:29.035143 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:29.035209 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:29.076991 1157708 cri.go:89] found id: ""
	I0318 13:52:29.077022 1157708 logs.go:276] 0 containers: []
	W0318 13:52:29.077033 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:29.077041 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:29.077104 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:29.117106 1157708 cri.go:89] found id: ""
	I0318 13:52:29.117134 1157708 logs.go:276] 0 containers: []
	W0318 13:52:29.117150 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:29.117157 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:29.117209 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:29.159675 1157708 cri.go:89] found id: ""
	I0318 13:52:29.159704 1157708 logs.go:276] 0 containers: []
	W0318 13:52:29.159714 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:29.159722 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:29.159787 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:29.202130 1157708 cri.go:89] found id: ""
	I0318 13:52:29.202157 1157708 logs.go:276] 0 containers: []
	W0318 13:52:29.202166 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:29.202176 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:29.202189 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:29.258343 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:29.258390 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:29.275314 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:29.275360 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:29.359842 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:29.359989 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:29.360036 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:29.446021 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:29.446072 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:31.990431 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:32.007443 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:32.007508 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:32.051028 1157708 cri.go:89] found id: ""
	I0318 13:52:32.051061 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.051070 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:32.051076 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:32.051144 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:32.092914 1157708 cri.go:89] found id: ""
	I0318 13:52:32.092950 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.092962 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:32.092972 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:32.093045 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:32.154257 1157708 cri.go:89] found id: ""
	I0318 13:52:32.154291 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.154302 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:32.154309 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:32.154375 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:32.200185 1157708 cri.go:89] found id: ""
	I0318 13:52:32.200224 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.200236 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:32.200244 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:32.200309 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:32.248927 1157708 cri.go:89] found id: ""
	I0318 13:52:32.248961 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.248974 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:32.248982 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:32.249051 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:32.289829 1157708 cri.go:89] found id: ""
	I0318 13:52:32.289861 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.289870 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:32.289876 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:32.289934 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:32.334346 1157708 cri.go:89] found id: ""
	I0318 13:52:32.334379 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.334387 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:32.334393 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:32.334457 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:32.378718 1157708 cri.go:89] found id: ""
	I0318 13:52:32.378761 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.378770 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:32.378780 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:32.378795 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:32.434626 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:32.434667 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:32.451366 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:32.451402 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:32.532868 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:32.532907 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:32.532924 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:32.617556 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:32.617597 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:29.650409 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:31.651019 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:30.207993 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:32.214101 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:34.710602 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:33.807499 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:35.807738 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:35.165067 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:35.181325 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:35.181404 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:35.220570 1157708 cri.go:89] found id: ""
	I0318 13:52:35.220601 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.220612 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:35.220619 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:35.220684 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:35.263798 1157708 cri.go:89] found id: ""
	I0318 13:52:35.263830 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.263841 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:35.263848 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:35.263915 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:35.309447 1157708 cri.go:89] found id: ""
	I0318 13:52:35.309477 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.309489 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:35.309497 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:35.309567 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:35.353444 1157708 cri.go:89] found id: ""
	I0318 13:52:35.353472 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.353484 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:35.353493 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:35.353556 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:35.394563 1157708 cri.go:89] found id: ""
	I0318 13:52:35.394591 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.394599 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:35.394604 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:35.394662 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:35.433866 1157708 cri.go:89] found id: ""
	I0318 13:52:35.433899 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.433908 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:35.433915 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:35.433970 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:35.482769 1157708 cri.go:89] found id: ""
	I0318 13:52:35.482808 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.482820 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:35.482829 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:35.482899 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:35.521465 1157708 cri.go:89] found id: ""
	I0318 13:52:35.521498 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.521509 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:35.521520 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:35.521534 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:35.577759 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:35.577799 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:35.593052 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:35.593084 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:35.672751 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:35.672773 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:35.672787 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:35.752118 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:35.752171 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:34.157429 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:36.650725 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:38.652096 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:37.209435 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:39.710020 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:38.312679 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:40.807379 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:38.296677 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:38.312261 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:38.312365 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:38.350328 1157708 cri.go:89] found id: ""
	I0318 13:52:38.350362 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.350374 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:38.350382 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:38.350457 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:38.389891 1157708 cri.go:89] found id: ""
	I0318 13:52:38.389927 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.389939 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:38.389947 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:38.390005 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:38.430268 1157708 cri.go:89] found id: ""
	I0318 13:52:38.430296 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.430305 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:38.430311 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:38.430365 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:38.470830 1157708 cri.go:89] found id: ""
	I0318 13:52:38.470859 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.470873 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:38.470880 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:38.470945 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:38.510501 1157708 cri.go:89] found id: ""
	I0318 13:52:38.510538 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.510552 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:38.510560 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:38.510618 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:38.594899 1157708 cri.go:89] found id: ""
	I0318 13:52:38.594926 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.594935 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:38.594942 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:38.595021 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:38.649095 1157708 cri.go:89] found id: ""
	I0318 13:52:38.649121 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.649129 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:38.649136 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:38.649192 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:38.695263 1157708 cri.go:89] found id: ""
	I0318 13:52:38.695295 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.695307 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:38.695320 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:38.695336 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:38.780624 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:38.780666 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:38.825294 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:38.825335 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:38.877548 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:38.877596 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:38.893289 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:38.893319 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:38.971752 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:41.472865 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:41.487371 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:41.487484 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:41.524691 1157708 cri.go:89] found id: ""
	I0318 13:52:41.524724 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.524737 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:41.524746 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:41.524812 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:41.564094 1157708 cri.go:89] found id: ""
	I0318 13:52:41.564125 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.564137 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:41.564145 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:41.564210 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:41.600019 1157708 cri.go:89] found id: ""
	I0318 13:52:41.600047 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.600058 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:41.600064 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:41.600142 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:41.638320 1157708 cri.go:89] found id: ""
	I0318 13:52:41.638350 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.638363 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:41.638372 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:41.638438 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:41.680763 1157708 cri.go:89] found id: ""
	I0318 13:52:41.680798 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.680810 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:41.680818 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:41.680894 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:41.720645 1157708 cri.go:89] found id: ""
	I0318 13:52:41.720674 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.720683 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:41.720690 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:41.720741 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:41.759121 1157708 cri.go:89] found id: ""
	I0318 13:52:41.759151 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.759185 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:41.759195 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:41.759264 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:41.797006 1157708 cri.go:89] found id: ""
	I0318 13:52:41.797034 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.797043 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:41.797053 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:41.797070 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:41.853315 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:41.853353 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:41.869920 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:41.869952 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:41.947187 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:41.947219 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:41.947235 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:42.025475 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:42.025515 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:41.151466 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:43.153616 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:42.207999 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:44.709760 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:43.310812 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:45.808394 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:44.574724 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:44.598990 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:44.599068 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:44.649051 1157708 cri.go:89] found id: ""
	I0318 13:52:44.649137 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.649168 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:44.649180 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:44.649254 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:44.686423 1157708 cri.go:89] found id: ""
	I0318 13:52:44.686459 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.686468 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:44.686473 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:44.686536 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:44.726534 1157708 cri.go:89] found id: ""
	I0318 13:52:44.726564 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.726575 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:44.726583 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:44.726653 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:44.771190 1157708 cri.go:89] found id: ""
	I0318 13:52:44.771220 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.771232 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:44.771240 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:44.771311 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:44.811577 1157708 cri.go:89] found id: ""
	I0318 13:52:44.811602 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.811611 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:44.811618 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:44.811677 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:44.850717 1157708 cri.go:89] found id: ""
	I0318 13:52:44.850744 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.850756 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:44.850765 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:44.850824 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:44.890294 1157708 cri.go:89] found id: ""
	I0318 13:52:44.890321 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.890330 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:44.890344 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:44.890401 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:44.930690 1157708 cri.go:89] found id: ""
	I0318 13:52:44.930720 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.930730 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:44.930741 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:44.930757 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:44.946509 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:44.946544 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:45.029748 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:45.029777 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:45.029795 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:45.111348 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:45.111392 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:45.165156 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:45.165193 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:47.720701 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:47.734457 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:47.734520 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:47.771273 1157708 cri.go:89] found id: ""
	I0318 13:52:47.771304 1157708 logs.go:276] 0 containers: []
	W0318 13:52:47.771313 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:47.771319 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:47.771370 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:47.813779 1157708 cri.go:89] found id: ""
	I0318 13:52:47.813806 1157708 logs.go:276] 0 containers: []
	W0318 13:52:47.813816 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:47.813824 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:47.813892 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:47.855547 1157708 cri.go:89] found id: ""
	I0318 13:52:47.855576 1157708 logs.go:276] 0 containers: []
	W0318 13:52:47.855584 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:47.855590 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:47.855640 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:47.892651 1157708 cri.go:89] found id: ""
	I0318 13:52:47.892684 1157708 logs.go:276] 0 containers: []
	W0318 13:52:47.892692 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:47.892697 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:47.892752 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:47.935457 1157708 cri.go:89] found id: ""
	I0318 13:52:47.935488 1157708 logs.go:276] 0 containers: []
	W0318 13:52:47.935498 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:47.935505 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:47.935567 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:47.969335 1157708 cri.go:89] found id: ""
	I0318 13:52:47.969361 1157708 logs.go:276] 0 containers: []
	W0318 13:52:47.969370 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:47.969377 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:47.969441 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:45.651171 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:48.151833 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:47.209014 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:49.710231 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:48.310467 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:50.807495 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:48.007305 1157708 cri.go:89] found id: ""
	I0318 13:52:48.007339 1157708 logs.go:276] 0 containers: []
	W0318 13:52:48.007349 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:48.007355 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:48.007416 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:48.050230 1157708 cri.go:89] found id: ""
	I0318 13:52:48.050264 1157708 logs.go:276] 0 containers: []
	W0318 13:52:48.050276 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:48.050289 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:48.050304 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:48.106946 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:48.106993 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:48.123805 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:48.123837 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:48.201881 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:48.201907 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:48.201920 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:48.281533 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:48.281577 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:50.829561 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:50.847462 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:50.847555 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:50.889731 1157708 cri.go:89] found id: ""
	I0318 13:52:50.889759 1157708 logs.go:276] 0 containers: []
	W0318 13:52:50.889768 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:50.889774 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:50.889831 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:50.928176 1157708 cri.go:89] found id: ""
	I0318 13:52:50.928210 1157708 logs.go:276] 0 containers: []
	W0318 13:52:50.928222 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:50.928231 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:50.928294 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:50.965737 1157708 cri.go:89] found id: ""
	I0318 13:52:50.965772 1157708 logs.go:276] 0 containers: []
	W0318 13:52:50.965786 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:50.965794 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:50.965866 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:51.008038 1157708 cri.go:89] found id: ""
	I0318 13:52:51.008072 1157708 logs.go:276] 0 containers: []
	W0318 13:52:51.008081 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:51.008087 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:51.008159 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:51.050310 1157708 cri.go:89] found id: ""
	I0318 13:52:51.050340 1157708 logs.go:276] 0 containers: []
	W0318 13:52:51.050355 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:51.050363 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:51.050431 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:51.090514 1157708 cri.go:89] found id: ""
	I0318 13:52:51.090541 1157708 logs.go:276] 0 containers: []
	W0318 13:52:51.090550 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:51.090556 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:51.090608 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:51.131278 1157708 cri.go:89] found id: ""
	I0318 13:52:51.131305 1157708 logs.go:276] 0 containers: []
	W0318 13:52:51.131313 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:51.131320 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:51.131381 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:51.173370 1157708 cri.go:89] found id: ""
	I0318 13:52:51.173400 1157708 logs.go:276] 0 containers: []
	W0318 13:52:51.173411 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:51.173437 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:51.173464 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:51.260155 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:51.260204 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:51.309963 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:51.309998 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:51.367838 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:51.367889 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:51.382542 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:51.382570 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:51.459258 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:50.650524 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:52.651804 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:52.208655 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:54.209701 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:52.808292 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:55.309417 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:53.960212 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:53.978939 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:53.979004 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:54.030003 1157708 cri.go:89] found id: ""
	I0318 13:52:54.030038 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.030052 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:54.030060 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:54.030134 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:54.073487 1157708 cri.go:89] found id: ""
	I0318 13:52:54.073523 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.073535 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:54.073543 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:54.073611 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:54.115982 1157708 cri.go:89] found id: ""
	I0318 13:52:54.116010 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.116022 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:54.116029 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:54.116099 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:54.158320 1157708 cri.go:89] found id: ""
	I0318 13:52:54.158348 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.158359 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:54.158366 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:54.158433 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:54.198911 1157708 cri.go:89] found id: ""
	I0318 13:52:54.198939 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.198948 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:54.198955 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:54.199010 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:54.240628 1157708 cri.go:89] found id: ""
	I0318 13:52:54.240659 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.240671 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:54.240679 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:54.240750 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:54.279377 1157708 cri.go:89] found id: ""
	I0318 13:52:54.279409 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.279418 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:54.279424 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:54.279493 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:54.324160 1157708 cri.go:89] found id: ""
	I0318 13:52:54.324192 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.324205 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:54.324218 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:54.324237 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:54.371487 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:54.371527 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:54.423487 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:54.423526 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:54.438773 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:54.438800 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:54.518788 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:54.518810 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:54.518825 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:57.103590 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:57.118866 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:57.118932 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:57.159354 1157708 cri.go:89] found id: ""
	I0318 13:52:57.159383 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.159393 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:57.159399 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:57.159458 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:57.201114 1157708 cri.go:89] found id: ""
	I0318 13:52:57.201148 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.201159 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:57.201167 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:57.201233 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:57.242172 1157708 cri.go:89] found id: ""
	I0318 13:52:57.242207 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.242217 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:57.242224 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:57.242287 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:57.282578 1157708 cri.go:89] found id: ""
	I0318 13:52:57.282617 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.282629 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:57.282637 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:57.282706 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:57.323682 1157708 cri.go:89] found id: ""
	I0318 13:52:57.323707 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.323715 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:57.323721 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:57.323771 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:57.364946 1157708 cri.go:89] found id: ""
	I0318 13:52:57.364980 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.364991 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:57.365003 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:57.365076 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:57.407466 1157708 cri.go:89] found id: ""
	I0318 13:52:57.407495 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.407505 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:57.407511 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:57.407568 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:57.454663 1157708 cri.go:89] found id: ""
	I0318 13:52:57.454692 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.454701 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:57.454710 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:57.454722 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:57.509591 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:57.509633 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:57.525125 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:57.525155 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:57.602819 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:57.602845 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:57.602863 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:57.689001 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:57.689045 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:55.150589 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:57.152149 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:56.708493 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:59.208099 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:57.311780 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:59.312048 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:00.234252 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:00.249526 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:00.249615 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:00.290131 1157708 cri.go:89] found id: ""
	I0318 13:53:00.290160 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.290171 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:00.290178 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:00.290230 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:00.337794 1157708 cri.go:89] found id: ""
	I0318 13:53:00.337828 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.337840 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:00.337848 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:00.337907 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:00.378188 1157708 cri.go:89] found id: ""
	I0318 13:53:00.378224 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.378236 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:00.378244 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:00.378313 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:00.418940 1157708 cri.go:89] found id: ""
	I0318 13:53:00.418972 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.418981 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:00.418987 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:00.419039 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:00.461471 1157708 cri.go:89] found id: ""
	I0318 13:53:00.461502 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.461511 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:00.461518 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:00.461572 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:00.498781 1157708 cri.go:89] found id: ""
	I0318 13:53:00.498812 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.498821 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:00.498827 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:00.498885 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:00.540359 1157708 cri.go:89] found id: ""
	I0318 13:53:00.540395 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.540407 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:00.540414 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:00.540480 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:00.583597 1157708 cri.go:89] found id: ""
	I0318 13:53:00.583628 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.583636 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:00.583648 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:00.583666 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:00.639498 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:00.639534 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:00.655764 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:00.655792 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:00.742351 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:00.742386 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:00.742400 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:00.825250 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:00.825298 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:59.651495 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:01.651843 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:01.709438 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:04.208439 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:01.810519 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:04.308525 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:03.373938 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:03.389723 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:03.389796 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:03.429675 1157708 cri.go:89] found id: ""
	I0318 13:53:03.429710 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.429723 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:03.429732 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:03.429803 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:03.468732 1157708 cri.go:89] found id: ""
	I0318 13:53:03.468768 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.468780 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:03.468788 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:03.468841 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:03.510562 1157708 cri.go:89] found id: ""
	I0318 13:53:03.510589 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.510598 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:03.510604 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:03.510667 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:03.549842 1157708 cri.go:89] found id: ""
	I0318 13:53:03.549896 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.549909 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:03.549918 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:03.549984 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:03.590036 1157708 cri.go:89] found id: ""
	I0318 13:53:03.590076 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.590086 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:03.590093 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:03.590146 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:03.635546 1157708 cri.go:89] found id: ""
	I0318 13:53:03.635573 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.635585 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:03.635593 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:03.635660 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:03.678634 1157708 cri.go:89] found id: ""
	I0318 13:53:03.678663 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.678671 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:03.678677 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:03.678735 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:03.719666 1157708 cri.go:89] found id: ""
	I0318 13:53:03.719698 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.719709 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:03.719721 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:03.719736 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:03.762353 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:03.762388 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:03.817484 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:03.817521 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:03.832820 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:03.832850 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:03.913094 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:03.913115 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:03.913130 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:06.502556 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:06.517682 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:06.517745 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:06.562167 1157708 cri.go:89] found id: ""
	I0318 13:53:06.562202 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.562215 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:06.562223 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:06.562294 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:06.601910 1157708 cri.go:89] found id: ""
	I0318 13:53:06.601945 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.601954 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:06.601962 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:06.602022 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:06.640652 1157708 cri.go:89] found id: ""
	I0318 13:53:06.640683 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.640694 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:06.640702 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:06.640778 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:06.686781 1157708 cri.go:89] found id: ""
	I0318 13:53:06.686809 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.686818 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:06.686824 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:06.686893 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:06.727080 1157708 cri.go:89] found id: ""
	I0318 13:53:06.727107 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.727115 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:06.727121 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:06.727173 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:06.764550 1157708 cri.go:89] found id: ""
	I0318 13:53:06.764575 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.764583 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:06.764589 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:06.764641 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:06.803978 1157708 cri.go:89] found id: ""
	I0318 13:53:06.804009 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.804019 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:06.804027 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:06.804091 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:06.843983 1157708 cri.go:89] found id: ""
	I0318 13:53:06.844016 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.844027 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:06.844040 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:06.844058 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:06.905389 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:06.905424 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:06.956888 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:06.956924 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:06.973551 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:06.973594 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:07.045945 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:07.045973 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:07.045991 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:04.150852 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:06.151454 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:08.656073 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:06.211223 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:08.707939 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:06.808218 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:09.309991 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:11.310190 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:09.635227 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:09.650166 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:09.650246 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:09.695126 1157708 cri.go:89] found id: ""
	I0318 13:53:09.695153 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.695162 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:09.695168 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:09.695221 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:09.740475 1157708 cri.go:89] found id: ""
	I0318 13:53:09.740507 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.740516 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:09.740522 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:09.740591 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:09.779078 1157708 cri.go:89] found id: ""
	I0318 13:53:09.779108 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.779119 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:09.779128 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:09.779186 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:09.821252 1157708 cri.go:89] found id: ""
	I0318 13:53:09.821285 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.821297 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:09.821306 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:09.821376 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:09.860500 1157708 cri.go:89] found id: ""
	I0318 13:53:09.860537 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.860550 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:09.860558 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:09.860622 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:09.903447 1157708 cri.go:89] found id: ""
	I0318 13:53:09.903475 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.903486 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:09.903494 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:09.903550 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:09.941620 1157708 cri.go:89] found id: ""
	I0318 13:53:09.941648 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.941661 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:09.941679 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:09.941731 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:09.980066 1157708 cri.go:89] found id: ""
	I0318 13:53:09.980101 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.980113 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:09.980125 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:09.980142 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:10.036960 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:10.037000 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:10.051329 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:10.051361 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:10.130896 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:10.130925 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:10.130942 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:10.212205 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:10.212236 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:12.754623 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:12.769956 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:12.770034 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:12.809006 1157708 cri.go:89] found id: ""
	I0318 13:53:12.809032 1157708 logs.go:276] 0 containers: []
	W0318 13:53:12.809043 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:12.809051 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:12.809113 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:12.852354 1157708 cri.go:89] found id: ""
	I0318 13:53:12.852390 1157708 logs.go:276] 0 containers: []
	W0318 13:53:12.852400 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:12.852407 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:12.852476 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:12.891891 1157708 cri.go:89] found id: ""
	I0318 13:53:12.891923 1157708 logs.go:276] 0 containers: []
	W0318 13:53:12.891933 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:12.891940 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:12.891991 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:12.931753 1157708 cri.go:89] found id: ""
	I0318 13:53:12.931785 1157708 logs.go:276] 0 containers: []
	W0318 13:53:12.931795 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:12.931803 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:12.931872 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:12.971622 1157708 cri.go:89] found id: ""
	I0318 13:53:12.971653 1157708 logs.go:276] 0 containers: []
	W0318 13:53:12.971662 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:12.971669 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:12.971731 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:11.151234 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:13.157081 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:10.708177 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:13.209203 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:13.315183 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:15.808738 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:13.009893 1157708 cri.go:89] found id: ""
	I0318 13:53:13.009930 1157708 logs.go:276] 0 containers: []
	W0318 13:53:13.009943 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:13.009952 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:13.010021 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:13.045361 1157708 cri.go:89] found id: ""
	I0318 13:53:13.045396 1157708 logs.go:276] 0 containers: []
	W0318 13:53:13.045404 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:13.045411 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:13.045474 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:13.087659 1157708 cri.go:89] found id: ""
	I0318 13:53:13.087686 1157708 logs.go:276] 0 containers: []
	W0318 13:53:13.087696 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:13.087706 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:13.087721 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:13.129979 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:13.130014 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:13.183802 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:13.183836 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:13.198808 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:13.198840 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:13.272736 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:13.272764 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:13.272783 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:15.870196 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:15.887480 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:15.887551 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:15.923871 1157708 cri.go:89] found id: ""
	I0318 13:53:15.923899 1157708 logs.go:276] 0 containers: []
	W0318 13:53:15.923907 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:15.923913 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:15.923976 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:15.963870 1157708 cri.go:89] found id: ""
	I0318 13:53:15.963906 1157708 logs.go:276] 0 containers: []
	W0318 13:53:15.963917 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:15.963925 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:15.963997 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:16.009781 1157708 cri.go:89] found id: ""
	I0318 13:53:16.009815 1157708 logs.go:276] 0 containers: []
	W0318 13:53:16.009828 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:16.009837 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:16.009905 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:16.047673 1157708 cri.go:89] found id: ""
	I0318 13:53:16.047708 1157708 logs.go:276] 0 containers: []
	W0318 13:53:16.047718 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:16.047727 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:16.047793 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:16.089419 1157708 cri.go:89] found id: ""
	I0318 13:53:16.089447 1157708 logs.go:276] 0 containers: []
	W0318 13:53:16.089455 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:16.089461 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:16.089511 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:16.133563 1157708 cri.go:89] found id: ""
	I0318 13:53:16.133594 1157708 logs.go:276] 0 containers: []
	W0318 13:53:16.133604 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:16.133611 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:16.133685 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:16.174369 1157708 cri.go:89] found id: ""
	I0318 13:53:16.174404 1157708 logs.go:276] 0 containers: []
	W0318 13:53:16.174415 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:16.174423 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:16.174491 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:16.219334 1157708 cri.go:89] found id: ""
	I0318 13:53:16.219360 1157708 logs.go:276] 0 containers: []
	W0318 13:53:16.219367 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:16.219376 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:16.219389 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:16.273468 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:16.273507 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:16.288584 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:16.288612 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:16.366575 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:16.366602 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:16.366620 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:16.451031 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:16.451071 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:15.650907 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:18.151434 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:15.708015 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:17.710036 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:18.311437 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:20.807854 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:18.997536 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:19.014995 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:19.015065 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:19.064686 1157708 cri.go:89] found id: ""
	I0318 13:53:19.064719 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.064731 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:19.064739 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:19.064793 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:19.110598 1157708 cri.go:89] found id: ""
	I0318 13:53:19.110629 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.110640 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:19.110648 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:19.110739 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:19.156628 1157708 cri.go:89] found id: ""
	I0318 13:53:19.156652 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.156660 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:19.156668 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:19.156730 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:19.205993 1157708 cri.go:89] found id: ""
	I0318 13:53:19.206029 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.206042 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:19.206049 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:19.206118 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:19.253902 1157708 cri.go:89] found id: ""
	I0318 13:53:19.253935 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.253952 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:19.253960 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:19.254036 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:19.296550 1157708 cri.go:89] found id: ""
	I0318 13:53:19.296583 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.296594 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:19.296602 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:19.296667 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:19.337316 1157708 cri.go:89] found id: ""
	I0318 13:53:19.337349 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.337360 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:19.337369 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:19.337446 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:19.381503 1157708 cri.go:89] found id: ""
	I0318 13:53:19.381546 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.381565 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:19.381579 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:19.381603 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:19.461665 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:19.461691 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:19.461707 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:19.548291 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:19.548348 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:19.591296 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:19.591335 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:19.648740 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:19.648776 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:22.164970 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:22.180740 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:22.180806 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:22.223787 1157708 cri.go:89] found id: ""
	I0318 13:53:22.223820 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.223833 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:22.223840 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:22.223908 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:22.266751 1157708 cri.go:89] found id: ""
	I0318 13:53:22.266785 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.266797 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:22.266805 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:22.266876 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:22.311669 1157708 cri.go:89] found id: ""
	I0318 13:53:22.311701 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.311712 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:22.311721 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:22.311816 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:22.354687 1157708 cri.go:89] found id: ""
	I0318 13:53:22.354722 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.354733 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:22.354742 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:22.354807 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:22.395741 1157708 cri.go:89] found id: ""
	I0318 13:53:22.395767 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.395776 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:22.395782 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:22.395832 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:22.434506 1157708 cri.go:89] found id: ""
	I0318 13:53:22.434539 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.434550 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:22.434559 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:22.434612 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:22.474583 1157708 cri.go:89] found id: ""
	I0318 13:53:22.474612 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.474621 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:22.474627 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:22.474690 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:22.521898 1157708 cri.go:89] found id: ""
	I0318 13:53:22.521943 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.521955 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:22.521968 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:22.521989 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:22.537679 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:22.537711 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:22.619575 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:22.619605 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:22.619621 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:22.704206 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:22.704265 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:22.753470 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:22.753502 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:20.650340 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:22.653036 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:20.213398 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:22.709150 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:22.808837 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:25.308831 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:25.311578 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:25.329917 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:25.329979 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:25.373784 1157708 cri.go:89] found id: ""
	I0318 13:53:25.373818 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.373826 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:25.373833 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:25.373901 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:25.422490 1157708 cri.go:89] found id: ""
	I0318 13:53:25.422516 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.422526 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:25.422532 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:25.422597 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:25.459523 1157708 cri.go:89] found id: ""
	I0318 13:53:25.459552 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.459560 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:25.459567 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:25.459627 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:25.495647 1157708 cri.go:89] found id: ""
	I0318 13:53:25.495683 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.495695 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:25.495702 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:25.495772 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:25.534582 1157708 cri.go:89] found id: ""
	I0318 13:53:25.534617 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.534626 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:25.534632 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:25.534704 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:25.577526 1157708 cri.go:89] found id: ""
	I0318 13:53:25.577558 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.577566 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:25.577573 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:25.577687 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:25.616403 1157708 cri.go:89] found id: ""
	I0318 13:53:25.616433 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.616445 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:25.616453 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:25.616527 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:25.660444 1157708 cri.go:89] found id: ""
	I0318 13:53:25.660474 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.660482 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:25.660492 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:25.660506 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:25.715595 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:25.715641 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:25.730358 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:25.730390 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:25.803153 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:25.803239 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:25.803261 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:25.885339 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:25.885388 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:25.150276 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:27.151389 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:25.214042 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:27.710185 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:27.807095 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:29.807177 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:28.433506 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:28.449402 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:28.449481 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:28.490972 1157708 cri.go:89] found id: ""
	I0318 13:53:28.491007 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.491019 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:28.491028 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:28.491094 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:28.531406 1157708 cri.go:89] found id: ""
	I0318 13:53:28.531439 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.531451 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:28.531460 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:28.531513 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:28.570299 1157708 cri.go:89] found id: ""
	I0318 13:53:28.570334 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.570345 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:28.570352 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:28.570408 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:28.607950 1157708 cri.go:89] found id: ""
	I0318 13:53:28.607979 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.607987 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:28.607994 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:28.608066 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:28.648710 1157708 cri.go:89] found id: ""
	I0318 13:53:28.648744 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.648755 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:28.648762 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:28.648830 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:28.691071 1157708 cri.go:89] found id: ""
	I0318 13:53:28.691102 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.691114 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:28.691122 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:28.691183 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:28.734399 1157708 cri.go:89] found id: ""
	I0318 13:53:28.734438 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.734452 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:28.734461 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:28.734548 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:28.774859 1157708 cri.go:89] found id: ""
	I0318 13:53:28.774891 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.774902 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:28.774912 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:28.774927 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:28.831420 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:28.831459 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:28.847970 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:28.848008 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:28.926007 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:28.926034 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:28.926051 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:29.007525 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:29.007577 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:31.555401 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:31.570964 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:31.571046 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:31.611400 1157708 cri.go:89] found id: ""
	I0318 13:53:31.611427 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.611438 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:31.611445 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:31.611510 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:31.654572 1157708 cri.go:89] found id: ""
	I0318 13:53:31.654602 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.654614 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:31.654622 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:31.654725 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:31.692649 1157708 cri.go:89] found id: ""
	I0318 13:53:31.692673 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.692681 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:31.692686 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:31.692748 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:31.732208 1157708 cri.go:89] found id: ""
	I0318 13:53:31.732233 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.732244 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:31.732253 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:31.732320 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:31.774132 1157708 cri.go:89] found id: ""
	I0318 13:53:31.774163 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.774172 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:31.774178 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:31.774234 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:31.813558 1157708 cri.go:89] found id: ""
	I0318 13:53:31.813582 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.813590 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:31.813597 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:31.813651 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:31.862024 1157708 cri.go:89] found id: ""
	I0318 13:53:31.862057 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.862070 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:31.862077 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:31.862146 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:31.903941 1157708 cri.go:89] found id: ""
	I0318 13:53:31.903972 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.903982 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:31.903992 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:31.904006 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:31.957327 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:31.957366 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:31.973337 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:31.973380 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:32.053702 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:32.053730 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:32.053744 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:32.134859 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:32.134911 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:29.649648 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:31.651426 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:33.651936 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:30.208512 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:32.709020 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:31.808276 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:33.811370 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:36.314374 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:34.683335 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:34.700383 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:34.700490 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:34.744387 1157708 cri.go:89] found id: ""
	I0318 13:53:34.744420 1157708 logs.go:276] 0 containers: []
	W0318 13:53:34.744432 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:34.744441 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:34.744509 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:34.788122 1157708 cri.go:89] found id: ""
	I0318 13:53:34.788150 1157708 logs.go:276] 0 containers: []
	W0318 13:53:34.788160 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:34.788166 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:34.788221 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:34.834760 1157708 cri.go:89] found id: ""
	I0318 13:53:34.834795 1157708 logs.go:276] 0 containers: []
	W0318 13:53:34.834808 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:34.834817 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:34.834894 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:34.882028 1157708 cri.go:89] found id: ""
	I0318 13:53:34.882062 1157708 logs.go:276] 0 containers: []
	W0318 13:53:34.882073 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:34.882081 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:34.882150 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:34.933339 1157708 cri.go:89] found id: ""
	I0318 13:53:34.933364 1157708 logs.go:276] 0 containers: []
	W0318 13:53:34.933374 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:34.933384 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:34.933451 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:34.972362 1157708 cri.go:89] found id: ""
	I0318 13:53:34.972395 1157708 logs.go:276] 0 containers: []
	W0318 13:53:34.972407 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:34.972416 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:34.972486 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:35.008949 1157708 cri.go:89] found id: ""
	I0318 13:53:35.008986 1157708 logs.go:276] 0 containers: []
	W0318 13:53:35.008999 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:35.009007 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:35.009080 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:35.054698 1157708 cri.go:89] found id: ""
	I0318 13:53:35.054733 1157708 logs.go:276] 0 containers: []
	W0318 13:53:35.054742 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:35.054756 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:35.054770 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:35.109391 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:35.109450 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:35.126785 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:35.126818 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:35.214303 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:35.214329 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:35.214342 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:35.298705 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:35.298750 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:37.843701 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:37.859330 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:37.859415 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:37.903428 1157708 cri.go:89] found id: ""
	I0318 13:53:37.903466 1157708 logs.go:276] 0 containers: []
	W0318 13:53:37.903479 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:37.903497 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:37.903560 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:37.943687 1157708 cri.go:89] found id: ""
	I0318 13:53:37.943716 1157708 logs.go:276] 0 containers: []
	W0318 13:53:37.943727 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:37.943735 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:37.943804 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:37.986201 1157708 cri.go:89] found id: ""
	I0318 13:53:37.986233 1157708 logs.go:276] 0 containers: []
	W0318 13:53:37.986244 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:37.986252 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:37.986322 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:36.151976 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:38.152281 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:35.209205 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:37.709122 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:38.806794 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:40.807552 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:38.026776 1157708 cri.go:89] found id: ""
	I0318 13:53:38.026813 1157708 logs.go:276] 0 containers: []
	W0318 13:53:38.026825 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:38.026832 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:38.026907 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:38.073057 1157708 cri.go:89] found id: ""
	I0318 13:53:38.073088 1157708 logs.go:276] 0 containers: []
	W0318 13:53:38.073098 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:38.073105 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:38.073172 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:38.110576 1157708 cri.go:89] found id: ""
	I0318 13:53:38.110611 1157708 logs.go:276] 0 containers: []
	W0318 13:53:38.110624 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:38.110632 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:38.110702 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:38.154293 1157708 cri.go:89] found id: ""
	I0318 13:53:38.154319 1157708 logs.go:276] 0 containers: []
	W0318 13:53:38.154327 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:38.154338 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:38.154414 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:38.195407 1157708 cri.go:89] found id: ""
	I0318 13:53:38.195434 1157708 logs.go:276] 0 containers: []
	W0318 13:53:38.195444 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:38.195454 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:38.195469 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:38.254159 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:38.254210 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:38.269143 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:38.269175 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:38.349819 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:38.349845 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:38.349864 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:38.435121 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:38.435164 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:40.982438 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:40.998483 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:40.998559 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:41.037470 1157708 cri.go:89] found id: ""
	I0318 13:53:41.037497 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.037506 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:41.037512 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:41.037583 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:41.078428 1157708 cri.go:89] found id: ""
	I0318 13:53:41.078463 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.078473 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:41.078482 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:41.078548 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:41.121342 1157708 cri.go:89] found id: ""
	I0318 13:53:41.121371 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.121382 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:41.121391 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:41.121482 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:41.164124 1157708 cri.go:89] found id: ""
	I0318 13:53:41.164149 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.164159 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:41.164167 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:41.164229 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:41.210294 1157708 cri.go:89] found id: ""
	I0318 13:53:41.210321 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.210329 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:41.210336 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:41.210407 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:41.253934 1157708 cri.go:89] found id: ""
	I0318 13:53:41.253957 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.253967 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:41.253973 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:41.254039 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:41.298817 1157708 cri.go:89] found id: ""
	I0318 13:53:41.298849 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.298861 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:41.298870 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:41.298936 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:41.344109 1157708 cri.go:89] found id: ""
	I0318 13:53:41.344137 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.344146 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:41.344156 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:41.344170 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:41.401026 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:41.401061 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:41.416197 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:41.416229 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:41.495349 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:41.495375 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:41.495393 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:41.578201 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:41.578253 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:40.651687 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:43.152619 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:40.208445 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:42.208613 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:44.210573 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:42.808665 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:45.309099 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:44.126601 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:44.140971 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:44.141048 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:44.184758 1157708 cri.go:89] found id: ""
	I0318 13:53:44.184786 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.184794 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:44.184801 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:44.184851 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:44.230793 1157708 cri.go:89] found id: ""
	I0318 13:53:44.230824 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.230836 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:44.230842 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:44.230916 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:44.269561 1157708 cri.go:89] found id: ""
	I0318 13:53:44.269594 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.269606 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:44.269614 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:44.269680 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:44.310847 1157708 cri.go:89] found id: ""
	I0318 13:53:44.310878 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.310889 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:44.310898 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:44.310970 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:44.350827 1157708 cri.go:89] found id: ""
	I0318 13:53:44.350860 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.350878 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:44.350887 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:44.350956 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:44.389693 1157708 cri.go:89] found id: ""
	I0318 13:53:44.389721 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.389730 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:44.389735 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:44.389804 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:44.429254 1157708 cri.go:89] found id: ""
	I0318 13:53:44.429280 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.429289 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:44.429303 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:44.429354 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:44.468484 1157708 cri.go:89] found id: ""
	I0318 13:53:44.468513 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.468525 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:44.468538 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:44.468555 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:44.525012 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:44.525058 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:44.541638 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:44.541668 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:44.621779 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:44.621801 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:44.621814 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:44.706797 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:44.706884 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:47.253569 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:47.268808 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:47.268888 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:47.313191 1157708 cri.go:89] found id: ""
	I0318 13:53:47.313220 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.313232 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:47.313240 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:47.313307 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:47.357567 1157708 cri.go:89] found id: ""
	I0318 13:53:47.357600 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.357611 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:47.357619 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:47.357688 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:47.392300 1157708 cri.go:89] found id: ""
	I0318 13:53:47.392341 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.392352 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:47.392366 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:47.392437 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:47.432800 1157708 cri.go:89] found id: ""
	I0318 13:53:47.432830 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.432842 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:47.432857 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:47.432921 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:47.469563 1157708 cri.go:89] found id: ""
	I0318 13:53:47.469591 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.469599 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:47.469605 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:47.469668 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:47.508770 1157708 cri.go:89] found id: ""
	I0318 13:53:47.508799 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.508810 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:47.508820 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:47.508880 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:47.549876 1157708 cri.go:89] found id: ""
	I0318 13:53:47.549909 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.549921 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:47.549930 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:47.549997 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:47.591385 1157708 cri.go:89] found id: ""
	I0318 13:53:47.591413 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.591421 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:47.591431 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:47.591446 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:47.646284 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:47.646313 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:47.662609 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:47.662639 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:47.737371 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:47.737398 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:47.737415 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:47.817311 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:47.817342 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:45.652845 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:48.150199 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:46.707734 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:48.709977 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:47.807238 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:50.308767 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:50.363832 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:50.380029 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:50.380109 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:50.427452 1157708 cri.go:89] found id: ""
	I0318 13:53:50.427484 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.427496 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:50.427505 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:50.427579 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:50.466766 1157708 cri.go:89] found id: ""
	I0318 13:53:50.466793 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.466801 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:50.466808 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:50.466894 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:50.506768 1157708 cri.go:89] found id: ""
	I0318 13:53:50.506799 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.506811 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:50.506819 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:50.506882 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:50.545554 1157708 cri.go:89] found id: ""
	I0318 13:53:50.545592 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.545605 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:50.545613 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:50.545685 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:50.583949 1157708 cri.go:89] found id: ""
	I0318 13:53:50.583984 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.583995 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:50.584004 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:50.584083 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:50.624730 1157708 cri.go:89] found id: ""
	I0318 13:53:50.624763 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.624774 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:50.624783 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:50.624853 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:50.664300 1157708 cri.go:89] found id: ""
	I0318 13:53:50.664346 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.664358 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:50.664366 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:50.664420 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:50.702760 1157708 cri.go:89] found id: ""
	I0318 13:53:50.702793 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.702805 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:50.702817 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:50.702833 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:50.757188 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:50.757237 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:50.772151 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:50.772195 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:50.856872 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:50.856898 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:50.856917 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:50.937706 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:50.937749 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:50.654814 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:53.151970 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:50.710233 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:53.209443 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:52.309529 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:54.809399 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:53.481836 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:53.497792 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:53.497856 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:53.535376 1157708 cri.go:89] found id: ""
	I0318 13:53:53.535411 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.535420 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:53.535427 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:53.535486 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:53.575002 1157708 cri.go:89] found id: ""
	I0318 13:53:53.575030 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.575042 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:53.575050 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:53.575119 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:53.615880 1157708 cri.go:89] found id: ""
	I0318 13:53:53.615919 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.615931 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:53.615940 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:53.616007 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:53.681746 1157708 cri.go:89] found id: ""
	I0318 13:53:53.681786 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.681799 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:53.681810 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:53.681887 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:53.725219 1157708 cri.go:89] found id: ""
	I0318 13:53:53.725241 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.725250 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:53.725256 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:53.725317 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:53.766969 1157708 cri.go:89] found id: ""
	I0318 13:53:53.767006 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.767018 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:53.767026 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:53.767091 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:53.802103 1157708 cri.go:89] found id: ""
	I0318 13:53:53.802134 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.802145 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:53.802157 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:53.802210 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:53.843054 1157708 cri.go:89] found id: ""
	I0318 13:53:53.843085 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.843093 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:53.843103 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:53.843117 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:53.899794 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:53.899836 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:53.915559 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:53.915592 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:53.996410 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:53.996438 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:53.996456 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:54.085588 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:54.085628 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:56.632201 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:56.648183 1157708 kubeadm.go:591] duration metric: took 4m3.550073086s to restartPrimaryControlPlane
	W0318 13:53:56.648381 1157708 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 13:53:56.648422 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 13:53:55.152626 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:57.650951 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:55.209511 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:57.709324 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:59.710029 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:59.666187 1157708 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.017736279s)
	I0318 13:53:59.666270 1157708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:53:59.682887 1157708 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:53:59.694626 1157708 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:53:59.706577 1157708 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:53:59.706599 1157708 kubeadm.go:156] found existing configuration files:
	
	I0318 13:53:59.706648 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:53:59.718311 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:53:59.718371 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:53:59.729298 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:53:59.741351 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:53:59.741401 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:53:59.753652 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:53:59.765642 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:53:59.765695 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:53:59.778055 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:53:59.789994 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:53:59.790042 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:53:59.801292 1157708 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 13:53:59.879414 1157708 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 13:53:59.879516 1157708 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 13:54:00.046477 1157708 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 13:54:00.046660 1157708 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 13:54:00.046819 1157708 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 13:54:00.257070 1157708 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 13:54:00.259191 1157708 out.go:204]   - Generating certificates and keys ...
	I0318 13:54:00.259333 1157708 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 13:54:00.259434 1157708 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 13:54:00.259549 1157708 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 13:54:00.259658 1157708 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 13:54:00.259782 1157708 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 13:54:00.259857 1157708 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 13:54:00.259949 1157708 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 13:54:00.260033 1157708 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 13:54:00.260136 1157708 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 13:54:00.260244 1157708 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 13:54:00.260299 1157708 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 13:54:00.260394 1157708 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 13:54:00.423400 1157708 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 13:54:00.543983 1157708 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 13:54:00.796108 1157708 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 13:54:00.901121 1157708 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 13:54:00.918891 1157708 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 13:54:00.920502 1157708 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 13:54:00.920642 1157708 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 13:54:01.094176 1157708 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 13:53:57.306878 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:59.308670 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:01.096397 1157708 out.go:204]   - Booting up control plane ...
	I0318 13:54:01.096539 1157708 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 13:54:01.107816 1157708 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 13:54:01.108753 1157708 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 13:54:01.109641 1157708 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 13:54:01.111913 1157708 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 13:54:00.150985 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:02.151139 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:02.208577 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:04.209527 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:04.701940 1157416 pod_ready.go:81] duration metric: took 4m0.000915275s for pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace to be "Ready" ...
	E0318 13:54:04.701995 1157416 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace to be "Ready" (will not retry!)
	I0318 13:54:04.702022 1157416 pod_ready.go:38] duration metric: took 4m12.048388069s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:54:04.702063 1157416 kubeadm.go:591] duration metric: took 4m22.220919415s to restartPrimaryControlPlane
	W0318 13:54:04.702133 1157416 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 13:54:04.702168 1157416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 13:54:01.807445 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:04.308435 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:04.151252 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:06.152296 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:08.162574 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:06.809148 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:08.811335 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:11.306999 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:10.650696 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:12.651741 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:13.308835 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:15.807754 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:15.150875 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:17.653698 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:18.308137 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:20.308720 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:20.152545 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:22.650685 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:22.807655 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:24.807765 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:25.150664 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:27.650092 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:26.808311 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:29.311683 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:31.301320 1157887 pod_ready.go:81] duration metric: took 4m0.001048401s for pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace to be "Ready" ...
	E0318 13:54:31.301351 1157887 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace to be "Ready" (will not retry!)
	I0318 13:54:31.301372 1157887 pod_ready.go:38] duration metric: took 4m12.063560637s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:54:31.301397 1157887 kubeadm.go:591] duration metric: took 4m19.202321881s to restartPrimaryControlPlane
	W0318 13:54:31.301478 1157887 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 13:54:31.301505 1157887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 13:54:29.651334 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:32.152059 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:34.651230 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:37.151130 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:37.018723 1157416 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.31652367s)
	I0318 13:54:37.018822 1157416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:54:37.036348 1157416 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:54:37.047932 1157416 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:54:37.058846 1157416 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:54:37.058875 1157416 kubeadm.go:156] found existing configuration files:
	
	I0318 13:54:37.058920 1157416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:54:37.069333 1157416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:54:37.069396 1157416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:54:37.080053 1157416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:54:37.090110 1157416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:54:37.090170 1157416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:54:37.101032 1157416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:54:37.111052 1157416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:54:37.111124 1157416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:54:37.121867 1157416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:54:37.132057 1157416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:54:37.132104 1157416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:54:37.143057 1157416 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 13:54:37.368813 1157416 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 13:54:41.111826 1157708 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 13:54:41.111977 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:54:41.112236 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
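The three [kubelet-check] lines above are kubeadm polling the kubelet's local healthz endpoint on port 10248 and getting connection refused. A minimal sketch of reproducing that probe by hand on the affected node (unit name and port assumed to be the defaults used in this run):

	# Probe the same kubelet healthz endpoint kubeadm polls
	curl -sSL http://localhost:10248/healthz
	# On "connection refused", check the unit state and recent kubelet logs
	sudo systemctl status kubelet
	sudo journalctl -u kubelet --no-pager -n 50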
	I0318 13:54:39.151250 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:41.652026 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:43.652929 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:46.082340 1157416 kubeadm.go:309] [init] Using Kubernetes version: v1.29.0-rc.2
	I0318 13:54:46.082410 1157416 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 13:54:46.082482 1157416 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 13:54:46.082561 1157416 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 13:54:46.082639 1157416 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 13:54:46.082692 1157416 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 13:54:46.084374 1157416 out.go:204]   - Generating certificates and keys ...
	I0318 13:54:46.084495 1157416 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 13:54:46.084584 1157416 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 13:54:46.084681 1157416 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 13:54:46.084767 1157416 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 13:54:46.084844 1157416 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 13:54:46.084933 1157416 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 13:54:46.085039 1157416 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 13:54:46.085131 1157416 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 13:54:46.085255 1157416 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 13:54:46.085344 1157416 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 13:54:46.085415 1157416 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 13:54:46.085491 1157416 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 13:54:46.085569 1157416 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 13:54:46.085637 1157416 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0318 13:54:46.085704 1157416 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 13:54:46.085791 1157416 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 13:54:46.085894 1157416 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 13:54:46.086010 1157416 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 13:54:46.086104 1157416 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 13:54:46.087481 1157416 out.go:204]   - Booting up control plane ...
	I0318 13:54:46.087576 1157416 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 13:54:46.087642 1157416 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 13:54:46.087698 1157416 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 13:54:46.087782 1157416 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 13:54:46.087865 1157416 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 13:54:46.087917 1157416 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 13:54:46.088051 1157416 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 13:54:46.088146 1157416 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003020 seconds
	I0318 13:54:46.088306 1157416 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 13:54:46.088501 1157416 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 13:54:46.088585 1157416 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 13:54:46.088770 1157416 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-537236 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 13:54:46.088826 1157416 kubeadm.go:309] [bootstrap-token] Using token: fk6yfh.vd0dmh72kd97vm2h
	I0318 13:54:46.091265 1157416 out.go:204]   - Configuring RBAC rules ...
	I0318 13:54:46.091375 1157416 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 13:54:46.091449 1157416 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 13:54:46.091656 1157416 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 13:54:46.091839 1157416 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 13:54:46.092014 1157416 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 13:54:46.092136 1157416 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 13:54:46.092289 1157416 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 13:54:46.092370 1157416 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 13:54:46.092436 1157416 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 13:54:46.092445 1157416 kubeadm.go:309] 
	I0318 13:54:46.092513 1157416 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 13:54:46.092522 1157416 kubeadm.go:309] 
	I0318 13:54:46.092588 1157416 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 13:54:46.092594 1157416 kubeadm.go:309] 
	I0318 13:54:46.092614 1157416 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 13:54:46.092704 1157416 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 13:54:46.092749 1157416 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 13:54:46.092755 1157416 kubeadm.go:309] 
	I0318 13:54:46.092805 1157416 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 13:54:46.092818 1157416 kubeadm.go:309] 
	I0318 13:54:46.092892 1157416 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 13:54:46.092906 1157416 kubeadm.go:309] 
	I0318 13:54:46.092982 1157416 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 13:54:46.093100 1157416 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 13:54:46.093212 1157416 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 13:54:46.093225 1157416 kubeadm.go:309] 
	I0318 13:54:46.093335 1157416 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 13:54:46.093448 1157416 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 13:54:46.093457 1157416 kubeadm.go:309] 
	I0318 13:54:46.093539 1157416 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token fk6yfh.vd0dmh72kd97vm2h \
	I0318 13:54:46.093684 1157416 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a1344f0f3711f58fc1cd7b9626e9cee7b8515c38b8838e5f1a0d7979c7202ddf \
	I0318 13:54:46.093717 1157416 kubeadm.go:309] 	--control-plane 
	I0318 13:54:46.093723 1157416 kubeadm.go:309] 
	I0318 13:54:46.093848 1157416 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 13:54:46.093860 1157416 kubeadm.go:309] 
	I0318 13:54:46.093946 1157416 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token fk6yfh.vd0dmh72kd97vm2h \
	I0318 13:54:46.094071 1157416 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a1344f0f3711f58fc1cd7b9626e9cee7b8515c38b8838e5f1a0d7979c7202ddf 
	I0318 13:54:46.094105 1157416 cni.go:84] Creating CNI manager for ""
	I0318 13:54:46.094119 1157416 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:54:46.095717 1157416 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 13:54:46.112502 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:54:46.112797 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:54:46.152713 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:48.651676 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:46.096953 1157416 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 13:54:46.127007 1157416 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
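For context, the 457-byte file written to /etc/cni/net.d/1-k8s.conflist above is a bridge CNI chain; its exact contents are not shown in this log, but a minimal bridge + portmap configuration of the kind minikube generates looks roughly like the sketch below (plugin names and the pod subnet are illustrative, not taken from this run):

	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "addIf": "true",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}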
	I0318 13:54:46.178588 1157416 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 13:54:46.178768 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:46.178785 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-537236 minikube.k8s.io/updated_at=2024_03_18T13_54_46_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a minikube.k8s.io/name=no-preload-537236 minikube.k8s.io/primary=true
	I0318 13:54:46.231974 1157416 ops.go:34] apiserver oom_adj: -16
	I0318 13:54:46.582048 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:47.082295 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:47.582447 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:48.082146 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:48.583155 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:49.082463 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:49.583104 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:51.153753 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:53.654740 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:50.082163 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:50.582159 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:51.082921 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:51.582616 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:52.082686 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:52.582520 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:53.082920 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:53.582281 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:54.082711 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:54.582110 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:56.112956 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:54:56.113210 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:54:55.082805 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:55.583034 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:56.082777 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:56.582491 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:57.082739 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:57.582854 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:58.082715 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:58.189802 1157416 kubeadm.go:1107] duration metric: took 12.011111335s to wait for elevateKubeSystemPrivileges
	W0318 13:54:58.189865 1157416 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 13:54:58.189878 1157416 kubeadm.go:393] duration metric: took 5m15.77131157s to StartCluster
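The run of repeated "kubectl get sa default" commands above (13:54:46 through 13:54:58) is a readiness poll: minikube retries roughly every half second until the default ServiceAccount is served by the new apiserver, at which point the minikube-rbac cluster-admin binding created earlier is usable and elevateKubeSystemPrivileges completes. A hedged shell equivalent of that loop, using the same binary and kubeconfig paths that appear in the log:

	# Poll until the default ServiceAccount exists in the restarted cluster
	until sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done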
	I0318 13:54:58.189991 1157416 settings.go:142] acquiring lock: {Name:mk2d6b94ee5fa5f1dbbb15ba1d5560c3c0f78110 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:54:58.190130 1157416 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 13:54:58.191965 1157416 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/kubeconfig: {Name:mk9c139f2702214315ee08dd7c5d02f739047458 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:54:58.192315 1157416 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 13:54:58.194158 1157416 out.go:177] * Verifying Kubernetes components...
	I0318 13:54:58.192460 1157416 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 13:54:58.192549 1157416 config.go:182] Loaded profile config "no-preload-537236": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 13:54:58.194270 1157416 addons.go:69] Setting storage-provisioner=true in profile "no-preload-537236"
	I0318 13:54:58.195604 1157416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:54:58.195628 1157416 addons.go:234] Setting addon storage-provisioner=true in "no-preload-537236"
	W0318 13:54:58.195646 1157416 addons.go:243] addon storage-provisioner should already be in state true
	I0318 13:54:58.194275 1157416 addons.go:69] Setting default-storageclass=true in profile "no-preload-537236"
	I0318 13:54:58.195741 1157416 host.go:66] Checking if "no-preload-537236" exists ...
	I0318 13:54:58.195748 1157416 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-537236"
	I0318 13:54:58.194278 1157416 addons.go:69] Setting metrics-server=true in profile "no-preload-537236"
	I0318 13:54:58.195816 1157416 addons.go:234] Setting addon metrics-server=true in "no-preload-537236"
	W0318 13:54:58.195835 1157416 addons.go:243] addon metrics-server should already be in state true
	I0318 13:54:58.195864 1157416 host.go:66] Checking if "no-preload-537236" exists ...
	I0318 13:54:58.196133 1157416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:54:58.196177 1157416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:54:58.196187 1157416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:54:58.196224 1157416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:54:58.196236 1157416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:54:58.196256 1157416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:54:58.218212 1157416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36677
	I0318 13:54:58.218703 1157416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34827
	I0318 13:54:58.218934 1157416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35455
	I0318 13:54:58.219717 1157416 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:54:58.219858 1157416 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:54:58.220143 1157416 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:54:58.220417 1157416 main.go:141] libmachine: Using API Version  1
	I0318 13:54:58.220443 1157416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:54:58.220478 1157416 main.go:141] libmachine: Using API Version  1
	I0318 13:54:58.220497 1157416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:54:58.220628 1157416 main.go:141] libmachine: Using API Version  1
	I0318 13:54:58.220650 1157416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:54:58.220882 1157416 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:54:58.220950 1157416 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:54:58.220973 1157416 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:54:58.221491 1157416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:54:58.221527 1157416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:54:58.221736 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetState
	I0318 13:54:58.222116 1157416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:54:58.222138 1157416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:54:58.226247 1157416 addons.go:234] Setting addon default-storageclass=true in "no-preload-537236"
	W0318 13:54:58.226271 1157416 addons.go:243] addon default-storageclass should already be in state true
	I0318 13:54:58.226303 1157416 host.go:66] Checking if "no-preload-537236" exists ...
	I0318 13:54:58.226691 1157416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:54:58.226719 1157416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:54:58.238772 1157416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40275
	I0318 13:54:58.239288 1157416 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:54:58.239925 1157416 main.go:141] libmachine: Using API Version  1
	I0318 13:54:58.239954 1157416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:54:58.240375 1157416 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:54:58.240581 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetState
	I0318 13:54:58.241297 1157416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44327
	I0318 13:54:58.241774 1157416 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:54:58.242300 1157416 main.go:141] libmachine: Using API Version  1
	I0318 13:54:58.242321 1157416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:54:58.242787 1157416 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:54:58.243001 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetState
	I0318 13:54:58.243033 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:54:58.245371 1157416 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 13:54:58.245038 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:54:58.246964 1157416 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 13:54:58.246981 1157416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 13:54:58.246429 1157416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34901
	I0318 13:54:58.247010 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:54:58.248738 1157416 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:54:54.143902 1157263 pod_ready.go:81] duration metric: took 4m0.000627482s for pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace to be "Ready" ...
	E0318 13:54:54.143947 1157263 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace to be "Ready" (will not retry!)
	I0318 13:54:54.143967 1157263 pod_ready.go:38] duration metric: took 4m9.565422592s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:54:54.143994 1157263 kubeadm.go:591] duration metric: took 4m17.754456341s to restartPrimaryControlPlane
	W0318 13:54:54.144061 1157263 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 13:54:54.144092 1157263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 13:54:58.247424 1157416 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:54:58.250418 1157416 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 13:54:58.250441 1157416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 13:54:58.250459 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:54:58.250666 1157416 main.go:141] libmachine: Using API Version  1
	I0318 13:54:58.250683 1157416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:54:58.250733 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:54:58.251012 1157416 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:54:58.251354 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:54:58.251384 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:54:58.251730 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:54:58.252053 1157416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:54:58.252082 1157416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:54:58.252627 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:54:58.252823 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:54:58.252974 1157416 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa Username:docker}
	I0318 13:54:58.253647 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:54:58.254073 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:54:58.254102 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:54:58.254393 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:54:58.254599 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:54:58.254720 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:54:58.254858 1157416 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa Username:docker}
	I0318 13:54:58.275785 1157416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35695
	I0318 13:54:58.276467 1157416 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:54:58.277007 1157416 main.go:141] libmachine: Using API Version  1
	I0318 13:54:58.277037 1157416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:54:58.277396 1157416 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:54:58.277594 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetState
	I0318 13:54:58.279419 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:54:58.279699 1157416 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 13:54:58.279719 1157416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 13:54:58.279740 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:54:58.282813 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:54:58.283168 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:54:58.283198 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:54:58.283319 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:54:58.283505 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:54:58.283643 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:54:58.283826 1157416 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa Username:docker}
	I0318 13:54:58.433881 1157416 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:54:58.466338 1157416 node_ready.go:35] waiting up to 6m0s for node "no-preload-537236" to be "Ready" ...
	I0318 13:54:58.485186 1157416 node_ready.go:49] node "no-preload-537236" has status "Ready":"True"
	I0318 13:54:58.485217 1157416 node_ready.go:38] duration metric: took 18.833477ms for node "no-preload-537236" to be "Ready" ...
	I0318 13:54:58.485230 1157416 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:54:58.527030 1157416 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:54:58.545133 1157416 pod_ready.go:92] pod "etcd-no-preload-537236" in "kube-system" namespace has status "Ready":"True"
	I0318 13:54:58.545175 1157416 pod_ready.go:81] duration metric: took 18.11215ms for pod "etcd-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:54:58.545191 1157416 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:54:58.560108 1157416 pod_ready.go:92] pod "kube-apiserver-no-preload-537236" in "kube-system" namespace has status "Ready":"True"
	I0318 13:54:58.560144 1157416 pod_ready.go:81] duration metric: took 14.943161ms for pod "kube-apiserver-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:54:58.560159 1157416 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:54:58.562894 1157416 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 13:54:58.562924 1157416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 13:54:58.572477 1157416 pod_ready.go:92] pod "kube-controller-manager-no-preload-537236" in "kube-system" namespace has status "Ready":"True"
	I0318 13:54:58.572510 1157416 pod_ready.go:81] duration metric: took 12.342242ms for pod "kube-controller-manager-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:54:58.572523 1157416 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6c4c5" in "kube-system" namespace to be "Ready" ...
	I0318 13:54:58.594618 1157416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 13:54:58.597140 1157416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 13:54:58.644132 1157416 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 13:54:58.644166 1157416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 13:54:58.734467 1157416 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 13:54:58.734499 1157416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 13:54:58.760623 1157416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 13:54:59.005259 1157416 main.go:141] libmachine: Making call to close driver server
	I0318 13:54:59.005305 1157416 main.go:141] libmachine: (no-preload-537236) Calling .Close
	I0318 13:54:59.005668 1157416 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:54:59.005692 1157416 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:54:59.005704 1157416 main.go:141] libmachine: Making call to close driver server
	I0318 13:54:59.005713 1157416 main.go:141] libmachine: (no-preload-537236) Calling .Close
	I0318 13:54:59.005981 1157416 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:54:59.005996 1157416 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:54:59.006028 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Closing plugin on server side
	I0318 13:54:59.020654 1157416 main.go:141] libmachine: Making call to close driver server
	I0318 13:54:59.020682 1157416 main.go:141] libmachine: (no-preload-537236) Calling .Close
	I0318 13:54:59.022812 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Closing plugin on server side
	I0318 13:54:59.022814 1157416 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:54:59.022850 1157416 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:54:59.979647 1157416 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.382455448s)
	I0318 13:54:59.979723 1157416 main.go:141] libmachine: Making call to close driver server
	I0318 13:54:59.979743 1157416 main.go:141] libmachine: (no-preload-537236) Calling .Close
	I0318 13:54:59.980124 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Closing plugin on server side
	I0318 13:54:59.980223 1157416 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:54:59.980258 1157416 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:54:59.980281 1157416 main.go:141] libmachine: Making call to close driver server
	I0318 13:54:59.980354 1157416 main.go:141] libmachine: (no-preload-537236) Calling .Close
	I0318 13:54:59.980675 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Closing plugin on server side
	I0318 13:54:59.980756 1157416 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:54:59.982424 1157416 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:00.270401 1157416 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.509719085s)
	I0318 13:55:00.270464 1157416 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:00.270481 1157416 main.go:141] libmachine: (no-preload-537236) Calling .Close
	I0318 13:55:00.272779 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Closing plugin on server side
	I0318 13:55:00.272794 1157416 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:00.272817 1157416 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:00.272828 1157416 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:00.272837 1157416 main.go:141] libmachine: (no-preload-537236) Calling .Close
	I0318 13:55:00.274705 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Closing plugin on server side
	I0318 13:55:00.274734 1157416 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:00.274759 1157416 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:00.274789 1157416 addons.go:470] Verifying addon metrics-server=true in "no-preload-537236"
	I0318 13:55:00.276931 1157416 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0318 13:55:00.278586 1157416 addons.go:505] duration metric: took 2.086117916s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0318 13:55:00.607578 1157416 pod_ready.go:92] pod "kube-proxy-6c4c5" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:00.607607 1157416 pod_ready.go:81] duration metric: took 2.035076209s for pod "kube-proxy-6c4c5" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:00.607620 1157416 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:00.626505 1157416 pod_ready.go:92] pod "kube-scheduler-no-preload-537236" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:00.626531 1157416 pod_ready.go:81] duration metric: took 18.904572ms for pod "kube-scheduler-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:00.626540 1157416 pod_ready.go:38] duration metric: took 2.141296876s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:55:00.626556 1157416 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:55:00.626612 1157416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:55:00.677379 1157416 api_server.go:72] duration metric: took 2.484994048s to wait for apiserver process to appear ...
	I0318 13:55:00.677406 1157416 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:55:00.677426 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:55:00.694161 1157416 api_server.go:279] https://192.168.39.7:8443/healthz returned 200:
	ok
	I0318 13:55:00.696445 1157416 api_server.go:141] control plane version: v1.29.0-rc.2
	I0318 13:55:00.696479 1157416 api_server.go:131] duration metric: took 19.065082ms to wait for apiserver health ...
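The healthz check above can be reproduced directly against the same endpoint; a small sketch (the -k flag skips TLS verification, since the cluster CA is typically not in the local trust store):

	curl -k https://192.168.39.7:8443/healthz
	# expected body on success: ok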
	I0318 13:55:00.696492 1157416 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 13:55:00.707383 1157416 system_pods.go:59] 9 kube-system pods found
	I0318 13:55:00.707417 1157416 system_pods.go:61] "coredns-76f75df574-bhh4k" [6d6f9b9a-2f7e-46bc-9224-57dc077e444d] Running
	I0318 13:55:00.707421 1157416 system_pods.go:61] "coredns-76f75df574-grqdt" [f4ce5620-c97b-4ecd-baba-c5fc840b8127] Running
	I0318 13:55:00.707425 1157416 system_pods.go:61] "etcd-no-preload-537236" [ed8a1ea0-0ec7-4604-b9c9-3738a4569e02] Running
	I0318 13:55:00.707429 1157416 system_pods.go:61] "kube-apiserver-no-preload-537236" [5718ec63-58e7-463b-812b-a806e9fbbdd8] Running
	I0318 13:55:00.707432 1157416 system_pods.go:61] "kube-controller-manager-no-preload-537236" [4ff64d2e-9e89-44d6-9e8f-fa1440fc416a] Running
	I0318 13:55:00.707435 1157416 system_pods.go:61] "kube-proxy-6c4c5" [2dd6fcfc-7510-418d-baab-a0ec364391c1] Running
	I0318 13:55:00.707438 1157416 system_pods.go:61] "kube-scheduler-no-preload-537236" [b8c3f8b7-fc27-4647-880a-f82457de3a27] Running
	I0318 13:55:00.707445 1157416 system_pods.go:61] "metrics-server-57f55c9bc5-tkq6h" [14e262de-fd94-4888-96ab-75823109c8c2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:55:00.707450 1157416 system_pods.go:61] "storage-provisioner" [f02049f6-a08f-45ac-b285-cbdbb260ab59] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 13:55:00.707459 1157416 system_pods.go:74] duration metric: took 10.96036ms to wait for pod list to return data ...
	I0318 13:55:00.707467 1157416 default_sa.go:34] waiting for default service account to be created ...
	I0318 13:55:00.870267 1157416 default_sa.go:45] found service account: "default"
	I0318 13:55:00.870299 1157416 default_sa.go:55] duration metric: took 162.825175ms for default service account to be created ...
	I0318 13:55:00.870310 1157416 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 13:55:01.073950 1157416 system_pods.go:86] 9 kube-system pods found
	I0318 13:55:01.073985 1157416 system_pods.go:89] "coredns-76f75df574-bhh4k" [6d6f9b9a-2f7e-46bc-9224-57dc077e444d] Running
	I0318 13:55:01.073992 1157416 system_pods.go:89] "coredns-76f75df574-grqdt" [f4ce5620-c97b-4ecd-baba-c5fc840b8127] Running
	I0318 13:55:01.073998 1157416 system_pods.go:89] "etcd-no-preload-537236" [ed8a1ea0-0ec7-4604-b9c9-3738a4569e02] Running
	I0318 13:55:01.074004 1157416 system_pods.go:89] "kube-apiserver-no-preload-537236" [5718ec63-58e7-463b-812b-a806e9fbbdd8] Running
	I0318 13:55:01.074010 1157416 system_pods.go:89] "kube-controller-manager-no-preload-537236" [4ff64d2e-9e89-44d6-9e8f-fa1440fc416a] Running
	I0318 13:55:01.074017 1157416 system_pods.go:89] "kube-proxy-6c4c5" [2dd6fcfc-7510-418d-baab-a0ec364391c1] Running
	I0318 13:55:01.074035 1157416 system_pods.go:89] "kube-scheduler-no-preload-537236" [b8c3f8b7-fc27-4647-880a-f82457de3a27] Running
	I0318 13:55:01.074055 1157416 system_pods.go:89] "metrics-server-57f55c9bc5-tkq6h" [14e262de-fd94-4888-96ab-75823109c8c2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:55:01.074069 1157416 system_pods.go:89] "storage-provisioner" [f02049f6-a08f-45ac-b285-cbdbb260ab59] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 13:55:01.074085 1157416 system_pods.go:126] duration metric: took 203.766894ms to wait for k8s-apps to be running ...
	I0318 13:55:01.074100 1157416 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 13:55:01.074152 1157416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:55:01.091165 1157416 system_svc.go:56] duration metric: took 17.056217ms WaitForService to wait for kubelet
	I0318 13:55:01.091195 1157416 kubeadm.go:576] duration metric: took 2.898817514s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:55:01.091224 1157416 node_conditions.go:102] verifying NodePressure condition ...
	I0318 13:55:01.270664 1157416 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:55:01.270724 1157416 node_conditions.go:123] node cpu capacity is 2
	I0318 13:55:01.270737 1157416 node_conditions.go:105] duration metric: took 179.506857ms to run NodePressure ...
	I0318 13:55:01.270750 1157416 start.go:240] waiting for startup goroutines ...
	I0318 13:55:01.270758 1157416 start.go:245] waiting for cluster config update ...
	I0318 13:55:01.270769 1157416 start.go:254] writing updated cluster config ...
	I0318 13:55:01.271069 1157416 ssh_runner.go:195] Run: rm -f paused
	I0318 13:55:01.325353 1157416 start.go:600] kubectl: 1.29.3, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0318 13:55:01.327367 1157416 out.go:177] * Done! kubectl is now configured to use "no-preload-537236" cluster and "default" namespace by default
	I0318 13:55:03.715412 1157887 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.413874479s)
	I0318 13:55:03.715519 1157887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:55:03.732767 1157887 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:55:03.743375 1157887 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:55:03.753393 1157887 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:55:03.753414 1157887 kubeadm.go:156] found existing configuration files:
	
	I0318 13:55:03.753457 1157887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0318 13:55:03.763226 1157887 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:55:03.763289 1157887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:55:03.774001 1157887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0318 13:55:03.783943 1157887 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:55:03.783991 1157887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:55:03.794580 1157887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0318 13:55:03.803881 1157887 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:55:03.803921 1157887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:55:03.813709 1157887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0318 13:55:03.823096 1157887 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:55:03.823138 1157887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
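The grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8444, and is otherwise removed before kubeadm init rewrites it. A condensed shell equivalent of that loop (a sketch, not minikube's actual implementation) would be:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the file only if it already targets the expected control-plane endpoint
      sudo grep -q 'https://control-plane.minikube.internal:8444' "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done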
	I0318 13:55:03.832790 1157887 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 13:55:03.891459 1157887 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 13:55:03.891672 1157887 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 13:55:04.056923 1157887 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 13:55:04.057055 1157887 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 13:55:04.057197 1157887 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 13:55:04.312932 1157887 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 13:55:04.314955 1157887 out.go:204]   - Generating certificates and keys ...
	I0318 13:55:04.315063 1157887 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 13:55:04.315156 1157887 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 13:55:04.315286 1157887 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 13:55:04.315388 1157887 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 13:55:04.315490 1157887 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 13:55:04.315568 1157887 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 13:55:04.315668 1157887 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 13:55:04.315743 1157887 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 13:55:04.315844 1157887 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 13:55:04.315969 1157887 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 13:55:04.316034 1157887 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 13:55:04.316108 1157887 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 13:55:04.643155 1157887 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 13:55:04.927731 1157887 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 13:55:05.058875 1157887 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 13:55:05.221520 1157887 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 13:55:05.221985 1157887 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 13:55:05.224297 1157887 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 13:55:05.226200 1157887 out.go:204]   - Booting up control plane ...
	I0318 13:55:05.226326 1157887 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 13:55:05.226425 1157887 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 13:55:05.226520 1157887 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 13:55:05.244878 1157887 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 13:55:05.245461 1157887 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 13:55:05.245531 1157887 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 13:55:05.388215 1157887 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 13:55:11.393083 1157887 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.004356 seconds
	I0318 13:55:11.393511 1157887 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 13:55:11.412586 1157887 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 13:55:11.939563 1157887 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 13:55:11.939844 1157887 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-569210 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 13:55:12.457349 1157887 kubeadm.go:309] [bootstrap-token] Using token: z44dyw.tsw47dmn862zavdi
	I0318 13:55:12.458855 1157887 out.go:204]   - Configuring RBAC rules ...
	I0318 13:55:12.459037 1157887 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 13:55:12.466850 1157887 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 13:55:12.482822 1157887 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 13:55:12.488920 1157887 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 13:55:12.496947 1157887 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 13:55:12.507954 1157887 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 13:55:12.535337 1157887 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 13:55:12.763814 1157887 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 13:55:12.877248 1157887 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 13:55:12.878047 1157887 kubeadm.go:309] 
	I0318 13:55:12.878159 1157887 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 13:55:12.878183 1157887 kubeadm.go:309] 
	I0318 13:55:12.878291 1157887 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 13:55:12.878301 1157887 kubeadm.go:309] 
	I0318 13:55:12.878334 1157887 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 13:55:12.878432 1157887 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 13:55:12.878519 1157887 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 13:55:12.878531 1157887 kubeadm.go:309] 
	I0318 13:55:12.878603 1157887 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 13:55:12.878615 1157887 kubeadm.go:309] 
	I0318 13:55:12.878690 1157887 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 13:55:12.878703 1157887 kubeadm.go:309] 
	I0318 13:55:12.878762 1157887 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 13:55:12.878858 1157887 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 13:55:12.878974 1157887 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 13:55:12.878985 1157887 kubeadm.go:309] 
	I0318 13:55:12.879087 1157887 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 13:55:12.879164 1157887 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 13:55:12.879171 1157887 kubeadm.go:309] 
	I0318 13:55:12.879275 1157887 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token z44dyw.tsw47dmn862zavdi \
	I0318 13:55:12.879410 1157887 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a1344f0f3711f58fc1cd7b9626e9cee7b8515c38b8838e5f1a0d7979c7202ddf \
	I0318 13:55:12.879464 1157887 kubeadm.go:309] 	--control-plane 
	I0318 13:55:12.879484 1157887 kubeadm.go:309] 
	I0318 13:55:12.879576 1157887 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 13:55:12.879586 1157887 kubeadm.go:309] 
	I0318 13:55:12.879719 1157887 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token z44dyw.tsw47dmn862zavdi \
	I0318 13:55:12.879871 1157887 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a1344f0f3711f58fc1cd7b9626e9cee7b8515c38b8838e5f1a0d7979c7202ddf 
	I0318 13:55:12.883383 1157887 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
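kubeadm init succeeded but warns that kubelet.service is not enabled; the fix it suggests, plus a basic post-init check of the control plane (a sketch reusing the binary and kubeconfig paths that appear in this log), would be:

    sudo systemctl enable kubelet.service        # addresses the [WARNING Service-Kubelet] above
    sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/etc/kubernetes/admin.conf \
      get pods -n kube-system                    # apiserver, etcd, scheduler, controller-manager should be listed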
	I0318 13:55:12.883432 1157887 cni.go:84] Creating CNI manager for ""
	I0318 13:55:12.883447 1157887 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:55:12.885248 1157887 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 13:55:12.886708 1157887 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 13:55:12.929444 1157887 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
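The 457-byte conflist written here is the bridge CNI configuration announced by the "Configuring bridge CNI" step; its contents are not included in the log, but it can be inspected on the node:

    sudo ls /etc/cni/net.d/
    sudo cat /etc/cni/net.d/1-k8s.conflist       # bridge plugin config consumed by CRI-O for pod networking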
	I0318 13:55:13.043416 1157887 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 13:55:13.043541 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:13.043567 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-569210 minikube.k8s.io/updated_at=2024_03_18T13_55_13_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a minikube.k8s.io/name=default-k8s-diff-port-569210 minikube.k8s.io/primary=true
	I0318 13:55:13.064927 1157887 ops.go:34] apiserver oom_adj: -16
	I0318 13:55:13.286093 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:13.786780 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:14.286728 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:14.786442 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:15.287103 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:15.786443 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:16.287138 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:16.113672 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:55:16.113963 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
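These two lines come from a different concurrent start (process 1157708) whose kubelet has not come up yet. When this healthz probe keeps failing, the usual next steps on that node are (a sketch):

    systemctl status kubelet --no-pager                   # is the unit running or crash-looping?
    sudo journalctl -u kubelet --no-pager | tail -n 50    # recent kubelet errors
    curl -s http://localhost:10248/healthz                # the same probe kubeadm is retrying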
	I0318 13:55:16.787069 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:17.286490 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:17.786317 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:18.286840 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:18.786872 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:19.286911 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:19.786554 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:20.286216 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:20.786282 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:21.286590 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:21.787103 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:22.286966 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:22.786928 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:23.286275 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:23.786464 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:24.286791 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:24.787028 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:24.938400 1157887 kubeadm.go:1107] duration metric: took 11.894943444s to wait for elevateKubeSystemPrivileges
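The repeated "kubectl get sa default" calls above are minikube polling (at roughly 500ms intervals) until the default service account exists, which is how it knows the apiserver is serving writes and the kube-system privilege elevation can complete. A shell equivalent of that wait loop (a sketch, not minikube's Go code) is:

    until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5    # retry until the default ServiceAccount has been created
    done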
	W0318 13:55:24.938440 1157887 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 13:55:24.938448 1157887 kubeadm.go:393] duration metric: took 5m12.933246555s to StartCluster
	I0318 13:55:24.938470 1157887 settings.go:142] acquiring lock: {Name:mk2d6b94ee5fa5f1dbbb15ba1d5560c3c0f78110 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:55:24.938621 1157887 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 13:55:24.940984 1157887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/kubeconfig: {Name:mk9c139f2702214315ee08dd7c5d02f739047458 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:55:24.941286 1157887 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.3 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 13:55:24.943151 1157887 out.go:177] * Verifying Kubernetes components...
	I0318 13:55:24.941329 1157887 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 13:55:24.941469 1157887 config.go:182] Loaded profile config "default-k8s-diff-port-569210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:55:24.944770 1157887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:55:24.944780 1157887 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-569210"
	I0318 13:55:24.944830 1157887 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-569210"
	W0318 13:55:24.944845 1157887 addons.go:243] addon storage-provisioner should already be in state true
	I0318 13:55:24.944846 1157887 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-569210"
	I0318 13:55:24.944851 1157887 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-569210"
	I0318 13:55:24.944880 1157887 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-569210"
	I0318 13:55:24.944888 1157887 host.go:66] Checking if "default-k8s-diff-port-569210" exists ...
	W0318 13:55:24.944897 1157887 addons.go:243] addon metrics-server should already be in state true
	I0318 13:55:24.944927 1157887 host.go:66] Checking if "default-k8s-diff-port-569210" exists ...
	I0318 13:55:24.944881 1157887 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-569210"
	I0318 13:55:24.945311 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:24.945350 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:24.945375 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:24.945400 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:24.945311 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:24.945460 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:24.963173 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42139
	I0318 13:55:24.963820 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:24.964695 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:55:24.964725 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:24.965120 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:24.965696 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:24.965735 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:24.965976 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43645
	I0318 13:55:24.966207 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43495
	I0318 13:55:24.966502 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:24.966598 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:24.967058 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:55:24.967062 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:55:24.967083 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:24.967100 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:24.967467 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:24.967603 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:24.967671 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetState
	I0318 13:55:24.968107 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:24.968146 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:24.971673 1157887 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-569210"
	W0318 13:55:24.971696 1157887 addons.go:243] addon default-storageclass should already be in state true
	I0318 13:55:24.971729 1157887 host.go:66] Checking if "default-k8s-diff-port-569210" exists ...
	I0318 13:55:24.972091 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:24.972129 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:24.986041 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42211
	I0318 13:55:24.986481 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:24.986989 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:55:24.987009 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:24.987352 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:24.987605 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44555
	I0318 13:55:24.987613 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetState
	I0318 13:55:24.988061 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:24.988481 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:55:24.988499 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:24.988904 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:24.989082 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetState
	I0318 13:55:24.989785 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:55:24.992033 1157887 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 13:55:24.990673 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:55:24.991225 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36687
	I0318 13:55:24.993532 1157887 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 13:55:24.993557 1157887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 13:55:24.993587 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:55:24.995449 1157887 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:55:24.994077 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:24.996749 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:55:24.997153 1157887 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 13:55:24.997171 1157887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 13:55:24.997191 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:55:24.997431 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:55:24.997463 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:55:24.997466 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:55:24.997665 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:55:24.997684 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:24.997746 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:55:24.998183 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:24.998273 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:55:24.998497 1157887 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa Username:docker}
	I0318 13:55:24.998701 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:24.998735 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:24.999951 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:55:25.000431 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:55:25.000454 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:55:25.000676 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:55:25.000865 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:55:25.001021 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:55:25.001160 1157887 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa Username:docker}
	I0318 13:55:25.016442 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32783
	I0318 13:55:25.016827 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:25.017300 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:55:25.017328 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:25.017686 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:25.017906 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetState
	I0318 13:55:25.019440 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:55:25.019694 1157887 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 13:55:25.019711 1157887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 13:55:25.019731 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:55:25.022079 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:55:25.022370 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:55:25.022398 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:55:25.022497 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:55:25.022645 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:55:25.022762 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:55:25.022937 1157887 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa Username:docker}
	I0318 13:55:25.188474 1157887 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:55:25.208092 1157887 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-569210" to be "Ready" ...
	I0318 13:55:25.218757 1157887 node_ready.go:49] node "default-k8s-diff-port-569210" has status "Ready":"True"
	I0318 13:55:25.218789 1157887 node_ready.go:38] duration metric: took 10.658955ms for node "default-k8s-diff-port-569210" to be "Ready" ...
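The node readiness check completes in ~10ms because the node object already reports Ready; the same condition can be read directly (a sketch, assuming the context written for this profile):

    kubectl --context default-k8s-diff-port-569210 get node default-k8s-diff-port-569210 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'    # prints "True" when Ready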
	I0318 13:55:25.218829 1157887 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:55:25.224381 1157887 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:25.235938 1157887 pod_ready.go:92] pod "etcd-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:25.235962 1157887 pod_ready.go:81] duration metric: took 11.550686ms for pod "etcd-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:25.235971 1157887 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:25.242985 1157887 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:25.243014 1157887 pod_ready.go:81] duration metric: took 7.034818ms for pod "kube-apiserver-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:25.243027 1157887 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:25.255777 1157887 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:25.255801 1157887 pod_ready.go:81] duration metric: took 12.766918ms for pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:25.255811 1157887 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2pp8z" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:25.301824 1157887 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 13:55:25.301846 1157887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 13:55:25.330301 1157887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 13:55:25.348473 1157887 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 13:55:25.348500 1157887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 13:55:25.365746 1157887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 13:55:25.398074 1157887 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 13:55:25.398099 1157887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 13:55:25.423951 1157887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 13:55:27.292115 1157887 pod_ready.go:92] pod "kube-proxy-2pp8z" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:27.292202 1157887 pod_ready.go:81] duration metric: took 2.036383518s for pod "kube-proxy-2pp8z" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:27.292227 1157887 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:27.299705 1157887 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:27.299732 1157887 pod_ready.go:81] duration metric: took 7.486631ms for pod "kube-scheduler-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:27.299743 1157887 pod_ready.go:38] duration metric: took 2.08090143s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:55:27.299762 1157887 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:55:27.299824 1157887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:55:27.706241 1157887 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.375885124s)
	I0318 13:55:27.706314 1157887 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:27.706326 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .Close
	I0318 13:55:27.706330 1157887 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.340547601s)
	I0318 13:55:27.706377 1157887 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:27.706392 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .Close
	I0318 13:55:27.706630 1157887 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.282631636s)
	I0318 13:55:27.706900 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | Closing plugin on server side
	I0318 13:55:27.706828 1157887 api_server.go:72] duration metric: took 2.765497711s to wait for apiserver process to appear ...
	I0318 13:55:27.706940 1157887 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:55:27.706879 1157887 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:27.706979 1157887 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:27.706996 1157887 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:27.707024 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .Close
	I0318 13:55:27.706916 1157887 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:27.707088 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .Close
	I0318 13:55:27.706985 1157887 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0318 13:55:27.707343 1157887 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:27.707366 1157887 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:27.707372 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | Closing plugin on server side
	I0318 13:55:27.707405 1157887 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:27.707417 1157887 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:27.707426 1157887 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:27.707455 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .Close
	I0318 13:55:27.707682 1157887 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:27.707696 1157887 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:27.707706 1157887 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-569210"
	I0318 13:55:27.708614 1157887 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:27.708664 1157887 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:27.708694 1157887 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:27.708783 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .Close
	I0318 13:55:27.709092 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | Closing plugin on server side
	I0318 13:55:27.709151 1157887 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:27.709175 1157887 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:27.718110 1157887 api_server.go:279] https://192.168.61.3:8444/healthz returned 200:
	ok
	I0318 13:55:27.719497 1157887 api_server.go:141] control plane version: v1.28.4
	I0318 13:55:27.719518 1157887 api_server.go:131] duration metric: took 12.563372ms to wait for apiserver health ...
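The healthz probe here is a plain HTTPS GET against the apiserver; it can be reproduced from outside the minikube code (a sketch; -k skips verification of the cluster CA, and both paths are usually readable anonymously under default kubeadm RBAC):

    curl -k https://192.168.61.3:8444/healthz     # expect: ok
    curl -k https://192.168.61.3:8444/version     # reports the control-plane version (v1.28.4 above)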
	I0318 13:55:27.719526 1157887 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 13:55:27.739882 1157887 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:27.739914 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .Close
	I0318 13:55:27.740263 1157887 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:27.740296 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | Closing plugin on server side
	I0318 13:55:27.740318 1157887 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:27.742102 1157887 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
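With metrics-server, storage-provisioner, and the default storageclass enabled, a quick verification of the addon objects (a sketch using this profile's context) would be:

    kubectl --context default-k8s-diff-port-569210 -n kube-system get deploy metrics-server
    kubectl --context default-k8s-diff-port-569210 -n kube-system get pod storage-provisioner
    kubectl --context default-k8s-diff-port-569210 get storageclass    # one class should be annotated as the default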
	I0318 13:55:27.368024 1157263 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (33.223901258s)
	I0318 13:55:27.368118 1157263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:55:27.388474 1157263 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:55:27.402749 1157263 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:55:27.417121 1157263 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:55:27.417184 1157263 kubeadm.go:156] found existing configuration files:
	
	I0318 13:55:27.417235 1157263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:55:27.429920 1157263 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:55:27.429997 1157263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:55:27.442468 1157263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:55:27.454842 1157263 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:55:27.454913 1157263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:55:27.467911 1157263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:55:27.480201 1157263 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:55:27.480272 1157263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:55:27.496430 1157263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:55:27.512020 1157263 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:55:27.512092 1157263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:55:27.528102 1157263 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 13:55:27.601072 1157263 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 13:55:27.601235 1157263 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 13:55:27.796445 1157263 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 13:55:27.796574 1157263 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 13:55:27.796730 1157263 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 13:55:28.079026 1157263 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 13:55:27.743429 1157887 addons.go:505] duration metric: took 2.802098895s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I0318 13:55:27.744694 1157887 system_pods.go:59] 9 kube-system pods found
	I0318 13:55:27.744727 1157887 system_pods.go:61] "coredns-5dd5756b68-j5qxm" [164d2cc3-0891-4fcd-81bd-34d7cf0c691c] Running
	I0318 13:55:27.744733 1157887 system_pods.go:61] "coredns-5dd5756b68-xdcht" [bf264558-6c11-44c9-82d6-ea23aea43dc9] Running
	I0318 13:55:27.744738 1157887 system_pods.go:61] "etcd-default-k8s-diff-port-569210" [8d51c0c6-6005-4f76-917c-20f07b73742f] Running
	I0318 13:55:27.744744 1157887 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-569210" [31a8160d-14db-4383-b833-a8bc3f5990ba] Running
	I0318 13:55:27.744750 1157887 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-569210" [173e4d84-8dc2-47fc-9c4d-ed613d180813] Running
	I0318 13:55:27.744756 1157887 system_pods.go:61] "kube-proxy-2pp8z" [912b3f56-3df6-485f-a01a-60801b867b86] Running
	I0318 13:55:27.744764 1157887 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-569210" [1ee4e8f8-3fad-45a8-be35-25a879aaaa7b] Running
	I0318 13:55:27.744777 1157887 system_pods.go:61] "metrics-server-57f55c9bc5-ng9ww" [4c8209dc-b6ba-427d-ba32-0da4993b0902] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:55:27.744783 1157887 system_pods.go:61] "storage-provisioner" [f0dfdeb1-f567-41df-98c3-7987f0fd7b2b] Pending
	I0318 13:55:27.744797 1157887 system_pods.go:74] duration metric: took 25.264322ms to wait for pod list to return data ...
	I0318 13:55:27.744810 1157887 default_sa.go:34] waiting for default service account to be created ...
	I0318 13:55:27.755398 1157887 default_sa.go:45] found service account: "default"
	I0318 13:55:27.755427 1157887 default_sa.go:55] duration metric: took 10.607153ms for default service account to be created ...
	I0318 13:55:27.755439 1157887 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 13:55:27.815477 1157887 system_pods.go:86] 9 kube-system pods found
	I0318 13:55:27.815507 1157887 system_pods.go:89] "coredns-5dd5756b68-j5qxm" [164d2cc3-0891-4fcd-81bd-34d7cf0c691c] Running
	I0318 13:55:27.815512 1157887 system_pods.go:89] "coredns-5dd5756b68-xdcht" [bf264558-6c11-44c9-82d6-ea23aea43dc9] Running
	I0318 13:55:27.815517 1157887 system_pods.go:89] "etcd-default-k8s-diff-port-569210" [8d51c0c6-6005-4f76-917c-20f07b73742f] Running
	I0318 13:55:27.815521 1157887 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-569210" [31a8160d-14db-4383-b833-a8bc3f5990ba] Running
	I0318 13:55:27.815526 1157887 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-569210" [173e4d84-8dc2-47fc-9c4d-ed613d180813] Running
	I0318 13:55:27.815529 1157887 system_pods.go:89] "kube-proxy-2pp8z" [912b3f56-3df6-485f-a01a-60801b867b86] Running
	I0318 13:55:27.815533 1157887 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-569210" [1ee4e8f8-3fad-45a8-be35-25a879aaaa7b] Running
	I0318 13:55:27.815540 1157887 system_pods.go:89] "metrics-server-57f55c9bc5-ng9ww" [4c8209dc-b6ba-427d-ba32-0da4993b0902] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:55:27.815546 1157887 system_pods.go:89] "storage-provisioner" [f0dfdeb1-f567-41df-98c3-7987f0fd7b2b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 13:55:27.815557 1157887 system_pods.go:126] duration metric: took 60.111832ms to wait for k8s-apps to be running ...
	I0318 13:55:27.815566 1157887 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 13:55:27.815610 1157887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:55:27.834266 1157887 system_svc.go:56] duration metric: took 18.687554ms WaitForService to wait for kubelet
	I0318 13:55:27.834304 1157887 kubeadm.go:576] duration metric: took 2.892974502s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:55:27.834345 1157887 node_conditions.go:102] verifying NodePressure condition ...
	I0318 13:55:28.013031 1157887 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:55:28.013095 1157887 node_conditions.go:123] node cpu capacity is 2
	I0318 13:55:28.013148 1157887 node_conditions.go:105] duration metric: took 178.79502ms to run NodePressure ...
	I0318 13:55:28.013169 1157887 start.go:240] waiting for startup goroutines ...
	I0318 13:55:28.013181 1157887 start.go:245] waiting for cluster config update ...
	I0318 13:55:28.013199 1157887 start.go:254] writing updated cluster config ...
	I0318 13:55:28.013519 1157887 ssh_runner.go:195] Run: rm -f paused
	I0318 13:55:28.092810 1157887 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 13:55:28.095783 1157887 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-569210" cluster and "default" namespace by default
	I0318 13:55:28.080939 1157263 out.go:204]   - Generating certificates and keys ...
	I0318 13:55:28.081056 1157263 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 13:55:28.081145 1157263 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 13:55:28.081249 1157263 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 13:55:28.082078 1157263 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 13:55:28.082860 1157263 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 13:55:28.083397 1157263 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 13:55:28.084597 1157263 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 13:55:28.084941 1157263 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 13:55:28.085603 1157263 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 13:55:28.086461 1157263 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 13:55:28.087265 1157263 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 13:55:28.087343 1157263 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 13:55:28.348996 1157263 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 13:55:28.516513 1157263 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 13:55:28.585513 1157263 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 13:55:28.817150 1157263 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 13:55:28.817900 1157263 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 13:55:28.820280 1157263 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 13:55:28.822114 1157263 out.go:204]   - Booting up control plane ...
	I0318 13:55:28.822217 1157263 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 13:55:28.822811 1157263 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 13:55:28.825310 1157263 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 13:55:28.845906 1157263 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 13:55:28.847013 1157263 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 13:55:28.847069 1157263 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 13:55:28.992421 1157263 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 13:55:35.495384 1157263 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.502688 seconds
	I0318 13:55:35.495578 1157263 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 13:55:35.517088 1157263 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 13:55:36.049915 1157263 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 13:55:36.050163 1157263 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-173036 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 13:55:36.571450 1157263 kubeadm.go:309] [bootstrap-token] Using token: a1fi6l.v36l7wrnalucsepl
	I0318 13:55:36.573263 1157263 out.go:204]   - Configuring RBAC rules ...
	I0318 13:55:36.573448 1157263 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 13:55:36.581322 1157263 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 13:55:36.594853 1157263 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 13:55:36.598538 1157263 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 13:55:36.602430 1157263 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 13:55:36.605534 1157263 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 13:55:36.621332 1157263 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 13:55:36.865518 1157263 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 13:55:36.990015 1157263 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 13:55:36.991079 1157263 kubeadm.go:309] 
	I0318 13:55:36.991168 1157263 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 13:55:36.991181 1157263 kubeadm.go:309] 
	I0318 13:55:36.991288 1157263 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 13:55:36.991299 1157263 kubeadm.go:309] 
	I0318 13:55:36.991320 1157263 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 13:55:36.991395 1157263 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 13:55:36.991475 1157263 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 13:55:36.991494 1157263 kubeadm.go:309] 
	I0318 13:55:36.991572 1157263 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 13:55:36.991581 1157263 kubeadm.go:309] 
	I0318 13:55:36.991646 1157263 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 13:55:36.991658 1157263 kubeadm.go:309] 
	I0318 13:55:36.991737 1157263 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 13:55:36.991839 1157263 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 13:55:36.991954 1157263 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 13:55:36.991966 1157263 kubeadm.go:309] 
	I0318 13:55:36.992073 1157263 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 13:55:36.992174 1157263 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 13:55:36.992186 1157263 kubeadm.go:309] 
	I0318 13:55:36.992304 1157263 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token a1fi6l.v36l7wrnalucsepl \
	I0318 13:55:36.992477 1157263 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a1344f0f3711f58fc1cd7b9626e9cee7b8515c38b8838e5f1a0d7979c7202ddf \
	I0318 13:55:36.992522 1157263 kubeadm.go:309] 	--control-plane 
	I0318 13:55:36.992532 1157263 kubeadm.go:309] 
	I0318 13:55:36.992642 1157263 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 13:55:36.992656 1157263 kubeadm.go:309] 
	I0318 13:55:36.992769 1157263 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token a1fi6l.v36l7wrnalucsepl \
	I0318 13:55:36.992922 1157263 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a1344f0f3711f58fc1cd7b9626e9cee7b8515c38b8838e5f1a0d7979c7202ddf 
	I0318 13:55:36.994542 1157263 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 13:55:36.994648 1157263 cni.go:84] Creating CNI manager for ""
	I0318 13:55:36.994660 1157263 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:55:36.996526 1157263 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 13:55:36.997929 1157263 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 13:55:37.047757 1157263 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 13:55:37.075078 1157263 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 13:55:37.075167 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:37.075199 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-173036 minikube.k8s.io/updated_at=2024_03_18T13_55_37_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a minikube.k8s.io/name=embed-certs-173036 minikube.k8s.io/primary=true
	I0318 13:55:37.236857 1157263 ops.go:34] apiserver oom_adj: -16
	I0318 13:55:37.422453 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:37.922622 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:38.423527 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:38.922743 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:39.422721 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:39.923438 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:40.422599 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:40.923170 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:41.422812 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:41.922526 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:42.422594 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:42.922835 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:43.423479 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:43.923114 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:44.422672 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:44.922883 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:45.422863 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:45.922770 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:46.423473 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:46.923125 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:47.423378 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:47.923366 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:48.422566 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:48.923231 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:49.422505 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:49.554542 1157263 kubeadm.go:1107] duration metric: took 12.479441091s to wait for elevateKubeSystemPrivileges
	W0318 13:55:49.554590 1157263 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 13:55:49.554602 1157263 kubeadm.go:393] duration metric: took 5m13.226983757s to StartCluster
	I0318 13:55:49.554626 1157263 settings.go:142] acquiring lock: {Name:mk2d6b94ee5fa5f1dbbb15ba1d5560c3c0f78110 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:55:49.554778 1157263 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 13:55:49.556962 1157263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/kubeconfig: {Name:mk9c139f2702214315ee08dd7c5d02f739047458 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:55:49.557273 1157263 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.191 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 13:55:49.558774 1157263 out.go:177] * Verifying Kubernetes components...
	I0318 13:55:49.557321 1157263 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 13:55:49.557488 1157263 config.go:182] Loaded profile config "embed-certs-173036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:55:49.560195 1157263 addons.go:69] Setting default-storageclass=true in profile "embed-certs-173036"
	I0318 13:55:49.560201 1157263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:55:49.560211 1157263 addons.go:69] Setting metrics-server=true in profile "embed-certs-173036"
	I0318 13:55:49.560237 1157263 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-173036"
	I0318 13:55:49.560247 1157263 addons.go:234] Setting addon metrics-server=true in "embed-certs-173036"
	W0318 13:55:49.560254 1157263 addons.go:243] addon metrics-server should already be in state true
	I0318 13:55:49.560201 1157263 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-173036"
	I0318 13:55:49.560282 1157263 host.go:66] Checking if "embed-certs-173036" exists ...
	I0318 13:55:49.560302 1157263 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-173036"
	W0318 13:55:49.560317 1157263 addons.go:243] addon storage-provisioner should already be in state true
	I0318 13:55:49.560388 1157263 host.go:66] Checking if "embed-certs-173036" exists ...
	I0318 13:55:49.560644 1157263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:49.560676 1157263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:49.560678 1157263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:49.560716 1157263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:49.560777 1157263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:49.560803 1157263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:49.577682 1157263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32889
	I0318 13:55:49.577714 1157263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38841
	I0318 13:55:49.578101 1157263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46617
	I0318 13:55:49.578261 1157263 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:49.578285 1157263 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:49.578493 1157263 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:49.578880 1157263 main.go:141] libmachine: Using API Version  1
	I0318 13:55:49.578907 1157263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:49.578882 1157263 main.go:141] libmachine: Using API Version  1
	I0318 13:55:49.578923 1157263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:49.579013 1157263 main.go:141] libmachine: Using API Version  1
	I0318 13:55:49.579036 1157263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:49.579302 1157263 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:49.579333 1157263 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:49.579538 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetState
	I0318 13:55:49.579598 1157263 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:49.579914 1157263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:49.579955 1157263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:49.580203 1157263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:49.580238 1157263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:49.583587 1157263 addons.go:234] Setting addon default-storageclass=true in "embed-certs-173036"
	W0318 13:55:49.583610 1157263 addons.go:243] addon default-storageclass should already be in state true
	I0318 13:55:49.583641 1157263 host.go:66] Checking if "embed-certs-173036" exists ...
	I0318 13:55:49.584009 1157263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:49.584040 1157263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:49.596862 1157263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46015
	I0318 13:55:49.597356 1157263 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:49.597859 1157263 main.go:141] libmachine: Using API Version  1
	I0318 13:55:49.598026 1157263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:49.598110 1157263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38169
	I0318 13:55:49.598635 1157263 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:49.599310 1157263 main.go:141] libmachine: Using API Version  1
	I0318 13:55:49.599331 1157263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:49.599405 1157263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36747
	I0318 13:55:49.599732 1157263 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:49.599874 1157263 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:49.600120 1157263 main.go:141] libmachine: Using API Version  1
	I0318 13:55:49.600135 1157263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:49.600197 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetState
	I0318 13:55:49.600439 1157263 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:49.601019 1157263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:49.601052 1157263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:49.602172 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:55:49.604115 1157263 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:55:49.606034 1157263 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 13:55:49.606049 1157263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 13:55:49.606065 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:55:49.603277 1157263 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:49.606323 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetState
	I0318 13:55:49.608600 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:55:49.610213 1157263 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 13:55:49.611511 1157263 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 13:55:49.611531 1157263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 13:55:49.611545 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:55:49.609758 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:55:49.611598 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:55:49.611613 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:55:49.610550 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:55:49.611727 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:55:49.611868 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:55:49.611991 1157263 sshutil.go:53] new ssh client: &{IP:192.168.50.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa Username:docker}
	I0318 13:55:49.614689 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:55:49.615105 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:55:49.615322 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:55:49.615403 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:55:49.615531 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:55:49.615672 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:55:49.615773 1157263 sshutil.go:53] new ssh client: &{IP:192.168.50.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa Username:docker}
	I0318 13:55:49.620257 1157263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41729
	I0318 13:55:49.620653 1157263 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:49.621225 1157263 main.go:141] libmachine: Using API Version  1
	I0318 13:55:49.621243 1157263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:49.621610 1157263 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:49.621790 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetState
	I0318 13:55:49.623303 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:55:49.623566 1157263 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 13:55:49.623580 1157263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 13:55:49.623594 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:55:49.626325 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:55:49.626733 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:55:49.626755 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:55:49.627028 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:55:49.627196 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:55:49.627335 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:55:49.627441 1157263 sshutil.go:53] new ssh client: &{IP:192.168.50.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa Username:docker}
	I0318 13:55:49.791524 1157263 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:55:49.847829 1157263 node_ready.go:35] waiting up to 6m0s for node "embed-certs-173036" to be "Ready" ...
	I0318 13:55:49.860595 1157263 node_ready.go:49] node "embed-certs-173036" has status "Ready":"True"
	I0318 13:55:49.860621 1157263 node_ready.go:38] duration metric: took 12.757412ms for node "embed-certs-173036" to be "Ready" ...
	I0318 13:55:49.860631 1157263 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:55:49.870524 1157263 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ft594" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:49.917170 1157263 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 13:55:49.917197 1157263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 13:55:49.965845 1157263 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 13:55:49.965871 1157263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 13:55:49.969600 1157263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 13:55:49.982887 1157263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 13:55:50.023768 1157263 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 13:55:50.023795 1157263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 13:55:50.139120 1157263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 13:55:51.877589 1157263 pod_ready.go:92] pod "coredns-5dd5756b68-ft594" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:51.877618 1157263 pod_ready.go:81] duration metric: took 2.007066644s for pod "coredns-5dd5756b68-ft594" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:51.877634 1157263 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-p6dw8" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.007908 1157263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.02498147s)
	I0318 13:55:52.007966 1157263 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:52.007979 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .Close
	I0318 13:55:52.008318 1157263 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:52.008378 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | Closing plugin on server side
	I0318 13:55:52.008383 1157263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:52.008408 1157263 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:52.008427 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .Close
	I0318 13:55:52.008713 1157263 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:52.008827 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | Closing plugin on server side
	I0318 13:55:52.008853 1157263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:52.009491 1157263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.039858476s)
	I0318 13:55:52.009567 1157263 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:52.009595 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .Close
	I0318 13:55:52.010239 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | Closing plugin on server side
	I0318 13:55:52.010242 1157263 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:52.010276 1157263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:52.010289 1157263 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:52.010301 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .Close
	I0318 13:55:52.010553 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | Closing plugin on server side
	I0318 13:55:52.010568 1157263 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:52.010578 1157263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:52.026035 1157263 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:52.026056 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .Close
	I0318 13:55:52.026364 1157263 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:52.026385 1157263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:52.202596 1157263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.063427726s)
	I0318 13:55:52.202663 1157263 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:52.202686 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .Close
	I0318 13:55:52.202999 1157263 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:52.203021 1157263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:52.203032 1157263 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:52.203040 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .Close
	I0318 13:55:52.203321 1157263 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:52.203338 1157263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:52.203352 1157263 addons.go:470] Verifying addon metrics-server=true in "embed-certs-173036"
	I0318 13:55:52.205372 1157263 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0318 13:55:52.207184 1157263 addons.go:505] duration metric: took 2.649872416s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0318 13:55:52.391839 1157263 pod_ready.go:92] pod "coredns-5dd5756b68-p6dw8" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:52.391878 1157263 pod_ready.go:81] duration metric: took 514.235543ms for pod "coredns-5dd5756b68-p6dw8" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.391891 1157263 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.398044 1157263 pod_ready.go:92] pod "etcd-embed-certs-173036" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:52.398075 1157263 pod_ready.go:81] duration metric: took 6.176672ms for pod "etcd-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.398091 1157263 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.403790 1157263 pod_ready.go:92] pod "kube-apiserver-embed-certs-173036" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:52.403809 1157263 pod_ready.go:81] duration metric: took 5.70927ms for pod "kube-apiserver-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.403817 1157263 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.414956 1157263 pod_ready.go:92] pod "kube-controller-manager-embed-certs-173036" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:52.414976 1157263 pod_ready.go:81] duration metric: took 11.153442ms for pod "kube-controller-manager-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.414986 1157263 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lp9mc" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.674125 1157263 pod_ready.go:92] pod "kube-proxy-lp9mc" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:52.674151 1157263 pod_ready.go:81] duration metric: took 259.158776ms for pod "kube-proxy-lp9mc" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.674160 1157263 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:53.075385 1157263 pod_ready.go:92] pod "kube-scheduler-embed-certs-173036" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:53.075420 1157263 pod_ready.go:81] duration metric: took 401.251175ms for pod "kube-scheduler-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:53.075432 1157263 pod_ready.go:38] duration metric: took 3.214790175s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:55:53.075452 1157263 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:55:53.075523 1157263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:55:53.092916 1157263 api_server.go:72] duration metric: took 3.53560403s to wait for apiserver process to appear ...
	I0318 13:55:53.092948 1157263 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:55:53.093027 1157263 api_server.go:253] Checking apiserver healthz at https://192.168.50.191:8443/healthz ...
	I0318 13:55:53.098715 1157263 api_server.go:279] https://192.168.50.191:8443/healthz returned 200:
	ok
	I0318 13:55:53.100073 1157263 api_server.go:141] control plane version: v1.28.4
	I0318 13:55:53.100102 1157263 api_server.go:131] duration metric: took 7.134408ms to wait for apiserver health ...
	I0318 13:55:53.100113 1157263 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 13:55:53.278961 1157263 system_pods.go:59] 9 kube-system pods found
	I0318 13:55:53.278993 1157263 system_pods.go:61] "coredns-5dd5756b68-ft594" [46e6863a-0b5e-434e-b13c-d33e9ed15007] Running
	I0318 13:55:53.278998 1157263 system_pods.go:61] "coredns-5dd5756b68-p6dw8" [c03d9bbe-1493-44a4-be19-1e387ff6eaef] Running
	I0318 13:55:53.279002 1157263 system_pods.go:61] "etcd-embed-certs-173036" [0351a0a6-7bf0-49b7-b767-b1009ea8f8b3] Running
	I0318 13:55:53.279005 1157263 system_pods.go:61] "kube-apiserver-embed-certs-173036" [d045c63b-ff93-4ebc-a727-486fbad1d1b6] Running
	I0318 13:55:53.279010 1157263 system_pods.go:61] "kube-controller-manager-embed-certs-173036" [77925f6c-f839-44ce-8438-0b2ff22eb538] Running
	I0318 13:55:53.279013 1157263 system_pods.go:61] "kube-proxy-lp9mc" [4d2d1ef6-fb3b-4910-9e70-401dfa0c47e0] Running
	I0318 13:55:53.279017 1157263 system_pods.go:61] "kube-scheduler-embed-certs-173036" [a63fa49c-e09a-43ef-b0a2-f778c256c0ab] Running
	I0318 13:55:53.279023 1157263 system_pods.go:61] "metrics-server-57f55c9bc5-vzv79" [1fc71314-b3e7-4113-b254-557ec39eef43] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:55:53.279026 1157263 system_pods.go:61] "storage-provisioner" [a37883b5-9db5-467e-9b91-40f6ea69c18e] Running
	I0318 13:55:53.279037 1157263 system_pods.go:74] duration metric: took 178.915393ms to wait for pod list to return data ...
	I0318 13:55:53.279047 1157263 default_sa.go:34] waiting for default service account to be created ...
	I0318 13:55:53.475094 1157263 default_sa.go:45] found service account: "default"
	I0318 13:55:53.475123 1157263 default_sa.go:55] duration metric: took 196.069593ms for default service account to be created ...
	I0318 13:55:53.475133 1157263 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 13:55:53.678384 1157263 system_pods.go:86] 9 kube-system pods found
	I0318 13:55:53.678413 1157263 system_pods.go:89] "coredns-5dd5756b68-ft594" [46e6863a-0b5e-434e-b13c-d33e9ed15007] Running
	I0318 13:55:53.678418 1157263 system_pods.go:89] "coredns-5dd5756b68-p6dw8" [c03d9bbe-1493-44a4-be19-1e387ff6eaef] Running
	I0318 13:55:53.678422 1157263 system_pods.go:89] "etcd-embed-certs-173036" [0351a0a6-7bf0-49b7-b767-b1009ea8f8b3] Running
	I0318 13:55:53.678427 1157263 system_pods.go:89] "kube-apiserver-embed-certs-173036" [d045c63b-ff93-4ebc-a727-486fbad1d1b6] Running
	I0318 13:55:53.678431 1157263 system_pods.go:89] "kube-controller-manager-embed-certs-173036" [77925f6c-f839-44ce-8438-0b2ff22eb538] Running
	I0318 13:55:53.678436 1157263 system_pods.go:89] "kube-proxy-lp9mc" [4d2d1ef6-fb3b-4910-9e70-401dfa0c47e0] Running
	I0318 13:55:53.678439 1157263 system_pods.go:89] "kube-scheduler-embed-certs-173036" [a63fa49c-e09a-43ef-b0a2-f778c256c0ab] Running
	I0318 13:55:53.678447 1157263 system_pods.go:89] "metrics-server-57f55c9bc5-vzv79" [1fc71314-b3e7-4113-b254-557ec39eef43] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:55:53.678455 1157263 system_pods.go:89] "storage-provisioner" [a37883b5-9db5-467e-9b91-40f6ea69c18e] Running
	I0318 13:55:53.678464 1157263 system_pods.go:126] duration metric: took 203.32588ms to wait for k8s-apps to be running ...
	I0318 13:55:53.678473 1157263 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 13:55:53.678531 1157263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:55:53.698244 1157263 system_svc.go:56] duration metric: took 19.758793ms WaitForService to wait for kubelet
	I0318 13:55:53.698279 1157263 kubeadm.go:576] duration metric: took 4.140974066s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:55:53.698307 1157263 node_conditions.go:102] verifying NodePressure condition ...
	I0318 13:55:53.876137 1157263 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:55:53.876162 1157263 node_conditions.go:123] node cpu capacity is 2
	I0318 13:55:53.876173 1157263 node_conditions.go:105] duration metric: took 177.861272ms to run NodePressure ...
	I0318 13:55:53.876184 1157263 start.go:240] waiting for startup goroutines ...
	I0318 13:55:53.876191 1157263 start.go:245] waiting for cluster config update ...
	I0318 13:55:53.876202 1157263 start.go:254] writing updated cluster config ...
	I0318 13:55:53.876907 1157263 ssh_runner.go:195] Run: rm -f paused
	I0318 13:55:53.931596 1157263 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 13:55:53.933499 1157263 out.go:177] * Done! kubectl is now configured to use "embed-certs-173036" cluster and "default" namespace by default
	I0318 13:55:56.115397 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:55:56.115674 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:55:56.115714 1157708 kubeadm.go:309] 
	I0318 13:55:56.115782 1157708 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 13:55:56.115840 1157708 kubeadm.go:309] 		timed out waiting for the condition
	I0318 13:55:56.115849 1157708 kubeadm.go:309] 
	I0318 13:55:56.115908 1157708 kubeadm.go:309] 	This error is likely caused by:
	I0318 13:55:56.115979 1157708 kubeadm.go:309] 		- The kubelet is not running
	I0318 13:55:56.116102 1157708 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 13:55:56.116112 1157708 kubeadm.go:309] 
	I0318 13:55:56.116242 1157708 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 13:55:56.116289 1157708 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 13:55:56.116349 1157708 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 13:55:56.116370 1157708 kubeadm.go:309] 
	I0318 13:55:56.116506 1157708 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 13:55:56.116645 1157708 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 13:55:56.116665 1157708 kubeadm.go:309] 
	I0318 13:55:56.116804 1157708 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 13:55:56.116897 1157708 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 13:55:56.117005 1157708 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 13:55:56.117094 1157708 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 13:55:56.117110 1157708 kubeadm.go:309] 
	I0318 13:55:56.117680 1157708 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 13:55:56.117813 1157708 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 13:55:56.117934 1157708 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0318 13:55:56.118052 1157708 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0318 13:55:56.118124 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 13:55:57.920938 1157708 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.802776126s)
	I0318 13:55:57.921031 1157708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:55:57.939226 1157708 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:55:57.952304 1157708 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:55:57.952342 1157708 kubeadm.go:156] found existing configuration files:
	
	I0318 13:55:57.952404 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:55:57.964632 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:55:57.964695 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:55:57.977306 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:55:57.989728 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:55:57.989790 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:55:58.001661 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:55:58.013078 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:55:58.013160 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:55:58.024891 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:55:58.036171 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:55:58.036225 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:55:58.048156 1157708 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 13:55:58.128356 1157708 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 13:55:58.128445 1157708 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 13:55:58.297704 1157708 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 13:55:58.297897 1157708 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 13:55:58.298048 1157708 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 13:55:58.515521 1157708 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 13:55:58.517569 1157708 out.go:204]   - Generating certificates and keys ...
	I0318 13:55:58.517679 1157708 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 13:55:58.517760 1157708 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 13:55:58.517830 1157708 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 13:55:58.517908 1157708 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 13:55:58.517980 1157708 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 13:55:58.518047 1157708 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 13:55:58.518280 1157708 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 13:55:58.519078 1157708 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 13:55:58.520081 1157708 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 13:55:58.521268 1157708 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 13:55:58.521861 1157708 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 13:55:58.521936 1157708 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 13:55:58.762418 1157708 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 13:55:58.999746 1157708 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 13:55:59.214448 1157708 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 13:55:59.402662 1157708 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 13:55:59.421555 1157708 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 13:55:59.423151 1157708 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 13:55:59.423233 1157708 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 13:55:59.560412 1157708 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 13:55:59.563125 1157708 out.go:204]   - Booting up control plane ...
	I0318 13:55:59.563274 1157708 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 13:55:59.571364 1157708 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 13:55:59.572936 1157708 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 13:55:59.573987 1157708 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 13:55:59.586689 1157708 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 13:56:39.588627 1157708 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 13:56:39.588942 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:56:39.589128 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:56:44.589564 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:56:44.589852 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:56:54.590311 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:56:54.590619 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:57:14.591571 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:57:14.591866 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:57:54.594170 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:57:54.594433 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:57:54.594448 1157708 kubeadm.go:309] 
	I0318 13:57:54.594490 1157708 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 13:57:54.594540 1157708 kubeadm.go:309] 		timed out waiting for the condition
	I0318 13:57:54.594549 1157708 kubeadm.go:309] 
	I0318 13:57:54.594594 1157708 kubeadm.go:309] 	This error is likely caused by:
	I0318 13:57:54.594641 1157708 kubeadm.go:309] 		- The kubelet is not running
	I0318 13:57:54.594800 1157708 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 13:57:54.594811 1157708 kubeadm.go:309] 
	I0318 13:57:54.594950 1157708 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 13:57:54.595000 1157708 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 13:57:54.595046 1157708 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 13:57:54.595056 1157708 kubeadm.go:309] 
	I0318 13:57:54.595163 1157708 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 13:57:54.595297 1157708 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 13:57:54.595312 1157708 kubeadm.go:309] 
	I0318 13:57:54.595471 1157708 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 13:57:54.595605 1157708 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 13:57:54.595716 1157708 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 13:57:54.595812 1157708 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 13:57:54.595827 1157708 kubeadm.go:309] 
	I0318 13:57:54.596636 1157708 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 13:57:54.596805 1157708 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 13:57:54.596972 1157708 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0318 13:57:54.597014 1157708 kubeadm.go:393] duration metric: took 8m1.551231902s to StartCluster
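	The kubeadm output above points at the kubelet as the first thing to check. A minimal triage sequence on the node, assuming SSH access to the minikube guest and the CRI-O socket path shown in the log, might look like the following (illustrative only, not commands taken from this run):
	# check whether the kubelet service is running and why it may have exited
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet -n 200
	# probe the kubelet healthz endpoint that kubeadm polls on port 10248
	curl -sSL http://localhost:10248/healthz
	# list any control-plane containers CRI-O managed to start, then inspect one
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID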
	I0318 13:57:54.597076 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:57:54.597174 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:57:54.649451 1157708 cri.go:89] found id: ""
	I0318 13:57:54.649484 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.649496 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:57:54.649506 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:57:54.649577 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:57:54.692278 1157708 cri.go:89] found id: ""
	I0318 13:57:54.692317 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.692339 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:57:54.692349 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:57:54.692427 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:57:54.731034 1157708 cri.go:89] found id: ""
	I0318 13:57:54.731062 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.731071 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:57:54.731077 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:57:54.731135 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:57:54.769883 1157708 cri.go:89] found id: ""
	I0318 13:57:54.769913 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.769923 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:57:54.769931 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:57:54.769996 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:57:54.808620 1157708 cri.go:89] found id: ""
	I0318 13:57:54.808648 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.808656 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:57:54.808661 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:57:54.808715 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:57:54.849207 1157708 cri.go:89] found id: ""
	I0318 13:57:54.849245 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.849256 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:57:54.849264 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:57:54.849334 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:57:54.918479 1157708 cri.go:89] found id: ""
	I0318 13:57:54.918508 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.918520 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:57:54.918528 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:57:54.918597 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:57:54.958828 1157708 cri.go:89] found id: ""
	I0318 13:57:54.958861 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.958871 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:57:54.958887 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:57:54.958906 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:57:55.078045 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:57:55.078092 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:57:55.123043 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:57:55.123077 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:57:55.180480 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:57:55.180518 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:57:55.197264 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:57:55.197316 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:57:55.291264 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0318 13:57:55.291325 1157708 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0318 13:57:55.291395 1157708 out.go:239] * 
	W0318 13:57:55.291477 1157708 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 13:57:55.291502 1157708 out.go:239] * 
	W0318 13:57:55.292511 1157708 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:57:55.295566 1157708 out.go:177] 
	W0318 13:57:55.296840 1157708 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 13:57:55.296903 1157708 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0318 13:57:55.296941 1157708 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0318 13:57:55.298417 1157708 out.go:177] 
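	If the cgroup-driver mismatch suggested above is the cause, one way to apply that suggestion is to restart the affected profile with the kubelet flag passed through minikube. This is a sketch only; the profile name is a placeholder and the driver/runtime flags are assumptions based on this job's configuration, not taken from the log:
	minikube start -p PROFILE --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd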
	
	
	==> CRI-O <==
	Mar 18 14:04:03 no-preload-537236 crio[701]: time="2024-03-18 14:04:03.454714625Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=247e150a-929a-4575-bfdb-c249af555275 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:04:03 no-preload-537236 crio[701]: time="2024-03-18 14:04:03.456712778Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ec833f9e-c254-4804-b629-c03788c37b1e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:04:03 no-preload-537236 crio[701]: time="2024-03-18 14:04:03.457164778Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710770643457138735,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ec833f9e-c254-4804-b629-c03788c37b1e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:04:03 no-preload-537236 crio[701]: time="2024-03-18 14:04:03.457921065Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6e2fa8c8-0235-4660-9bcb-23b7a734f717 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:04:03 no-preload-537236 crio[701]: time="2024-03-18 14:04:03.457998927Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6e2fa8c8-0235-4660-9bcb-23b7a734f717 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:04:03 no-preload-537236 crio[701]: time="2024-03-18 14:04:03.458195074Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a5eff0b76358ea55983537e218c15405bb9546598fa0378d56c1acb15c091de1,PodSandboxId:746ec33a96d0e612980c6b0b0d6c2df8b6e08e74d0a81969779164cd02a197fb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710770100524048224,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f02049f6-a08f-45ac-b285-cbdbb260ab59,},Annotations:map[string]string{io.kubernetes.container.hash: 4f6bc6d5,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad11334bb2cb49a5b236a15b70d540d4a984767f4fd3ce605df4892fea682805,PodSandboxId:9f64912cb81a81ba68bd1f0840ce94e2543f7d9a7c62637f43bf344aae880e8c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710770099602070811,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-grqdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ce5620-c97b-4ecd-baba-c5fc840b8127,},Annotations:map[string]string{io.kubernetes.container.hash: 7ddacc4d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:529ec1988da2e52b372b7a67970abd9c5eaebb01e10450040a5b25b82a337fc9,PodSandboxId:94429a766e396734e972bc6c3bf2803e6db513cf810a6812ee87be60fe28a4a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710770099435758608,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-bhh4k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d
6f9b9a-2f7e-46bc-9224-57dc077e444d,},Annotations:map[string]string{io.kubernetes.container.hash: 5c214c4b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dcf9e1b53ce425f8b6695e6e1c5691bae3875e174faf197d4825fd5b78e2f87,PodSandboxId:2f6154cca24c71b1163177715948c27ba0ac043b9f210321ede1fd2704422399,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:
1710770099280398184,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6c4c5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dd6fcfc-7510-418d-baab-a0ec364391c1,},Annotations:map[string]string{io.kubernetes.container.hash: 8e11475c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f62109d6bfecf0c909738de06ab89c1fb9737a2736f389ecd3c3d8718ef53df0,PodSandboxId:3f0397f06b979fe2fe0a0e28151f59b286486fdcfa25722148590b84ca493234,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710770079753027895,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-537236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1a025326bb6180b2a8f6c316293e5ad,},Annotations:map[string]string{io.kubernetes.container.hash: 86d13242,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a9a0041888c410f1f430fe9a078ee274b6be04c226035bd824ee2ab3f6dbc4a,PodSandboxId:bfeb40b64e804c9d6669b610714b5178e4c190f9875d9348c4b16c727247dc12,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710770079723179522,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-537236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8632dce66779f857721d3ec20f67a3e4,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ef410e9166d54805c6253404b57accb39b524473e6d16d7c0a84754e5cb7fa1,PodSandboxId:9892446bae63666a1037572820b7a57d008e4435cb26ce0777608b6ed81df88e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710770079752133032,Labels:map[string]string{io.kubernetes
.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-537236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c376014bcfa6838e65b773f219f3fb58,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a992a5bf3001600ca4f876d0e19f510d8fb11c6a79d095665f3c25866f3c882a,PodSandboxId:04c4dcff6c197e27be359861a79cb96f6f471c32cdaae1c93c7ddd5969fda6c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710770079609134834,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-537236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f3699be8bde2669bbf0e03e1ab70872,},Annotations:map[string]string{io.kubernetes.container.hash: 18df3f70,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6e2fa8c8-0235-4660-9bcb-23b7a734f717 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:04:03 no-preload-537236 crio[701]: time="2024-03-18 14:04:03.504254978Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=1ffc943e-5880-4686-8b28-3ad3af7a4632 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 18 14:04:03 no-preload-537236 crio[701]: time="2024-03-18 14:04:03.504618702Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:e9e2602cbec20cd8d04e2371717b4f0a6c36f3522e976d7ecd567554c20211f6,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-tkq6h,Uid:14e262de-fd94-4888-96ab-75823109c8c2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710770100419799048,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-tkq6h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 14e262de-fd94-4888-96ab-75823109c8c2,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-18T13:55:00.105164568Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:746ec33a96d0e612980c6b0b0d6c2df8b6e08e74d0a81969779164cd02a197fb,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:f02049f6-a08f-45ac-b285-cbdbb260ab59,Na
mespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710770100283649377,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f02049f6-a08f-45ac-b285-cbdbb260ab59,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volu
mes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-03-18T13:54:59.975633885Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9f64912cb81a81ba68bd1f0840ce94e2543f7d9a7c62637f43bf344aae880e8c,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-grqdt,Uid:f4ce5620-c97b-4ecd-baba-c5fc840b8127,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710770098978122377,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-grqdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ce5620-c97b-4ecd-baba-c5fc840b8127,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-18T13:54:58.649287697Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:94429a766e396734e972bc6c3bf2803e6db513cf810a6812ee87be60fe28a4a4,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-bhh4k,Uid:6d6f9b9a-2f7e-46bc-
9224-57dc077e444d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710770098925330630,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-bhh4k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d6f9b9a-2f7e-46bc-9224-57dc077e444d,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-18T13:54:58.617957134Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2f6154cca24c71b1163177715948c27ba0ac043b9f210321ede1fd2704422399,Metadata:&PodSandboxMetadata{Name:kube-proxy-6c4c5,Uid:2dd6fcfc-7510-418d-baab-a0ec364391c1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710770098846312221,Labels:map[string]string{controller-revision-hash: 79c5f556d9,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-6c4c5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dd6fcfc-7510-418d-baab-a0ec364391c1,k8s-app: kube-proxy,pod-temp
late-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-18T13:54:58.525786862Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3f0397f06b979fe2fe0a0e28151f59b286486fdcfa25722148590b84ca493234,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-537236,Uid:b1a025326bb6180b2a8f6c316293e5ad,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710770079481515630,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-537236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1a025326bb6180b2a8f6c316293e5ad,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.7:2379,kubernetes.io/config.hash: b1a025326bb6180b2a8f6c316293e5ad,kubernetes.io/config.seen: 2024-03-18T13:54:39.016053522Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bfeb40b64e804c9d6669b610714b5178e4c190f9875d9348c4b16c727247dc12,Metad
ata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-537236,Uid:8632dce66779f857721d3ec20f67a3e4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710770079478591681,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-537236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8632dce66779f857721d3ec20f67a3e4,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8632dce66779f857721d3ec20f67a3e4,kubernetes.io/config.seen: 2024-03-18T13:54:39.016051591Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9892446bae63666a1037572820b7a57d008e4435cb26ce0777608b6ed81df88e,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-537236,Uid:c376014bcfa6838e65b773f219f3fb58,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710770079473728598,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: PO
D,io.kubernetes.pod.name: kube-scheduler-no-preload-537236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c376014bcfa6838e65b773f219f3fb58,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: c376014bcfa6838e65b773f219f3fb58,kubernetes.io/config.seen: 2024-03-18T13:54:39.016052560Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:04c4dcff6c197e27be359861a79cb96f6f471c32cdaae1c93c7ddd5969fda6c9,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-537236,Uid:3f3699be8bde2669bbf0e03e1ab70872,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710770079443577578,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-537236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f3699be8bde2669bbf0e03e1ab70872,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.7:8443,ku
bernetes.io/config.hash: 3f3699be8bde2669bbf0e03e1ab70872,kubernetes.io/config.seen: 2024-03-18T13:54:39.016048155Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=1ffc943e-5880-4686-8b28-3ad3af7a4632 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 18 14:04:03 no-preload-537236 crio[701]: time="2024-03-18 14:04:03.505716410Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=208c3d26-ff8a-4779-8848-1cc38b453de2 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:04:03 no-preload-537236 crio[701]: time="2024-03-18 14:04:03.505783889Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=208c3d26-ff8a-4779-8848-1cc38b453de2 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:04:03 no-preload-537236 crio[701]: time="2024-03-18 14:04:03.506619925Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a5eff0b76358ea55983537e218c15405bb9546598fa0378d56c1acb15c091de1,PodSandboxId:746ec33a96d0e612980c6b0b0d6c2df8b6e08e74d0a81969779164cd02a197fb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710770100524048224,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f02049f6-a08f-45ac-b285-cbdbb260ab59,},Annotations:map[string]string{io.kubernetes.container.hash: 4f6bc6d5,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad11334bb2cb49a5b236a15b70d540d4a984767f4fd3ce605df4892fea682805,PodSandboxId:9f64912cb81a81ba68bd1f0840ce94e2543f7d9a7c62637f43bf344aae880e8c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710770099602070811,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-grqdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ce5620-c97b-4ecd-baba-c5fc840b8127,},Annotations:map[string]string{io.kubernetes.container.hash: 7ddacc4d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:529ec1988da2e52b372b7a67970abd9c5eaebb01e10450040a5b25b82a337fc9,PodSandboxId:94429a766e396734e972bc6c3bf2803e6db513cf810a6812ee87be60fe28a4a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710770099435758608,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-bhh4k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d
6f9b9a-2f7e-46bc-9224-57dc077e444d,},Annotations:map[string]string{io.kubernetes.container.hash: 5c214c4b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dcf9e1b53ce425f8b6695e6e1c5691bae3875e174faf197d4825fd5b78e2f87,PodSandboxId:2f6154cca24c71b1163177715948c27ba0ac043b9f210321ede1fd2704422399,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:
1710770099280398184,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6c4c5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dd6fcfc-7510-418d-baab-a0ec364391c1,},Annotations:map[string]string{io.kubernetes.container.hash: 8e11475c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f62109d6bfecf0c909738de06ab89c1fb9737a2736f389ecd3c3d8718ef53df0,PodSandboxId:3f0397f06b979fe2fe0a0e28151f59b286486fdcfa25722148590b84ca493234,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710770079753027895,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-537236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1a025326bb6180b2a8f6c316293e5ad,},Annotations:map[string]string{io.kubernetes.container.hash: 86d13242,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a9a0041888c410f1f430fe9a078ee274b6be04c226035bd824ee2ab3f6dbc4a,PodSandboxId:bfeb40b64e804c9d6669b610714b5178e4c190f9875d9348c4b16c727247dc12,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710770079723179522,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-537236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8632dce66779f857721d3ec20f67a3e4,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ef410e9166d54805c6253404b57accb39b524473e6d16d7c0a84754e5cb7fa1,PodSandboxId:9892446bae63666a1037572820b7a57d008e4435cb26ce0777608b6ed81df88e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710770079752133032,Labels:map[string]string{io.kubernetes
.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-537236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c376014bcfa6838e65b773f219f3fb58,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a992a5bf3001600ca4f876d0e19f510d8fb11c6a79d095665f3c25866f3c882a,PodSandboxId:04c4dcff6c197e27be359861a79cb96f6f471c32cdaae1c93c7ddd5969fda6c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710770079609134834,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-537236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f3699be8bde2669bbf0e03e1ab70872,},Annotations:map[string]string{io.kubernetes.container.hash: 18df3f70,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=208c3d26-ff8a-4779-8848-1cc38b453de2 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:04:03 no-preload-537236 crio[701]: time="2024-03-18 14:04:03.510348560Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=517caee5-43ba-4edd-a321-a0c4550daa44 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:04:03 no-preload-537236 crio[701]: time="2024-03-18 14:04:03.510437981Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=517caee5-43ba-4edd-a321-a0c4550daa44 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:04:03 no-preload-537236 crio[701]: time="2024-03-18 14:04:03.511656690Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5334a876-c805-4e26-a730-898f7de6b90e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:04:03 no-preload-537236 crio[701]: time="2024-03-18 14:04:03.512581463Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710770643512546389,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5334a876-c805-4e26-a730-898f7de6b90e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:04:03 no-preload-537236 crio[701]: time="2024-03-18 14:04:03.513195349Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d3e04606-b25f-4068-8a5f-3acc3a671137 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:04:03 no-preload-537236 crio[701]: time="2024-03-18 14:04:03.513273618Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d3e04606-b25f-4068-8a5f-3acc3a671137 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:04:03 no-preload-537236 crio[701]: time="2024-03-18 14:04:03.513475164Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a5eff0b76358ea55983537e218c15405bb9546598fa0378d56c1acb15c091de1,PodSandboxId:746ec33a96d0e612980c6b0b0d6c2df8b6e08e74d0a81969779164cd02a197fb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710770100524048224,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f02049f6-a08f-45ac-b285-cbdbb260ab59,},Annotations:map[string]string{io.kubernetes.container.hash: 4f6bc6d5,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad11334bb2cb49a5b236a15b70d540d4a984767f4fd3ce605df4892fea682805,PodSandboxId:9f64912cb81a81ba68bd1f0840ce94e2543f7d9a7c62637f43bf344aae880e8c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710770099602070811,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-grqdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ce5620-c97b-4ecd-baba-c5fc840b8127,},Annotations:map[string]string{io.kubernetes.container.hash: 7ddacc4d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:529ec1988da2e52b372b7a67970abd9c5eaebb01e10450040a5b25b82a337fc9,PodSandboxId:94429a766e396734e972bc6c3bf2803e6db513cf810a6812ee87be60fe28a4a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710770099435758608,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-bhh4k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d
6f9b9a-2f7e-46bc-9224-57dc077e444d,},Annotations:map[string]string{io.kubernetes.container.hash: 5c214c4b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dcf9e1b53ce425f8b6695e6e1c5691bae3875e174faf197d4825fd5b78e2f87,PodSandboxId:2f6154cca24c71b1163177715948c27ba0ac043b9f210321ede1fd2704422399,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:
1710770099280398184,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6c4c5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dd6fcfc-7510-418d-baab-a0ec364391c1,},Annotations:map[string]string{io.kubernetes.container.hash: 8e11475c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f62109d6bfecf0c909738de06ab89c1fb9737a2736f389ecd3c3d8718ef53df0,PodSandboxId:3f0397f06b979fe2fe0a0e28151f59b286486fdcfa25722148590b84ca493234,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710770079753027895,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-537236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1a025326bb6180b2a8f6c316293e5ad,},Annotations:map[string]string{io.kubernetes.container.hash: 86d13242,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a9a0041888c410f1f430fe9a078ee274b6be04c226035bd824ee2ab3f6dbc4a,PodSandboxId:bfeb40b64e804c9d6669b610714b5178e4c190f9875d9348c4b16c727247dc12,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710770079723179522,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-537236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8632dce66779f857721d3ec20f67a3e4,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ef410e9166d54805c6253404b57accb39b524473e6d16d7c0a84754e5cb7fa1,PodSandboxId:9892446bae63666a1037572820b7a57d008e4435cb26ce0777608b6ed81df88e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710770079752133032,Labels:map[string]string{io.kubernetes
.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-537236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c376014bcfa6838e65b773f219f3fb58,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a992a5bf3001600ca4f876d0e19f510d8fb11c6a79d095665f3c25866f3c882a,PodSandboxId:04c4dcff6c197e27be359861a79cb96f6f471c32cdaae1c93c7ddd5969fda6c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710770079609134834,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-537236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f3699be8bde2669bbf0e03e1ab70872,},Annotations:map[string]string{io.kubernetes.container.hash: 18df3f70,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d3e04606-b25f-4068-8a5f-3acc3a671137 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:04:03 no-preload-537236 crio[701]: time="2024-03-18 14:04:03.550038598Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2ea56ff5-8206-46df-a006-5aa880c6c3d8 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:04:03 no-preload-537236 crio[701]: time="2024-03-18 14:04:03.550141777Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2ea56ff5-8206-46df-a006-5aa880c6c3d8 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:04:03 no-preload-537236 crio[701]: time="2024-03-18 14:04:03.551073492Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9def8511-f876-4327-a3a3-f903495da95e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:04:03 no-preload-537236 crio[701]: time="2024-03-18 14:04:03.551597660Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710770643551575158,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9def8511-f876-4327-a3a3-f903495da95e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:04:03 no-preload-537236 crio[701]: time="2024-03-18 14:04:03.552242395Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=820f4a62-8309-4018-82d4-f11d466aa704 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:04:03 no-preload-537236 crio[701]: time="2024-03-18 14:04:03.552320786Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=820f4a62-8309-4018-82d4-f11d466aa704 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:04:03 no-preload-537236 crio[701]: time="2024-03-18 14:04:03.552648030Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a5eff0b76358ea55983537e218c15405bb9546598fa0378d56c1acb15c091de1,PodSandboxId:746ec33a96d0e612980c6b0b0d6c2df8b6e08e74d0a81969779164cd02a197fb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710770100524048224,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f02049f6-a08f-45ac-b285-cbdbb260ab59,},Annotations:map[string]string{io.kubernetes.container.hash: 4f6bc6d5,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad11334bb2cb49a5b236a15b70d540d4a984767f4fd3ce605df4892fea682805,PodSandboxId:9f64912cb81a81ba68bd1f0840ce94e2543f7d9a7c62637f43bf344aae880e8c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710770099602070811,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-grqdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ce5620-c97b-4ecd-baba-c5fc840b8127,},Annotations:map[string]string{io.kubernetes.container.hash: 7ddacc4d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:529ec1988da2e52b372b7a67970abd9c5eaebb01e10450040a5b25b82a337fc9,PodSandboxId:94429a766e396734e972bc6c3bf2803e6db513cf810a6812ee87be60fe28a4a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710770099435758608,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-bhh4k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d
6f9b9a-2f7e-46bc-9224-57dc077e444d,},Annotations:map[string]string{io.kubernetes.container.hash: 5c214c4b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dcf9e1b53ce425f8b6695e6e1c5691bae3875e174faf197d4825fd5b78e2f87,PodSandboxId:2f6154cca24c71b1163177715948c27ba0ac043b9f210321ede1fd2704422399,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:
1710770099280398184,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6c4c5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dd6fcfc-7510-418d-baab-a0ec364391c1,},Annotations:map[string]string{io.kubernetes.container.hash: 8e11475c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f62109d6bfecf0c909738de06ab89c1fb9737a2736f389ecd3c3d8718ef53df0,PodSandboxId:3f0397f06b979fe2fe0a0e28151f59b286486fdcfa25722148590b84ca493234,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710770079753027895,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-537236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1a025326bb6180b2a8f6c316293e5ad,},Annotations:map[string]string{io.kubernetes.container.hash: 86d13242,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a9a0041888c410f1f430fe9a078ee274b6be04c226035bd824ee2ab3f6dbc4a,PodSandboxId:bfeb40b64e804c9d6669b610714b5178e4c190f9875d9348c4b16c727247dc12,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710770079723179522,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-537236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8632dce66779f857721d3ec20f67a3e4,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ef410e9166d54805c6253404b57accb39b524473e6d16d7c0a84754e5cb7fa1,PodSandboxId:9892446bae63666a1037572820b7a57d008e4435cb26ce0777608b6ed81df88e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710770079752133032,Labels:map[string]string{io.kubernetes
.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-537236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c376014bcfa6838e65b773f219f3fb58,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a992a5bf3001600ca4f876d0e19f510d8fb11c6a79d095665f3c25866f3c882a,PodSandboxId:04c4dcff6c197e27be359861a79cb96f6f471c32cdaae1c93c7ddd5969fda6c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710770079609134834,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-537236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f3699be8bde2669bbf0e03e1ab70872,},Annotations:map[string]string{io.kubernetes.container.hash: 18df3f70,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=820f4a62-8309-4018-82d4-f11d466aa704 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a5eff0b76358e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   746ec33a96d0e       storage-provisioner
	ad11334bb2cb4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   9f64912cb81a8       coredns-76f75df574-grqdt
	529ec1988da2e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   94429a766e396       coredns-76f75df574-bhh4k
	8dcf9e1b53ce4       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834   9 minutes ago       Running             kube-proxy                0                   2f6154cca24c7       kube-proxy-6c4c5
	f62109d6bfecf       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   9 minutes ago       Running             etcd                      2                   3f0397f06b979       etcd-no-preload-537236
	9ef410e9166d5       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210   9 minutes ago       Running             kube-scheduler            2                   9892446bae636       kube-scheduler-no-preload-537236
	3a9a0041888c4       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d   9 minutes ago       Running             kube-controller-manager   2                   bfeb40b64e804       kube-controller-manager-no-preload-537236
	a992a5bf30016       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f   9 minutes ago       Running             kube-apiserver            2                   04c4dcff6c197       kube-apiserver-no-preload-537236
	
	
	==> coredns [529ec1988da2e52b372b7a67970abd9c5eaebb01e10450040a5b25b82a337fc9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [ad11334bb2cb49a5b236a15b70d540d4a984767f4fd3ce605df4892fea682805] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-537236
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-537236
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	                    minikube.k8s.io/name=no-preload-537236
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T13_54_46_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 13:54:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-537236
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 14:03:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 14:00:12 +0000   Mon, 18 Mar 2024 13:54:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 14:00:12 +0000   Mon, 18 Mar 2024 13:54:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 14:00:12 +0000   Mon, 18 Mar 2024 13:54:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 14:00:12 +0000   Mon, 18 Mar 2024 13:54:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.7
	  Hostname:    no-preload-537236
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 f4498343c5af4a83bb2a71cf0a0e9028
	  System UUID:                f4498343-c5af-4a83-bb2a-71cf0a0e9028
	  Boot ID:                    8e4f04ef-176c-4622-aadb-07fd4c5f4b88
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-bhh4k                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m5s
	  kube-system                 coredns-76f75df574-grqdt                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m5s
	  kube-system                 etcd-no-preload-537236                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m17s
	  kube-system                 kube-apiserver-no-preload-537236             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-controller-manager-no-preload-537236    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-proxy-6c4c5                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	  kube-system                 kube-scheduler-no-preload-537236             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 metrics-server-57f55c9bc5-tkq6h              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m3s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m3s   kube-proxy       
	  Normal  Starting                 9m17s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m17s  kubelet          Node no-preload-537236 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m17s  kubelet          Node no-preload-537236 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m17s  kubelet          Node no-preload-537236 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m17s  kubelet          Node no-preload-537236 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m17s  kubelet          Node no-preload-537236 status is now: NodeReady
	  Normal  RegisteredNode           9m6s   node-controller  Node no-preload-537236 event: Registered Node no-preload-537236 in Controller
	
	
	==> dmesg <==
	[  +0.044647] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.560689] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.501153] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.697883] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000000] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.165969] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.059729] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070451] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.229963] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.169335] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.284378] systemd-fstab-generator[685]: Ignoring "noauto" option for root device
	[ +17.046013] systemd-fstab-generator[1192]: Ignoring "noauto" option for root device
	[  +0.061314] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.078048] systemd-fstab-generator[1315]: Ignoring "noauto" option for root device
	[  +5.674403] kauditd_printk_skb: 100 callbacks suppressed
	[  +7.272688] kauditd_printk_skb: 44 callbacks suppressed
	[Mar18 13:50] kauditd_printk_skb: 20 callbacks suppressed
	[Mar18 13:54] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.597632] systemd-fstab-generator[3859]: Ignoring "noauto" option for root device
	[  +4.595978] kauditd_printk_skb: 55 callbacks suppressed
	[  +2.712006] systemd-fstab-generator[4180]: Ignoring "noauto" option for root device
	[ +12.534844] systemd-fstab-generator[4368]: Ignoring "noauto" option for root device
	[  +0.098053] kauditd_printk_skb: 14 callbacks suppressed
	[Mar18 13:56] kauditd_printk_skb: 78 callbacks suppressed
	
	
	==> etcd [f62109d6bfecf0c909738de06ab89c1fb9737a2736f389ecd3c3d8718ef53df0] <==
	{"level":"info","ts":"2024-03-18T13:54:40.247138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bb39151d8411994b switched to configuration voters=(13490837375279012171)"}
	{"level":"info","ts":"2024-03-18T13:54:40.24728Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3202df3d6e5aadcb","local-member-id":"bb39151d8411994b","added-peer-id":"bb39151d8411994b","added-peer-peer-urls":["https://192.168.39.7:2380"]}
	{"level":"info","ts":"2024-03-18T13:54:40.260562Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-18T13:54:40.264379Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"bb39151d8411994b","initial-advertise-peer-urls":["https://192.168.39.7:2380"],"listen-peer-urls":["https://192.168.39.7:2380"],"advertise-client-urls":["https://192.168.39.7:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.7:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-18T13:54:40.264516Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-18T13:54:40.264779Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.7:2380"}
	{"level":"info","ts":"2024-03-18T13:54:40.270978Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.7:2380"}
	{"level":"info","ts":"2024-03-18T13:54:40.372909Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bb39151d8411994b is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-18T13:54:40.372997Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bb39151d8411994b became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-18T13:54:40.373042Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bb39151d8411994b received MsgPreVoteResp from bb39151d8411994b at term 1"}
	{"level":"info","ts":"2024-03-18T13:54:40.373071Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bb39151d8411994b became candidate at term 2"}
	{"level":"info","ts":"2024-03-18T13:54:40.373094Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bb39151d8411994b received MsgVoteResp from bb39151d8411994b at term 2"}
	{"level":"info","ts":"2024-03-18T13:54:40.373125Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bb39151d8411994b became leader at term 2"}
	{"level":"info","ts":"2024-03-18T13:54:40.37315Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: bb39151d8411994b elected leader bb39151d8411994b at term 2"}
	{"level":"info","ts":"2024-03-18T13:54:40.377139Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"bb39151d8411994b","local-member-attributes":"{Name:no-preload-537236 ClientURLs:[https://192.168.39.7:2379]}","request-path":"/0/members/bb39151d8411994b/attributes","cluster-id":"3202df3d6e5aadcb","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-18T13:54:40.377358Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T13:54:40.382282Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T13:54:40.38273Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T13:54:40.385143Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-18T13:54:40.399146Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-18T13:54:40.38751Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.7:2379"}
	{"level":"info","ts":"2024-03-18T13:54:40.400579Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-18T13:54:40.400748Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3202df3d6e5aadcb","local-member-id":"bb39151d8411994b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T13:54:40.408901Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T13:54:40.408984Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 14:04:03 up 14 min,  0 users,  load average: 0.04, 0.20, 0.20
	Linux no-preload-537236 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a992a5bf3001600ca4f876d0e19f510d8fb11c6a79d095665f3c25866f3c882a] <==
	I0318 13:58:00.933184       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 13:59:42.405561       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 13:59:42.406092       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0318 13:59:43.407055       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 13:59:43.407260       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 13:59:43.407320       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 13:59:43.407367       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 13:59:43.407489       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0318 13:59:43.408516       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 14:00:43.407994       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:00:43.408152       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 14:00:43.408164       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 14:00:43.409466       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:00:43.409541       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0318 14:00:43.409554       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 14:02:43.409305       1 handler_proxy.go:93] no RequestInfo found in the context
	W0318 14:02:43.409700       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:02:43.409783       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0318 14:02:43.409908       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0318 14:02:43.409798       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 14:02:43.411126       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [3a9a0041888c410f1f430fe9a078ee274b6be04c226035bd824ee2ab3f6dbc4a] <==
	I0318 13:58:28.086278       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 13:58:57.619730       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 13:58:58.097082       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 13:59:27.625475       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 13:59:28.107085       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 13:59:57.635289       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 13:59:58.116817       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:00:27.641202       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:00:28.126098       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:00:57.647614       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:00:58.137311       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0318 14:00:59.183605       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="309.428µs"
	I0318 14:01:13.183140       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="1.305905ms"
	E0318 14:01:27.652742       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:01:28.146051       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:01:57.660561       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:01:58.155304       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:02:27.666905       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:02:28.166571       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:02:57.673535       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:02:58.177102       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:03:27.679911       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:03:28.186245       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:03:57.688068       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:03:58.197272       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [8dcf9e1b53ce425f8b6695e6e1c5691bae3875e174faf197d4825fd5b78e2f87] <==
	I0318 13:54:59.911432       1 server_others.go:72] "Using iptables proxy"
	I0318 13:54:59.985929       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.7"]
	I0318 13:55:00.345241       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0318 13:55:00.347702       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 13:55:00.347951       1 server_others.go:168] "Using iptables Proxier"
	I0318 13:55:00.367657       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 13:55:00.368034       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0318 13:55:00.368079       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:55:00.371459       1 config.go:188] "Starting service config controller"
	I0318 13:55:00.371513       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 13:55:00.371534       1 config.go:97] "Starting endpoint slice config controller"
	I0318 13:55:00.371544       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 13:55:00.378108       1 config.go:315] "Starting node config controller"
	I0318 13:55:00.378154       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 13:55:00.473615       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 13:55:00.473687       1 shared_informer.go:318] Caches are synced for service config
	I0318 13:55:00.480012       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [9ef410e9166d54805c6253404b57accb39b524473e6d16d7c0a84754e5cb7fa1] <==
	W0318 13:54:42.454216       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0318 13:54:42.454225       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0318 13:54:42.456083       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0318 13:54:42.456140       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0318 13:54:43.440813       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0318 13:54:43.440923       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0318 13:54:43.505809       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0318 13:54:43.505935       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0318 13:54:43.528622       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0318 13:54:43.528680       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0318 13:54:43.593201       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0318 13:54:43.593496       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0318 13:54:43.636431       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0318 13:54:43.636523       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0318 13:54:43.647718       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0318 13:54:43.647744       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0318 13:54:43.663319       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0318 13:54:43.663449       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0318 13:54:43.669770       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0318 13:54:43.669819       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0318 13:54:43.711218       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0318 13:54:43.711275       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0318 13:54:43.848258       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0318 13:54:43.848380       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 13:54:46.535186       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 18 14:01:46 no-preload-537236 kubelet[4187]: E0318 14:01:46.291780    4187 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 14:01:46 no-preload-537236 kubelet[4187]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 14:01:46 no-preload-537236 kubelet[4187]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 14:01:46 no-preload-537236 kubelet[4187]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 14:01:46 no-preload-537236 kubelet[4187]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 14:01:56 no-preload-537236 kubelet[4187]: E0318 14:01:56.167319    4187 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tkq6h" podUID="14e262de-fd94-4888-96ab-75823109c8c2"
	Mar 18 14:02:08 no-preload-537236 kubelet[4187]: E0318 14:02:08.168482    4187 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tkq6h" podUID="14e262de-fd94-4888-96ab-75823109c8c2"
	Mar 18 14:02:20 no-preload-537236 kubelet[4187]: E0318 14:02:20.167314    4187 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tkq6h" podUID="14e262de-fd94-4888-96ab-75823109c8c2"
	Mar 18 14:02:35 no-preload-537236 kubelet[4187]: E0318 14:02:35.166694    4187 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tkq6h" podUID="14e262de-fd94-4888-96ab-75823109c8c2"
	Mar 18 14:02:46 no-preload-537236 kubelet[4187]: E0318 14:02:46.168156    4187 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tkq6h" podUID="14e262de-fd94-4888-96ab-75823109c8c2"
	Mar 18 14:02:46 no-preload-537236 kubelet[4187]: E0318 14:02:46.289945    4187 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 14:02:46 no-preload-537236 kubelet[4187]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 14:02:46 no-preload-537236 kubelet[4187]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 14:02:46 no-preload-537236 kubelet[4187]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 14:02:46 no-preload-537236 kubelet[4187]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 14:03:01 no-preload-537236 kubelet[4187]: E0318 14:03:01.167524    4187 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tkq6h" podUID="14e262de-fd94-4888-96ab-75823109c8c2"
	Mar 18 14:03:16 no-preload-537236 kubelet[4187]: E0318 14:03:16.170892    4187 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tkq6h" podUID="14e262de-fd94-4888-96ab-75823109c8c2"
	Mar 18 14:03:27 no-preload-537236 kubelet[4187]: E0318 14:03:27.167818    4187 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tkq6h" podUID="14e262de-fd94-4888-96ab-75823109c8c2"
	Mar 18 14:03:41 no-preload-537236 kubelet[4187]: E0318 14:03:41.165902    4187 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tkq6h" podUID="14e262de-fd94-4888-96ab-75823109c8c2"
	Mar 18 14:03:46 no-preload-537236 kubelet[4187]: E0318 14:03:46.290404    4187 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 14:03:46 no-preload-537236 kubelet[4187]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 14:03:46 no-preload-537236 kubelet[4187]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 14:03:46 no-preload-537236 kubelet[4187]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 14:03:46 no-preload-537236 kubelet[4187]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 14:03:52 no-preload-537236 kubelet[4187]: E0318 14:03:52.166961    4187 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tkq6h" podUID="14e262de-fd94-4888-96ab-75823109c8c2"
	
	
	==> storage-provisioner [a5eff0b76358ea55983537e218c15405bb9546598fa0378d56c1acb15c091de1] <==
	I0318 13:55:00.713713       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0318 13:55:00.749170       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0318 13:55:00.749334       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0318 13:55:00.763610       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0318 13:55:00.763811       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-537236_e4a483f6-7f49-4c60-9197-dc053405ab92!
	I0318 13:55:00.768088       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3e625363-73f0-495a-944d-aa5501d6c9cc", APIVersion:"v1", ResourceVersion:"416", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-537236_e4a483f6-7f49-4c60-9197-dc053405ab92 became leader
	I0318 13:55:00.864158       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-537236_e4a483f6-7f49-4c60-9197-dc053405ab92!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-537236 -n no-preload-537236
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-537236 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-tkq6h
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-537236 describe pod metrics-server-57f55c9bc5-tkq6h
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-537236 describe pod metrics-server-57f55c9bc5-tkq6h: exit status 1 (74.891151ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-tkq6h" not found

** /stderr **
helpers_test.go:279: kubectl --context no-preload-537236 describe pod metrics-server-57f55c9bc5-tkq6h: exit status 1
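The describe call above was issued without a namespace flag, so kubectl looked for the pod in the default namespace; the kubelet messages earlier in the log place metrics-server-57f55c9bc5-tkq6h in kube-system, which would explain the NotFound. The same check scoped to that namespace would be:

	kubectl --context no-preload-537236 -n kube-system get pod metrics-server-57f55c9bc5-tkq6h
	kubectl --context no-preload-537236 -n kube-system describe pod metrics-server-57f55c9bc5-tkq6h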
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.26s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.33s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-569210 -n default-k8s-diff-port-569210
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-03-18 14:04:28.705350282 +0000 UTC m=+6546.092264539
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
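When the wait times out like this, the state of the dashboard pods can be inspected directly with the same label selector and namespace the test polls, for example:

	kubectl --context default-k8s-diff-port-569210 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide
	kubectl --context default-k8s-diff-port-569210 -n kubernetes-dashboard describe pods -l k8s-app=kubernetes-dashboard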
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-569210 -n default-k8s-diff-port-569210
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-569210 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-569210 logs -n 25: (2.231891965s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-909137                              | old-k8s-version-909137       | jenkins | v1.32.0 | 18 Mar 24 13:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-599578                           | kubernetes-upgrade-599578    | jenkins | v1.32.0 | 18 Mar 24 13:39 UTC | 18 Mar 24 13:39 UTC |
	| start   | -p no-preload-537236                                   | no-preload-537236            | jenkins | v1.32.0 | 18 Mar 24 13:39 UTC | 18 Mar 24 13:41 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p cert-expiration-537883                              | cert-expiration-537883       | jenkins | v1.32.0 | 18 Mar 24 13:40 UTC | 18 Mar 24 13:41 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p pause-760389                                        | pause-760389                 | jenkins | v1.32.0 | 18 Mar 24 13:40 UTC | 18 Mar 24 13:40 UTC |
	| start   | -p embed-certs-173036                                  | embed-certs-173036           | jenkins | v1.32.0 | 18 Mar 24 13:40 UTC | 18 Mar 24 13:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-537883                              | cert-expiration-537883       | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	| delete  | -p                                                     | disable-driver-mounts-173866 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	|         | disable-driver-mounts-173866                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-569210 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:42 UTC |
	|         | default-k8s-diff-port-569210                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-173036            | embed-certs-173036           | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-173036                                  | embed-certs-173036           | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-537236             | no-preload-537236            | jenkins | v1.32.0 | 18 Mar 24 13:42 UTC | 18 Mar 24 13:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-537236                                   | no-preload-537236            | jenkins | v1.32.0 | 18 Mar 24 13:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-569210  | default-k8s-diff-port-569210 | jenkins | v1.32.0 | 18 Mar 24 13:43 UTC | 18 Mar 24 13:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-569210 | jenkins | v1.32.0 | 18 Mar 24 13:43 UTC |                     |
	|         | default-k8s-diff-port-569210                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-909137        | old-k8s-version-909137       | jenkins | v1.32.0 | 18 Mar 24 13:43 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-173036                 | embed-certs-173036           | jenkins | v1.32.0 | 18 Mar 24 13:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-173036                                  | embed-certs-173036           | jenkins | v1.32.0 | 18 Mar 24 13:44 UTC | 18 Mar 24 13:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-537236                  | no-preload-537236            | jenkins | v1.32.0 | 18 Mar 24 13:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-537236                                   | no-preload-537236            | jenkins | v1.32.0 | 18 Mar 24 13:44 UTC | 18 Mar 24 13:55 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-909137                              | old-k8s-version-909137       | jenkins | v1.32.0 | 18 Mar 24 13:45 UTC | 18 Mar 24 13:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-909137             | old-k8s-version-909137       | jenkins | v1.32.0 | 18 Mar 24 13:45 UTC | 18 Mar 24 13:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-909137                              | old-k8s-version-909137       | jenkins | v1.32.0 | 18 Mar 24 13:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-569210       | default-k8s-diff-port-569210 | jenkins | v1.32.0 | 18 Mar 24 13:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-569210 | jenkins | v1.32.0 | 18 Mar 24 13:45 UTC | 18 Mar 24 13:55 UTC |
	|         | default-k8s-diff-port-569210                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 13:45:41
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 13:45:41.667747 1157887 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:45:41.667937 1157887 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:45:41.667952 1157887 out.go:304] Setting ErrFile to fd 2...
	I0318 13:45:41.667958 1157887 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:45:41.668616 1157887 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
	I0318 13:45:41.669251 1157887 out.go:298] Setting JSON to false
	I0318 13:45:41.670283 1157887 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":19689,"bootTime":1710749853,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 13:45:41.670349 1157887 start.go:139] virtualization: kvm guest
	I0318 13:45:41.672702 1157887 out.go:177] * [default-k8s-diff-port-569210] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 13:45:41.674325 1157887 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 13:45:41.674336 1157887 notify.go:220] Checking for updates...
	I0318 13:45:41.675874 1157887 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:45:41.677543 1157887 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 13:45:41.679053 1157887 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18429-1106816/.minikube
	I0318 13:45:41.680344 1157887 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 13:45:41.681702 1157887 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:45:41.683304 1157887 config.go:182] Loaded profile config "default-k8s-diff-port-569210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:45:41.683743 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:45:41.683792 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:45:41.698719 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44147
	I0318 13:45:41.699154 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:45:41.699657 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:45:41.699676 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:45:41.699995 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:45:41.700168 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:45:41.700488 1157887 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:45:41.700763 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:45:41.700803 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:45:41.715824 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44459
	I0318 13:45:41.716270 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:45:41.716688 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:45:41.716708 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:45:41.717004 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:45:41.717185 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:45:41.747564 1157887 out.go:177] * Using the kvm2 driver based on existing profile
	I0318 13:45:41.748930 1157887 start.go:297] selected driver: kvm2
	I0318 13:45:41.748944 1157887 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-569210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-569210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.3 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:
0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:45:41.749059 1157887 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:45:41.749725 1157887 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:45:41.749819 1157887 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18429-1106816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 13:45:41.764225 1157887 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 13:45:41.764607 1157887 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:45:41.764679 1157887 cni.go:84] Creating CNI manager for ""
	I0318 13:45:41.764692 1157887 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:45:41.764727 1157887 start.go:340] cluster config:
	{Name:default-k8s-diff-port-569210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-569210 Names
pace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.3 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-ho
st Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:45:41.764824 1157887 iso.go:125] acquiring lock: {Name:mke5f9989ad60de6f54f25c411af7da9f3932a4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:45:41.766561 1157887 out.go:177] * Starting "default-k8s-diff-port-569210" primary control-plane node in "default-k8s-diff-port-569210" cluster
	I0318 13:45:40.044635 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:45:41.767747 1157887 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 13:45:41.767779 1157887 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0318 13:45:41.767799 1157887 cache.go:56] Caching tarball of preloaded images
	I0318 13:45:41.767876 1157887 preload.go:173] Found /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 13:45:41.767887 1157887 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0318 13:45:41.767986 1157887 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/config.json ...
	I0318 13:45:41.768151 1157887 start.go:360] acquireMachinesLock for default-k8s-diff-port-569210: {Name:mk0b1a2e71faf079d0c16c4e1393bdff17be3dfd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:45:46.124607 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:45:49.196561 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:45:55.276657 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:45:58.348606 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:04.428632 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:07.500592 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:13.584558 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:16.652578 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:22.732573 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:25.804745 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:31.884579 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:34.956708 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:41.036614 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:44.108576 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:50.188610 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:53.260646 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:59.340724 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:02.412698 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:08.492603 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:11.564634 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:17.644618 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:20.716642 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:26.796585 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:29.868690 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:35.948613 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:39.020607 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:45.104563 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:48.172547 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:54.252608 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:57.324659 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:03.404600 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:06.476647 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:12.556609 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:15.628640 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:21.708597 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:24.780572 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:30.860662 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:33.932528 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:40.012616 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:43.084569 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:49.164622 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:52.236652 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:58.316619 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:49:01.321139 1157416 start.go:364] duration metric: took 4m21.279664055s to acquireMachinesLock for "no-preload-537236"
	I0318 13:49:01.321252 1157416 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:49:01.321260 1157416 fix.go:54] fixHost starting: 
	I0318 13:49:01.321627 1157416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:49:01.321658 1157416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:49:01.337337 1157416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39431
	I0318 13:49:01.337793 1157416 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:49:01.338235 1157416 main.go:141] libmachine: Using API Version  1
	I0318 13:49:01.338262 1157416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:49:01.338703 1157416 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:49:01.338892 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:49:01.339025 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetState
	I0318 13:49:01.340630 1157416 fix.go:112] recreateIfNeeded on no-preload-537236: state=Stopped err=<nil>
	I0318 13:49:01.340653 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	W0318 13:49:01.340785 1157416 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:49:01.342565 1157416 out.go:177] * Restarting existing kvm2 VM for "no-preload-537236" ...
	I0318 13:49:01.318340 1157263 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:49:01.318378 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetMachineName
	I0318 13:49:01.318795 1157263 buildroot.go:166] provisioning hostname "embed-certs-173036"
	I0318 13:49:01.318829 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetMachineName
	I0318 13:49:01.319041 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:49:01.321007 1157263 machine.go:97] duration metric: took 4m37.382603693s to provisionDockerMachine
	I0318 13:49:01.321051 1157263 fix.go:56] duration metric: took 4m37.403420427s for fixHost
	I0318 13:49:01.321064 1157263 start.go:83] releasing machines lock for "embed-certs-173036", held for 4m37.403446357s
	W0318 13:49:01.321088 1157263 start.go:713] error starting host: provision: host is not running
	W0318 13:49:01.321225 1157263 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0318 13:49:01.321242 1157263 start.go:728] Will try again in 5 seconds ...
	I0318 13:49:01.343844 1157416 main.go:141] libmachine: (no-preload-537236) Calling .Start
	I0318 13:49:01.344003 1157416 main.go:141] libmachine: (no-preload-537236) Ensuring networks are active...
	I0318 13:49:01.344698 1157416 main.go:141] libmachine: (no-preload-537236) Ensuring network default is active
	I0318 13:49:01.345062 1157416 main.go:141] libmachine: (no-preload-537236) Ensuring network mk-no-preload-537236 is active
	I0318 13:49:01.345378 1157416 main.go:141] libmachine: (no-preload-537236) Getting domain xml...
	I0318 13:49:01.346073 1157416 main.go:141] libmachine: (no-preload-537236) Creating domain...
	I0318 13:49:02.522163 1157416 main.go:141] libmachine: (no-preload-537236) Waiting to get IP...
	I0318 13:49:02.522935 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:02.523347 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:02.523420 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:02.523327 1158392 retry.go:31] will retry after 276.248352ms: waiting for machine to come up
	I0318 13:49:02.800962 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:02.801439 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:02.801472 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:02.801381 1158392 retry.go:31] will retry after 318.94167ms: waiting for machine to come up
	I0318 13:49:03.121895 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:03.122276 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:03.122298 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:03.122254 1158392 retry.go:31] will retry after 353.742872ms: waiting for machine to come up
	I0318 13:49:03.477885 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:03.478401 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:03.478439 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:03.478360 1158392 retry.go:31] will retry after 481.537084ms: waiting for machine to come up
	I0318 13:49:03.960991 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:03.961432 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:03.961505 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:03.961416 1158392 retry.go:31] will retry after 647.244695ms: waiting for machine to come up
	I0318 13:49:04.610150 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:04.610563 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:04.610604 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:04.610512 1158392 retry.go:31] will retry after 577.22264ms: waiting for machine to come up
	I0318 13:49:06.321404 1157263 start.go:360] acquireMachinesLock for embed-certs-173036: {Name:mk0b1a2e71faf079d0c16c4e1393bdff17be3dfd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:49:05.189300 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:05.189688 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:05.189722 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:05.189635 1158392 retry.go:31] will retry after 1.064347528s: waiting for machine to come up
	I0318 13:49:06.255734 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:06.256071 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:06.256103 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:06.256016 1158392 retry.go:31] will retry after 1.359025709s: waiting for machine to come up
	I0318 13:49:07.616847 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:07.617313 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:07.617338 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:07.617265 1158392 retry.go:31] will retry after 1.844112s: waiting for machine to come up
	I0318 13:49:09.464239 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:09.464761 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:09.464788 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:09.464703 1158392 retry.go:31] will retry after 1.984375986s: waiting for machine to come up
	I0318 13:49:11.450609 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:11.451100 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:11.451153 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:11.451037 1158392 retry.go:31] will retry after 1.944733714s: waiting for machine to come up
	I0318 13:49:13.397815 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:13.398238 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:13.398265 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:13.398190 1158392 retry.go:31] will retry after 2.44494826s: waiting for machine to come up
	I0318 13:49:15.845711 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:15.846169 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:15.846212 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:15.846128 1158392 retry.go:31] will retry after 2.760857339s: waiting for machine to come up
	I0318 13:49:18.609516 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:18.609917 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:18.609942 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:18.609872 1158392 retry.go:31] will retry after 3.501792324s: waiting for machine to come up
	I0318 13:49:23.501689 1157708 start.go:364] duration metric: took 4m10.403284517s to acquireMachinesLock for "old-k8s-version-909137"
	I0318 13:49:23.501769 1157708 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:49:23.501783 1157708 fix.go:54] fixHost starting: 
	I0318 13:49:23.502238 1157708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:49:23.502279 1157708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:49:23.520223 1157708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41799
	I0318 13:49:23.520696 1157708 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:49:23.521273 1157708 main.go:141] libmachine: Using API Version  1
	I0318 13:49:23.521304 1157708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:49:23.521693 1157708 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:49:23.521934 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:49:23.522089 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetState
	I0318 13:49:23.523696 1157708 fix.go:112] recreateIfNeeded on old-k8s-version-909137: state=Stopped err=<nil>
	I0318 13:49:23.523738 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	W0318 13:49:23.523894 1157708 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:49:23.526253 1157708 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-909137" ...
	I0318 13:49:22.113291 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.113733 1157416 main.go:141] libmachine: (no-preload-537236) Found IP for machine: 192.168.39.7
	I0318 13:49:22.113753 1157416 main.go:141] libmachine: (no-preload-537236) Reserving static IP address...
	I0318 13:49:22.113787 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has current primary IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.114159 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "no-preload-537236", mac: "52:54:00:21:a8:12", ip: "192.168.39.7"} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.114179 1157416 main.go:141] libmachine: (no-preload-537236) DBG | skip adding static IP to network mk-no-preload-537236 - found existing host DHCP lease matching {name: "no-preload-537236", mac: "52:54:00:21:a8:12", ip: "192.168.39.7"}
	I0318 13:49:22.114192 1157416 main.go:141] libmachine: (no-preload-537236) Reserved static IP address: 192.168.39.7
	I0318 13:49:22.114201 1157416 main.go:141] libmachine: (no-preload-537236) Waiting for SSH to be available...
	I0318 13:49:22.114208 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Getting to WaitForSSH function...
	I0318 13:49:22.116603 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.116944 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.116971 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.117082 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Using SSH client type: external
	I0318 13:49:22.117153 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Using SSH private key: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa (-rw-------)
	I0318 13:49:22.117192 1157416 main.go:141] libmachine: (no-preload-537236) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.7 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 13:49:22.117212 1157416 main.go:141] libmachine: (no-preload-537236) DBG | About to run SSH command:
	I0318 13:49:22.117236 1157416 main.go:141] libmachine: (no-preload-537236) DBG | exit 0
	I0318 13:49:22.240543 1157416 main.go:141] libmachine: (no-preload-537236) DBG | SSH cmd err, output: <nil>: 
	I0318 13:49:22.240913 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetConfigRaw
	I0318 13:49:22.241611 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetIP
	I0318 13:49:22.244016 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.244273 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.244302 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.244506 1157416 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236/config.json ...
	I0318 13:49:22.244729 1157416 machine.go:94] provisionDockerMachine start ...
	I0318 13:49:22.244750 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:49:22.244947 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:22.246869 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.247160 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.247198 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.247246 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:22.247401 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.247546 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.247722 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:22.247893 1157416 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:22.248160 1157416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0318 13:49:22.248174 1157416 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 13:49:22.353134 1157416 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 13:49:22.353164 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetMachineName
	I0318 13:49:22.353435 1157416 buildroot.go:166] provisioning hostname "no-preload-537236"
	I0318 13:49:22.353463 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetMachineName
	I0318 13:49:22.353636 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:22.356058 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.356463 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.356491 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.356645 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:22.356846 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.356965 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.357068 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:22.357201 1157416 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:22.357415 1157416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0318 13:49:22.357434 1157416 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-537236 && echo "no-preload-537236" | sudo tee /etc/hostname
	I0318 13:49:22.477651 1157416 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-537236
	
	I0318 13:49:22.477692 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:22.480537 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.480876 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.480905 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.481135 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:22.481342 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.481520 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.481676 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:22.481887 1157416 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:22.482066 1157416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0318 13:49:22.482082 1157416 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-537236' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-537236/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-537236' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 13:49:22.599489 1157416 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:49:22.599566 1157416 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18429-1106816/.minikube CaCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18429-1106816/.minikube}
	I0318 13:49:22.599596 1157416 buildroot.go:174] setting up certificates
	I0318 13:49:22.599609 1157416 provision.go:84] configureAuth start
	I0318 13:49:22.599624 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetMachineName
	I0318 13:49:22.599981 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetIP
	I0318 13:49:22.602425 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.602800 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.602831 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.602986 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:22.605036 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.605331 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.605356 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.605500 1157416 provision.go:143] copyHostCerts
	I0318 13:49:22.605589 1157416 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem, removing ...
	I0318 13:49:22.605600 1157416 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 13:49:22.605665 1157416 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem (1078 bytes)
	I0318 13:49:22.605786 1157416 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem, removing ...
	I0318 13:49:22.605795 1157416 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 13:49:22.605820 1157416 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem (1123 bytes)
	I0318 13:49:22.605895 1157416 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem, removing ...
	I0318 13:49:22.605904 1157416 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 13:49:22.605927 1157416 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem (1679 bytes)
	I0318 13:49:22.606003 1157416 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem org=jenkins.no-preload-537236 san=[127.0.0.1 192.168.39.7 localhost minikube no-preload-537236]
	I0318 13:49:22.810156 1157416 provision.go:177] copyRemoteCerts
	I0318 13:49:22.810249 1157416 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 13:49:22.810283 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:22.813018 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.813343 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.813376 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.813557 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:22.813743 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.813890 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:22.814080 1157416 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa Username:docker}
	I0318 13:49:22.898886 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 13:49:22.926296 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0318 13:49:22.953260 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 13:49:22.981248 1157416 provision.go:87] duration metric: took 381.624842ms to configureAuth
	I0318 13:49:22.981281 1157416 buildroot.go:189] setting minikube options for container-runtime
	I0318 13:49:22.981459 1157416 config.go:182] Loaded profile config "no-preload-537236": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 13:49:22.981573 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:22.984446 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.984848 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.984885 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.985061 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:22.985269 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.985405 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.985595 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:22.985728 1157416 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:22.985911 1157416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0318 13:49:22.985925 1157416 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 13:49:23.259439 1157416 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 13:49:23.259470 1157416 machine.go:97] duration metric: took 1.014725867s to provisionDockerMachine
	I0318 13:49:23.259483 1157416 start.go:293] postStartSetup for "no-preload-537236" (driver="kvm2")
	I0318 13:49:23.259518 1157416 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 13:49:23.259553 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:49:23.259937 1157416 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 13:49:23.259976 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:23.262875 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.263196 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:23.263228 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.263403 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:23.263684 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:23.263861 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:23.264029 1157416 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa Username:docker}
	I0318 13:49:23.348815 1157416 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 13:49:23.353550 1157416 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 13:49:23.353582 1157416 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/addons for local assets ...
	I0318 13:49:23.353659 1157416 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/files for local assets ...
	I0318 13:49:23.353759 1157416 filesync.go:149] local asset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> 11141362.pem in /etc/ssl/certs
	I0318 13:49:23.353885 1157416 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 13:49:23.364831 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:49:23.391345 1157416 start.go:296] duration metric: took 131.846395ms for postStartSetup
	I0318 13:49:23.391396 1157416 fix.go:56] duration metric: took 22.070135111s for fixHost
	I0318 13:49:23.391423 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:23.394229 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.394543 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:23.394583 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.394685 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:23.394937 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:23.395111 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:23.395266 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:23.395433 1157416 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:23.395619 1157416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0318 13:49:23.395631 1157416 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 13:49:23.501504 1157416 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710769763.449975975
	
	I0318 13:49:23.501532 1157416 fix.go:216] guest clock: 1710769763.449975975
	I0318 13:49:23.501542 1157416 fix.go:229] Guest: 2024-03-18 13:49:23.449975975 +0000 UTC Remote: 2024-03-18 13:49:23.39140181 +0000 UTC m=+283.498114537 (delta=58.574165ms)
	I0318 13:49:23.501564 1157416 fix.go:200] guest clock delta is within tolerance: 58.574165ms
	I0318 13:49:23.501584 1157416 start.go:83] releasing machines lock for "no-preload-537236", held for 22.180386627s
	I0318 13:49:23.501612 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:49:23.501900 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetIP
	I0318 13:49:23.504693 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.505130 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:23.505159 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.505331 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:49:23.505889 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:49:23.506092 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:49:23.506198 1157416 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 13:49:23.506252 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:23.506317 1157416 ssh_runner.go:195] Run: cat /version.json
	I0318 13:49:23.506351 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:23.509104 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.509414 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.509446 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:23.509465 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.509625 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:23.509819 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:23.509839 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.509853 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:23.510043 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:23.510103 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:23.510207 1157416 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa Username:docker}
	I0318 13:49:23.510261 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:23.510394 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:23.510541 1157416 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa Username:docker}
	I0318 13:49:23.616831 1157416 ssh_runner.go:195] Run: systemctl --version
	I0318 13:49:23.624184 1157416 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 13:49:23.779709 1157416 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 13:49:23.786535 1157416 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 13:49:23.786594 1157416 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 13:49:23.805716 1157416 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 13:49:23.805743 1157416 start.go:494] detecting cgroup driver to use...
	I0318 13:49:23.805850 1157416 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 13:49:23.825572 1157416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:49:23.842762 1157416 docker.go:217] disabling cri-docker service (if available) ...
	I0318 13:49:23.842817 1157416 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 13:49:23.859385 1157416 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 13:49:23.876416 1157416 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 13:49:24.005995 1157416 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 13:49:24.193107 1157416 docker.go:233] disabling docker service ...
	I0318 13:49:24.193173 1157416 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 13:49:24.212825 1157416 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 13:49:24.230448 1157416 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 13:49:24.385445 1157416 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 13:49:24.548640 1157416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 13:49:24.564678 1157416 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:49:24.592528 1157416 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 13:49:24.592601 1157416 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:49:24.604303 1157416 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 13:49:24.604394 1157416 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:49:24.616123 1157416 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:49:24.627956 1157416 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:49:24.639194 1157416 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 13:49:24.650789 1157416 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 13:49:24.661390 1157416 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 13:49:24.661443 1157416 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 13:49:24.677180 1157416 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 13:49:24.687973 1157416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:49:24.827386 1157416 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 13:49:24.978805 1157416 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 13:49:24.978898 1157416 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 13:49:24.985647 1157416 start.go:562] Will wait 60s for crictl version
	I0318 13:49:24.985735 1157416 ssh_runner.go:195] Run: which crictl
	I0318 13:49:24.990325 1157416 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 13:49:25.038948 1157416 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 13:49:25.039020 1157416 ssh_runner.go:195] Run: crio --version
	I0318 13:49:25.068855 1157416 ssh_runner.go:195] Run: crio --version
	I0318 13:49:25.107104 1157416 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0318 13:49:23.527811 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .Start
	I0318 13:49:23.528000 1157708 main.go:141] libmachine: (old-k8s-version-909137) Ensuring networks are active...
	I0318 13:49:23.528714 1157708 main.go:141] libmachine: (old-k8s-version-909137) Ensuring network default is active
	I0318 13:49:23.529036 1157708 main.go:141] libmachine: (old-k8s-version-909137) Ensuring network mk-old-k8s-version-909137 is active
	I0318 13:49:23.529491 1157708 main.go:141] libmachine: (old-k8s-version-909137) Getting domain xml...
	I0318 13:49:23.530324 1157708 main.go:141] libmachine: (old-k8s-version-909137) Creating domain...
	I0318 13:49:24.765648 1157708 main.go:141] libmachine: (old-k8s-version-909137) Waiting to get IP...
	I0318 13:49:24.766664 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:24.767122 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:24.767182 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:24.767081 1158507 retry.go:31] will retry after 250.785143ms: waiting for machine to come up
	I0318 13:49:25.019755 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:25.020238 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:25.020273 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:25.020185 1158507 retry.go:31] will retry after 346.894257ms: waiting for machine to come up
	I0318 13:49:25.368815 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:25.369335 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:25.369372 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:25.369268 1158507 retry.go:31] will retry after 367.316359ms: waiting for machine to come up
	I0318 13:49:25.737835 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:25.738404 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:25.738438 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:25.738337 1158507 retry.go:31] will retry after 479.291041ms: waiting for machine to come up
	I0318 13:49:26.219103 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:26.219568 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:26.219599 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:26.219523 1158507 retry.go:31] will retry after 552.309382ms: waiting for machine to come up
	I0318 13:49:26.773363 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:26.773905 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:26.773935 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:26.773857 1158507 retry.go:31] will retry after 703.087388ms: waiting for machine to come up
	I0318 13:49:27.478730 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:27.479330 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:27.479363 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:27.479270 1158507 retry.go:31] will retry after 1.136606935s: waiting for machine to come up
	I0318 13:49:25.108504 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetIP
	I0318 13:49:25.111416 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:25.111795 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:25.111827 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:25.112035 1157416 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0318 13:49:25.116688 1157416 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:49:25.131526 1157416 kubeadm.go:877] updating cluster {Name:no-preload-537236 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-537236 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 13:49:25.131663 1157416 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0318 13:49:25.131698 1157416 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:49:25.176340 1157416 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0318 13:49:25.176378 1157416 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 13:49:25.176474 1157416 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:49:25.176487 1157416 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 13:49:25.176524 1157416 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 13:49:25.176537 1157416 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 13:49:25.176592 1157416 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0318 13:49:25.176619 1157416 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 13:49:25.176773 1157416 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0318 13:49:25.176789 1157416 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 13:49:25.178479 1157416 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 13:49:25.178485 1157416 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 13:49:25.178486 1157416 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 13:49:25.178488 1157416 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 13:49:25.178480 1157416 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0318 13:49:25.178479 1157416 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:49:25.178540 1157416 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0318 13:49:25.178911 1157416 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 13:49:25.334172 1157416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 13:49:25.334873 1157416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0318 13:49:25.338330 1157416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 13:49:25.338825 1157416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0318 13:49:25.340192 1157416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 13:49:25.350053 1157416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0318 13:49:25.356621 1157416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 13:49:25.472528 1157416 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0318 13:49:25.472571 1157416 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 13:49:25.472627 1157416 ssh_runner.go:195] Run: which crictl
	I0318 13:49:25.630923 1157416 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0318 13:49:25.630996 1157416 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 13:49:25.631001 1157416 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0318 13:49:25.631042 1157416 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 13:49:25.630933 1157416 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0318 13:49:25.631089 1157416 ssh_runner.go:195] Run: which crictl
	I0318 13:49:25.631102 1157416 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0318 13:49:25.631134 1157416 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0318 13:49:25.631107 1157416 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 13:49:25.631169 1157416 ssh_runner.go:195] Run: which crictl
	I0318 13:49:25.631183 1157416 ssh_runner.go:195] Run: which crictl
	I0318 13:49:25.631052 1157416 ssh_runner.go:195] Run: which crictl
	I0318 13:49:25.631199 1157416 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0318 13:49:25.631220 1157416 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 13:49:25.631233 1157416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 13:49:25.631264 1157416 ssh_runner.go:195] Run: which crictl
	I0318 13:49:25.642598 1157416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 13:49:25.708001 1157416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 13:49:25.708026 1157416 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0318 13:49:25.708068 1157416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 13:49:25.708003 1157416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0318 13:49:25.708129 1157416 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 13:49:25.708162 1157416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0318 13:49:25.708225 1157416 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0318 13:49:25.708286 1157416 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 13:49:25.790492 1157416 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0318 13:49:25.790623 1157416 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 13:49:25.804436 1157416 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0318 13:49:25.804465 1157416 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 13:49:25.804503 1157416 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0318 13:49:25.804532 1157416 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 13:49:25.804583 1157416 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0318 13:49:25.804657 1157416 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0318 13:49:25.804684 1157416 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0318 13:49:25.804720 1157416 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0318 13:49:25.804768 1157416 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0318 13:49:25.804801 1157416 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0318 13:49:25.807681 1157416 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0318 13:49:26.162719 1157416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:49:27.887846 1157416 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.083277557s)
	I0318 13:49:27.887882 1157416 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0: (2.083274384s)
	I0318 13:49:27.887894 1157416 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0318 13:49:27.887916 1157416 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0318 13:49:27.887927 1157416 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 13:49:27.887944 1157416 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: (2.083121634s)
	I0318 13:49:27.887971 1157416 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0318 13:49:27.887971 1157416 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.083181595s)
	I0318 13:49:27.887990 1157416 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0318 13:49:27.888003 1157416 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.725256044s)
	I0318 13:49:27.888008 1157416 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 13:49:27.888040 1157416 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0318 13:49:27.888080 1157416 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:49:27.888114 1157416 ssh_runner.go:195] Run: which crictl
	I0318 13:49:27.893415 1157416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:49:28.617273 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:28.617711 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:28.617740 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:28.617665 1158507 retry.go:31] will retry after 947.818334ms: waiting for machine to come up
	I0318 13:49:29.566814 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:29.567157 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:29.567177 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:29.567121 1158507 retry.go:31] will retry after 1.328243934s: waiting for machine to come up
	I0318 13:49:30.897514 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:30.898041 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:30.898068 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:30.897988 1158507 retry.go:31] will retry after 2.213855703s: waiting for machine to come up
	I0318 13:49:30.272393 1157416 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.384351202s)
	I0318 13:49:30.272442 1157416 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0318 13:49:30.272459 1157416 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.379011748s)
	I0318 13:49:30.272477 1157416 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 13:49:30.272508 1157416 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0318 13:49:30.272589 1157416 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 13:49:30.272623 1157416 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0318 13:49:32.857821 1157416 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.585192694s)
	I0318 13:49:32.857907 1157416 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.585263486s)
	I0318 13:49:32.857990 1157416 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0318 13:49:32.857918 1157416 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0318 13:49:32.858038 1157416 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0318 13:49:32.858097 1157416 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0318 13:49:33.113781 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:33.114303 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:33.114332 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:33.114245 1158507 retry.go:31] will retry after 2.075415123s: waiting for machine to come up
	I0318 13:49:35.191096 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:35.191631 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:35.191665 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:35.191582 1158507 retry.go:31] will retry after 3.520577528s: waiting for machine to come up
	I0318 13:49:36.677356 1157416 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.8192286s)
	I0318 13:49:36.677398 1157416 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0318 13:49:36.677423 1157416 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0318 13:49:36.677464 1157416 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0318 13:49:38.844843 1157416 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.167353366s)
	I0318 13:49:38.844895 1157416 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0318 13:49:38.844933 1157416 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0318 13:49:38.845020 1157416 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0318 13:49:38.713777 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:38.714129 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:38.714242 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:38.714143 1158507 retry.go:31] will retry after 3.46520277s: waiting for machine to come up
	I0318 13:49:42.181399 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.181856 1157708 main.go:141] libmachine: (old-k8s-version-909137) Found IP for machine: 192.168.72.135
	I0318 13:49:42.181888 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has current primary IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.181897 1157708 main.go:141] libmachine: (old-k8s-version-909137) Reserving static IP address...
	I0318 13:49:42.182344 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "old-k8s-version-909137", mac: "52:54:00:58:c0:cb", ip: "192.168.72.135"} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.182387 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | skip adding static IP to network mk-old-k8s-version-909137 - found existing host DHCP lease matching {name: "old-k8s-version-909137", mac: "52:54:00:58:c0:cb", ip: "192.168.72.135"}
	I0318 13:49:42.182424 1157708 main.go:141] libmachine: (old-k8s-version-909137) Reserved static IP address: 192.168.72.135
	I0318 13:49:42.182453 1157708 main.go:141] libmachine: (old-k8s-version-909137) Waiting for SSH to be available...
	I0318 13:49:42.182470 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | Getting to WaitForSSH function...
	I0318 13:49:42.184589 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.184958 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.184999 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.185061 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | Using SSH client type: external
	I0318 13:49:42.185120 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | Using SSH private key: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/id_rsa (-rw-------)
	I0318 13:49:42.185162 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.135 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 13:49:42.185189 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | About to run SSH command:
	I0318 13:49:42.185204 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | exit 0
	I0318 13:49:42.312570 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | SSH cmd err, output: <nil>: 
	I0318 13:49:42.313005 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetConfigRaw
	I0318 13:49:42.313693 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetIP
	I0318 13:49:42.316497 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.316931 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.316965 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.317239 1157708 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/config.json ...
	I0318 13:49:42.317442 1157708 machine.go:94] provisionDockerMachine start ...
	I0318 13:49:42.317462 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:49:42.317688 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:42.320076 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.320444 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.320485 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.320655 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:42.320818 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:42.320980 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:42.321093 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:42.321257 1157708 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:42.321510 1157708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.135 22 <nil> <nil>}
	I0318 13:49:42.321528 1157708 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 13:49:42.433138 1157708 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 13:49:42.433186 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetMachineName
	I0318 13:49:42.433524 1157708 buildroot.go:166] provisioning hostname "old-k8s-version-909137"
	I0318 13:49:42.433558 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetMachineName
	I0318 13:49:42.433808 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:42.436869 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.437230 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.437264 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.437506 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:42.437739 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:42.437915 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:42.438092 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:42.438285 1157708 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:42.438513 1157708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.135 22 <nil> <nil>}
	I0318 13:49:42.438534 1157708 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-909137 && echo "old-k8s-version-909137" | sudo tee /etc/hostname
	I0318 13:49:42.560410 1157708 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-909137
	
	I0318 13:49:42.560439 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:42.563304 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.563637 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.563673 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.563837 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:42.564053 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:42.564236 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:42.564377 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:42.564581 1157708 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:42.564802 1157708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.135 22 <nil> <nil>}
	I0318 13:49:42.564820 1157708 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-909137' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-909137/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-909137' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 13:49:42.687138 1157708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:49:42.687173 1157708 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18429-1106816/.minikube CaCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18429-1106816/.minikube}
	I0318 13:49:42.687199 1157708 buildroot.go:174] setting up certificates
	I0318 13:49:42.687211 1157708 provision.go:84] configureAuth start
	I0318 13:49:42.687223 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetMachineName
	I0318 13:49:42.687600 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetIP
	I0318 13:49:42.690738 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.691148 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.691179 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.691316 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:42.693730 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.694070 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.694092 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.694255 1157708 provision.go:143] copyHostCerts
	I0318 13:49:42.694336 1157708 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem, removing ...
	I0318 13:49:42.694350 1157708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 13:49:42.694422 1157708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem (1078 bytes)
	I0318 13:49:42.694597 1157708 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem, removing ...
	I0318 13:49:42.694614 1157708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 13:49:42.694652 1157708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem (1123 bytes)
	I0318 13:49:42.694747 1157708 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem, removing ...
	I0318 13:49:42.694756 1157708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 13:49:42.694775 1157708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem (1679 bytes)
	I0318 13:49:42.694823 1157708 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-909137 san=[127.0.0.1 192.168.72.135 localhost minikube old-k8s-version-909137]
	I0318 13:49:42.920182 1157708 provision.go:177] copyRemoteCerts
	I0318 13:49:42.920255 1157708 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 13:49:42.920295 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:42.923074 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.923374 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.923408 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.923533 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:42.923755 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:42.923957 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:42.924095 1157708 sshutil.go:53] new ssh client: &{IP:192.168.72.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/id_rsa Username:docker}
	I0318 13:49:43.649771 1157887 start.go:364] duration metric: took 4m1.881584436s to acquireMachinesLock for "default-k8s-diff-port-569210"
	I0318 13:49:43.649850 1157887 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:49:43.649868 1157887 fix.go:54] fixHost starting: 
	I0318 13:49:43.650335 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:49:43.650378 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:49:43.668606 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36261
	I0318 13:49:43.669107 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:49:43.669721 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:49:43.669755 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:49:43.670092 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:49:43.670269 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:49:43.670427 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetState
	I0318 13:49:43.671973 1157887 fix.go:112] recreateIfNeeded on default-k8s-diff-port-569210: state=Stopped err=<nil>
	I0318 13:49:43.672021 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	W0318 13:49:43.672150 1157887 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:49:43.673832 1157887 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-569210" ...
	I0318 13:49:40.621208 1157416 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (1.776156882s)
	I0318 13:49:40.621252 1157416 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0318 13:49:40.621281 1157416 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0318 13:49:40.621322 1157416 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0318 13:49:41.582256 1157416 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0318 13:49:41.582316 1157416 cache_images.go:123] Successfully loaded all cached images
	I0318 13:49:41.582324 1157416 cache_images.go:92] duration metric: took 16.405930257s to LoadCachedImages
	I0318 13:49:41.582341 1157416 kubeadm.go:928] updating node { 192.168.39.7 8443 v1.29.0-rc.2 crio true true} ...
	I0318 13:49:41.582550 1157416 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-537236 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-537236 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 13:49:41.582663 1157416 ssh_runner.go:195] Run: crio config
	I0318 13:49:41.635043 1157416 cni.go:84] Creating CNI manager for ""
	I0318 13:49:41.635074 1157416 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:49:41.635093 1157416 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 13:49:41.635128 1157416 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.7 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-537236 NodeName:no-preload-537236 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 13:49:41.635322 1157416 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.7
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-537236"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.7
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.7"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 13:49:41.635446 1157416 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0318 13:49:41.647072 1157416 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 13:49:41.647148 1157416 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 13:49:41.657448 1157416 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0318 13:49:41.675819 1157416 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0318 13:49:41.693989 1157416 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0318 13:49:41.714954 1157416 ssh_runner.go:195] Run: grep 192.168.39.7	control-plane.minikube.internal$ /etc/hosts
	I0318 13:49:41.719161 1157416 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.7	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:49:41.732228 1157416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:49:41.871286 1157416 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:49:41.892827 1157416 certs.go:68] Setting up /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236 for IP: 192.168.39.7
	I0318 13:49:41.892850 1157416 certs.go:194] generating shared ca certs ...
	I0318 13:49:41.892868 1157416 certs.go:226] acquiring lock for ca certs: {Name:mka24dbce78b8549c3bcfe7842863cce2dcda207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:49:41.893054 1157416 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key
	I0318 13:49:41.893110 1157416 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key
	I0318 13:49:41.893125 1157416 certs.go:256] generating profile certs ...
	I0318 13:49:41.893246 1157416 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236/client.key
	I0318 13:49:41.893317 1157416 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236/apiserver.key.844e83a6
	I0318 13:49:41.893366 1157416 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236/proxy-client.key
	I0318 13:49:41.893482 1157416 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem (1338 bytes)
	W0318 13:49:41.893518 1157416 certs.go:480] ignoring /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136_empty.pem, impossibly tiny 0 bytes
	I0318 13:49:41.893528 1157416 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 13:49:41.893552 1157416 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem (1078 bytes)
	I0318 13:49:41.893573 1157416 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem (1123 bytes)
	I0318 13:49:41.893594 1157416 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem (1679 bytes)
	I0318 13:49:41.893628 1157416 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:49:41.894503 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 13:49:41.942278 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 13:49:41.978436 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 13:49:42.007161 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 13:49:42.036410 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0318 13:49:42.073179 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 13:49:42.098201 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 13:49:42.131599 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 13:49:42.159159 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem --> /usr/share/ca-certificates/1114136.pem (1338 bytes)
	I0318 13:49:42.186290 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /usr/share/ca-certificates/11141362.pem (1708 bytes)
	I0318 13:49:42.214362 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 13:49:42.241240 1157416 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 13:49:42.260511 1157416 ssh_runner.go:195] Run: openssl version
	I0318 13:49:42.267047 1157416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1114136.pem && ln -fs /usr/share/ca-certificates/1114136.pem /etc/ssl/certs/1114136.pem"
	I0318 13:49:42.278582 1157416 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1114136.pem
	I0318 13:49:42.283566 1157416 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:26 /usr/share/ca-certificates/1114136.pem
	I0318 13:49:42.283609 1157416 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1114136.pem
	I0318 13:49:42.289658 1157416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1114136.pem /etc/ssl/certs/51391683.0"
	I0318 13:49:42.300954 1157416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11141362.pem && ln -fs /usr/share/ca-certificates/11141362.pem /etc/ssl/certs/11141362.pem"
	I0318 13:49:42.312828 1157416 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11141362.pem
	I0318 13:49:42.319182 1157416 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:26 /usr/share/ca-certificates/11141362.pem
	I0318 13:49:42.319251 1157416 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11141362.pem
	I0318 13:49:42.325767 1157416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11141362.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 13:49:42.337544 1157416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 13:49:42.349053 1157416 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:49:42.354197 1157416 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:49:42.354249 1157416 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:49:42.361200 1157416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 13:49:42.374825 1157416 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:49:42.380098 1157416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 13:49:42.387161 1157416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 13:49:42.393702 1157416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 13:49:42.400193 1157416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 13:49:42.406243 1157416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 13:49:42.412423 1157416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 13:49:42.418599 1157416 kubeadm.go:391] StartCluster: {Name:no-preload-537236 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-537236 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:49:42.418747 1157416 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 13:49:42.418785 1157416 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:49:42.468980 1157416 cri.go:89] found id: ""
	I0318 13:49:42.469088 1157416 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 13:49:42.481101 1157416 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 13:49:42.481130 1157416 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 13:49:42.481137 1157416 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 13:49:42.481190 1157416 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 13:49:42.493014 1157416 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:49:42.494041 1157416 kubeconfig.go:125] found "no-preload-537236" server: "https://192.168.39.7:8443"
	I0318 13:49:42.496519 1157416 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 13:49:42.507415 1157416 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.7
	I0318 13:49:42.507448 1157416 kubeadm.go:1154] stopping kube-system containers ...
	I0318 13:49:42.507460 1157416 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 13:49:42.507513 1157416 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:49:42.554791 1157416 cri.go:89] found id: ""
	I0318 13:49:42.554859 1157416 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 13:49:42.574054 1157416 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:49:42.584928 1157416 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:49:42.584955 1157416 kubeadm.go:156] found existing configuration files:
	
	I0318 13:49:42.585009 1157416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:49:42.594987 1157416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:49:42.595045 1157416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:49:42.605058 1157416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:49:42.614968 1157416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:49:42.615042 1157416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:49:42.625169 1157416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:49:42.634838 1157416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:49:42.634905 1157416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:49:42.644785 1157416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:49:42.654196 1157416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:49:42.654254 1157416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:49:42.663757 1157416 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:49:42.673956 1157416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:42.792913 1157416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:43.799012 1157416 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.006050828s)
	I0318 13:49:43.799075 1157416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:44.061808 1157416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:44.189349 1157416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:44.329800 1157416 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:49:44.329897 1157416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:44.829990 1157416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:43.007024 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 13:49:43.033952 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0318 13:49:43.060218 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 13:49:43.086087 1157708 provision.go:87] duration metric: took 398.861833ms to configureAuth
	I0318 13:49:43.086116 1157708 buildroot.go:189] setting minikube options for container-runtime
	I0318 13:49:43.086326 1157708 config.go:182] Loaded profile config "old-k8s-version-909137": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0318 13:49:43.086442 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:43.089200 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.089534 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:43.089562 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.089758 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:43.089965 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:43.090134 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:43.090286 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:43.090501 1157708 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:43.090718 1157708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.135 22 <nil> <nil>}
	I0318 13:49:43.090744 1157708 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 13:49:43.401681 1157708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 13:49:43.401715 1157708 machine.go:97] duration metric: took 1.084258164s to provisionDockerMachine
	I0318 13:49:43.401728 1157708 start.go:293] postStartSetup for "old-k8s-version-909137" (driver="kvm2")
	I0318 13:49:43.401739 1157708 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 13:49:43.401759 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:49:43.402073 1157708 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 13:49:43.402116 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:43.404775 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.405164 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:43.405192 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.405335 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:43.405525 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:43.405740 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:43.405884 1157708 sshutil.go:53] new ssh client: &{IP:192.168.72.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/id_rsa Username:docker}
	I0318 13:49:43.493000 1157708 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 13:49:43.497705 1157708 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 13:49:43.497740 1157708 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/addons for local assets ...
	I0318 13:49:43.497818 1157708 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/files for local assets ...
	I0318 13:49:43.497931 1157708 filesync.go:149] local asset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> 11141362.pem in /etc/ssl/certs
	I0318 13:49:43.498058 1157708 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 13:49:43.509185 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:49:43.535401 1157708 start.go:296] duration metric: took 133.657179ms for postStartSetup
	I0318 13:49:43.535454 1157708 fix.go:56] duration metric: took 20.033670705s for fixHost
	I0318 13:49:43.535482 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:43.538464 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.538964 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:43.538998 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.539178 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:43.539386 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:43.539528 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:43.539702 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:43.539899 1157708 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:43.540120 1157708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.135 22 <nil> <nil>}
	I0318 13:49:43.540133 1157708 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 13:49:43.649578 1157708 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710769783.596310102
	
	I0318 13:49:43.649610 1157708 fix.go:216] guest clock: 1710769783.596310102
	I0318 13:49:43.649621 1157708 fix.go:229] Guest: 2024-03-18 13:49:43.596310102 +0000 UTC Remote: 2024-03-18 13:49:43.535459129 +0000 UTC m=+270.592972067 (delta=60.850973ms)
	I0318 13:49:43.649656 1157708 fix.go:200] guest clock delta is within tolerance: 60.850973ms
	I0318 13:49:43.649663 1157708 start.go:83] releasing machines lock for "old-k8s-version-909137", held for 20.147918331s
	I0318 13:49:43.649689 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:49:43.650002 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetIP
	I0318 13:49:43.652712 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.653114 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:43.653148 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.653278 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:49:43.653873 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:49:43.654112 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:49:43.654198 1157708 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 13:49:43.654264 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:43.654333 1157708 ssh_runner.go:195] Run: cat /version.json
	I0318 13:49:43.654369 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:43.657281 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.657390 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.657741 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:43.657811 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:43.657830 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.657855 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:43.657918 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.658016 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:43.658065 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:43.658199 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:43.658245 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:43.658326 1157708 sshutil.go:53] new ssh client: &{IP:192.168.72.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/id_rsa Username:docker}
	I0318 13:49:43.658411 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:43.658574 1157708 sshutil.go:53] new ssh client: &{IP:192.168.72.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/id_rsa Username:docker}
	I0318 13:49:43.737787 1157708 ssh_runner.go:195] Run: systemctl --version
	I0318 13:49:43.769157 1157708 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 13:49:43.920376 1157708 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 13:49:43.928165 1157708 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 13:49:43.928253 1157708 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 13:49:43.946102 1157708 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 13:49:43.946133 1157708 start.go:494] detecting cgroup driver to use...
	I0318 13:49:43.946210 1157708 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 13:49:43.963482 1157708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:49:43.978540 1157708 docker.go:217] disabling cri-docker service (if available) ...
	I0318 13:49:43.978613 1157708 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 13:49:43.999525 1157708 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 13:49:44.021242 1157708 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 13:49:44.198165 1157708 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 13:49:44.363408 1157708 docker.go:233] disabling docker service ...
	I0318 13:49:44.363474 1157708 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 13:49:44.383527 1157708 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 13:49:44.398888 1157708 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 13:49:44.547711 1157708 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 13:49:44.662762 1157708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 13:49:44.678786 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:49:44.702931 1157708 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0318 13:49:44.703004 1157708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:49:44.721453 1157708 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 13:49:44.721519 1157708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:49:44.739487 1157708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:49:44.757379 1157708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:49:44.777508 1157708 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 13:49:44.798788 1157708 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 13:49:44.814280 1157708 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 13:49:44.814383 1157708 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 13:49:44.836507 1157708 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 13:49:44.852614 1157708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:49:44.994352 1157708 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 13:49:45.184815 1157708 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 13:49:45.184907 1157708 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 13:49:45.190649 1157708 start.go:562] Will wait 60s for crictl version
	I0318 13:49:45.190724 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:45.195265 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 13:49:45.242737 1157708 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 13:49:45.242850 1157708 ssh_runner.go:195] Run: crio --version
	I0318 13:49:45.288154 1157708 ssh_runner.go:195] Run: crio --version
	I0318 13:49:45.331441 1157708 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0318 13:49:43.675531 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .Start
	I0318 13:49:43.675763 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Ensuring networks are active...
	I0318 13:49:43.676642 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Ensuring network default is active
	I0318 13:49:43.677014 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Ensuring network mk-default-k8s-diff-port-569210 is active
	I0318 13:49:43.677510 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Getting domain xml...
	I0318 13:49:43.678319 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Creating domain...
	I0318 13:49:45.002977 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting to get IP...
	I0318 13:49:45.003870 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:45.004406 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:45.004499 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:45.004392 1158648 retry.go:31] will retry after 294.950888ms: waiting for machine to come up
	I0318 13:49:45.301264 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:45.301835 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:45.301863 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:45.301747 1158648 retry.go:31] will retry after 291.810051ms: waiting for machine to come up
	I0318 13:49:45.595571 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:45.596720 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:45.596832 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:45.596786 1158648 retry.go:31] will retry after 390.232445ms: waiting for machine to come up
	I0318 13:49:45.988661 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:45.989506 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:45.989534 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:45.989393 1158648 retry.go:31] will retry after 487.148784ms: waiting for machine to come up
	I0318 13:49:46.477982 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:46.478667 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:46.478701 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:46.478600 1158648 retry.go:31] will retry after 474.795485ms: waiting for machine to come up
	I0318 13:49:45.332975 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetIP
	I0318 13:49:45.336274 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:45.336701 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:45.336753 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:45.336985 1157708 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0318 13:49:45.343147 1157708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:49:45.361840 1157708 kubeadm.go:877] updating cluster {Name:old-k8s-version-909137 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-909137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.135 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 13:49:45.361982 1157708 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0318 13:49:45.362040 1157708 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:49:45.419490 1157708 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 13:49:45.419587 1157708 ssh_runner.go:195] Run: which lz4
	I0318 13:49:45.424689 1157708 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 13:49:45.431110 1157708 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 13:49:45.431155 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0318 13:49:47.510385 1157708 crio.go:444] duration metric: took 2.085724633s to copy over tarball
	I0318 13:49:47.510483 1157708 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 13:49:45.330925 1157416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:45.364854 1157416 api_server.go:72] duration metric: took 1.035057096s to wait for apiserver process to appear ...
	I0318 13:49:45.364883 1157416 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:49:45.364927 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:49:45.365577 1157416 api_server.go:269] stopped: https://192.168.39.7:8443/healthz: Get "https://192.168.39.7:8443/healthz": dial tcp 192.168.39.7:8443: connect: connection refused
	I0318 13:49:45.865126 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:49:49.135799 1157416 api_server.go:279] https://192.168.39.7:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 13:49:49.135840 1157416 api_server.go:103] status: https://192.168.39.7:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 13:49:49.135862 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:49:49.154112 1157416 api_server.go:279] https://192.168.39.7:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 13:49:49.154142 1157416 api_server.go:103] status: https://192.168.39.7:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 13:49:49.365566 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:49:49.375812 1157416 api_server.go:279] https://192.168.39.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:49:49.375862 1157416 api_server.go:103] status: https://192.168.39.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:49:49.865027 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:49:49.873132 1157416 api_server.go:279] https://192.168.39.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:49:49.873176 1157416 api_server.go:103] status: https://192.168.39.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:49:50.365178 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:49:50.371461 1157416 api_server.go:279] https://192.168.39.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:49:50.371506 1157416 api_server.go:103] status: https://192.168.39.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:49:50.865038 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:49:50.870329 1157416 api_server.go:279] https://192.168.39.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:49:50.870383 1157416 api_server.go:103] status: https://192.168.39.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:49:51.365030 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:49:51.370284 1157416 api_server.go:279] https://192.168.39.7:8443/healthz returned 200:
	ok
	I0318 13:49:51.379599 1157416 api_server.go:141] control plane version: v1.29.0-rc.2
	I0318 13:49:51.379633 1157416 api_server.go:131] duration metric: took 6.014741397s to wait for apiserver health ...
	I0318 13:49:51.379645 1157416 cni.go:84] Creating CNI manager for ""
	I0318 13:49:51.379654 1157416 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:49:51.582399 1157416 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 13:49:46.955128 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:46.955620 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:46.955649 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:46.955579 1158648 retry.go:31] will retry after 817.278037ms: waiting for machine to come up
	I0318 13:49:47.774954 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:47.775449 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:47.775480 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:47.775391 1158648 retry.go:31] will retry after 1.032655883s: waiting for machine to come up
	I0318 13:49:48.810156 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:48.810699 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:48.810730 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:48.810644 1158648 retry.go:31] will retry after 1.1441145s: waiting for machine to come up
	I0318 13:49:49.956702 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:49.957179 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:49.957214 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:49.957105 1158648 retry.go:31] will retry after 1.428592019s: waiting for machine to come up
	I0318 13:49:51.387025 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:51.387627 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:51.387660 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:51.387555 1158648 retry.go:31] will retry after 2.266795202s: waiting for machine to come up
	I0318 13:49:50.947045 1157708 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.436514023s)
	I0318 13:49:50.947084 1157708 crio.go:451] duration metric: took 3.436661543s to extract the tarball
	I0318 13:49:50.947095 1157708 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 13:49:51.007406 1157708 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:49:51.048060 1157708 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 13:49:51.048091 1157708 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 13:49:51.048181 1157708 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:49:51.048228 1157708 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 13:49:51.048287 1157708 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0318 13:49:51.048346 1157708 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0318 13:49:51.048398 1157708 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 13:49:51.048432 1157708 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0318 13:49:51.048232 1157708 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 13:49:51.048183 1157708 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 13:49:51.049960 1157708 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0318 13:49:51.050268 1157708 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:49:51.050288 1157708 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0318 13:49:51.050355 1157708 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 13:49:51.050594 1157708 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 13:49:51.050627 1157708 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0318 13:49:51.050584 1157708 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 13:49:51.051230 1157708 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 13:49:51.219906 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0318 13:49:51.220734 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 13:49:51.235283 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0318 13:49:51.236445 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0318 13:49:51.246700 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0318 13:49:51.251299 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0318 13:49:51.311054 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0318 13:49:51.311292 1157708 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0318 13:49:51.311336 1157708 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0318 13:49:51.311389 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:51.343594 1157708 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0318 13:49:51.343649 1157708 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 13:49:51.343739 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:51.391608 1157708 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0318 13:49:51.391657 1157708 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 13:49:51.391706 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:51.448987 1157708 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0318 13:49:51.449029 1157708 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0318 13:49:51.449058 1157708 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 13:49:51.449061 1157708 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0318 13:49:51.449088 1157708 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0318 13:49:51.449103 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:51.449035 1157708 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0318 13:49:51.449135 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0318 13:49:51.449178 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:51.449207 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 13:49:51.449245 1157708 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0318 13:49:51.449267 1157708 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 13:49:51.449317 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:51.449210 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:51.449223 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0318 13:49:51.469614 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0318 13:49:51.469613 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0318 13:49:51.562455 1157708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0318 13:49:51.562506 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0318 13:49:51.564170 1157708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0318 13:49:51.564269 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0318 13:49:51.578471 1157708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0318 13:49:51.615689 1157708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0318 13:49:51.615708 1157708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0318 13:49:51.657287 1157708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0318 13:49:51.657361 1157708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0318 13:49:51.956746 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:49:52.106933 1157708 cache_images.go:92] duration metric: took 1.058823514s to LoadCachedImages
	W0318 13:49:52.107046 1157708 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0318 13:49:52.107064 1157708 kubeadm.go:928] updating node { 192.168.72.135 8443 v1.20.0 crio true true} ...
	I0318 13:49:52.107259 1157708 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-909137 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.135
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-909137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 13:49:52.107348 1157708 ssh_runner.go:195] Run: crio config
	I0318 13:49:52.163493 1157708 cni.go:84] Creating CNI manager for ""
	I0318 13:49:52.163526 1157708 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:49:52.163546 1157708 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 13:49:52.163572 1157708 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.135 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-909137 NodeName:old-k8s-version-909137 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.135"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.135 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0318 13:49:52.163740 1157708 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.135
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-909137"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.135
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.135"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 13:49:52.163818 1157708 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0318 13:49:52.175668 1157708 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 13:49:52.175740 1157708 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 13:49:52.186745 1157708 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0318 13:49:52.209877 1157708 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 13:49:52.232921 1157708 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0318 13:49:52.256571 1157708 ssh_runner.go:195] Run: grep 192.168.72.135	control-plane.minikube.internal$ /etc/hosts
	I0318 13:49:52.262776 1157708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.135	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:49:52.278435 1157708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:49:52.422705 1157708 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:49:52.443710 1157708 certs.go:68] Setting up /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137 for IP: 192.168.72.135
	I0318 13:49:52.443740 1157708 certs.go:194] generating shared ca certs ...
	I0318 13:49:52.443760 1157708 certs.go:226] acquiring lock for ca certs: {Name:mka24dbce78b8549c3bcfe7842863cce2dcda207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:49:52.443951 1157708 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key
	I0318 13:49:52.444009 1157708 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key
	I0318 13:49:52.444023 1157708 certs.go:256] generating profile certs ...
	I0318 13:49:52.444155 1157708 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/client.key
	I0318 13:49:52.444239 1157708 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/apiserver.key.e9806bd6
	I0318 13:49:52.444303 1157708 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/proxy-client.key
	I0318 13:49:52.444492 1157708 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem (1338 bytes)
	W0318 13:49:52.444532 1157708 certs.go:480] ignoring /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136_empty.pem, impossibly tiny 0 bytes
	I0318 13:49:52.444548 1157708 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 13:49:52.444585 1157708 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem (1078 bytes)
	I0318 13:49:52.444633 1157708 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem (1123 bytes)
	I0318 13:49:52.444672 1157708 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem (1679 bytes)
	I0318 13:49:52.444729 1157708 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:49:52.445363 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 13:49:52.506720 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 13:49:52.550057 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 13:49:52.586845 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 13:49:52.627933 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0318 13:49:52.681479 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 13:49:52.722052 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 13:49:52.755021 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 13:49:52.782181 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 13:49:52.808269 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem --> /usr/share/ca-certificates/1114136.pem (1338 bytes)
	I0318 13:49:52.835041 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /usr/share/ca-certificates/11141362.pem (1708 bytes)
	I0318 13:49:52.863776 1157708 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 13:49:52.883579 1157708 ssh_runner.go:195] Run: openssl version
	I0318 13:49:52.889846 1157708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 13:49:52.902288 1157708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:49:52.908241 1157708 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:49:52.908302 1157708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:49:52.915392 1157708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 13:49:52.928374 1157708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1114136.pem && ln -fs /usr/share/ca-certificates/1114136.pem /etc/ssl/certs/1114136.pem"
	I0318 13:49:52.941444 1157708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1114136.pem
	I0318 13:49:52.946463 1157708 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:26 /usr/share/ca-certificates/1114136.pem
	I0318 13:49:52.946514 1157708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1114136.pem
	I0318 13:49:52.953447 1157708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1114136.pem /etc/ssl/certs/51391683.0"
	I0318 13:49:52.966231 1157708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11141362.pem && ln -fs /usr/share/ca-certificates/11141362.pem /etc/ssl/certs/11141362.pem"
	I0318 13:49:52.977986 1157708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11141362.pem
	I0318 13:49:52.982748 1157708 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:26 /usr/share/ca-certificates/11141362.pem
	I0318 13:49:52.982809 1157708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11141362.pem
	I0318 13:49:52.988715 1157708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11141362.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 13:49:51.626774 1157416 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 13:49:51.642685 1157416 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 13:49:51.669902 1157416 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 13:49:51.759474 1157416 system_pods.go:59] 8 kube-system pods found
	I0318 13:49:51.759519 1157416 system_pods.go:61] "coredns-76f75df574-kxzfm" [d0aad76d-f135-4d4a-a2f5-117707b4b2f4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 13:49:51.759530 1157416 system_pods.go:61] "etcd-no-preload-537236" [d02ad01c-1b16-4b97-be18-237b1cbfe3aa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 13:49:51.759539 1157416 system_pods.go:61] "kube-apiserver-no-preload-537236" [00b05050-229b-47f4-9af2-12be1711200a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 13:49:51.759548 1157416 system_pods.go:61] "kube-controller-manager-no-preload-537236" [3e7b86df-4111-4bd9-8925-a22cf12e10ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 13:49:51.759552 1157416 system_pods.go:61] "kube-proxy-5dspp" [adee19a0-eeb6-438f-a55d-30f1e1d87ef6] Running
	I0318 13:49:51.759557 1157416 system_pods.go:61] "kube-scheduler-no-preload-537236" [17628d51-80f5-4985-8ddb-151cab8f8c5d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 13:49:51.759562 1157416 system_pods.go:61] "metrics-server-57f55c9bc5-hhh5m" [282de489-beee-47a9-bd29-5da43cf70146] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:49:51.759565 1157416 system_pods.go:61] "storage-provisioner" [97d3de68-0863-4bba-9cb1-2ce98d791935] Running
	I0318 13:49:51.759578 1157416 system_pods.go:74] duration metric: took 89.654007ms to wait for pod list to return data ...
	I0318 13:49:51.759591 1157416 node_conditions.go:102] verifying NodePressure condition ...
	I0318 13:49:51.764164 1157416 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:49:51.764191 1157416 node_conditions.go:123] node cpu capacity is 2
	I0318 13:49:51.764204 1157416 node_conditions.go:105] duration metric: took 4.607295ms to run NodePressure ...
	I0318 13:49:51.764227 1157416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:52.645812 1157416 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 13:49:52.653573 1157416 kubeadm.go:733] kubelet initialised
	I0318 13:49:52.653602 1157416 kubeadm.go:734] duration metric: took 7.75557ms waiting for restarted kubelet to initialise ...
	I0318 13:49:52.653614 1157416 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:49:52.662179 1157416 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-kxzfm" in "kube-system" namespace to be "Ready" ...
	I0318 13:49:54.678656 1157416 pod_ready.go:102] pod "coredns-76f75df574-kxzfm" in "kube-system" namespace has status "Ready":"False"
	I0318 13:49:53.656476 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:53.656913 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:53.656943 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:53.656870 1158648 retry.go:31] will retry after 2.341702781s: waiting for machine to come up
	I0318 13:49:56.001662 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:56.002163 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:56.002188 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:56.002106 1158648 retry.go:31] will retry after 2.885262489s: waiting for machine to come up
	I0318 13:49:53.000141 1157708 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:49:53.005021 1157708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 13:49:53.011156 1157708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 13:49:53.018329 1157708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 13:49:53.025687 1157708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 13:49:53.032199 1157708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 13:49:53.039048 1157708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 13:49:53.045789 1157708 kubeadm.go:391] StartCluster: {Name:old-k8s-version-909137 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-909137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.135 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:49:53.045882 1157708 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 13:49:53.045931 1157708 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:49:53.085682 1157708 cri.go:89] found id: ""
	I0318 13:49:53.085788 1157708 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 13:49:53.098063 1157708 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 13:49:53.098091 1157708 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 13:49:53.098098 1157708 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 13:49:53.098153 1157708 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 13:49:53.109692 1157708 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:49:53.110853 1157708 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-909137" does not appear in /home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 13:49:53.111862 1157708 kubeconfig.go:62] /home/jenkins/minikube-integration/18429-1106816/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-909137" cluster setting kubeconfig missing "old-k8s-version-909137" context setting]
	I0318 13:49:53.113334 1157708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/kubeconfig: {Name:mk9c139f2702214315ee08dd7c5d02f739047458 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:49:53.115135 1157708 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 13:49:53.125910 1157708 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.135
	I0318 13:49:53.125949 1157708 kubeadm.go:1154] stopping kube-system containers ...
	I0318 13:49:53.125965 1157708 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 13:49:53.126029 1157708 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:49:53.172181 1157708 cri.go:89] found id: ""
	I0318 13:49:53.172268 1157708 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 13:49:53.189585 1157708 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:49:53.200744 1157708 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:49:53.200768 1157708 kubeadm.go:156] found existing configuration files:
	
	I0318 13:49:53.200811 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:49:53.211176 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:49:53.211250 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:49:53.221744 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:49:53.231342 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:49:53.231404 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:49:53.242162 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:49:53.252408 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:49:53.252480 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:49:53.262690 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:49:53.272829 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:49:53.272903 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:49:53.283287 1157708 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:49:53.294124 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:53.437482 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:54.297415 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:54.588919 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:54.758204 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:54.863030 1157708 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:49:54.863140 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:55.363708 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:55.863301 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:56.364064 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:56.863896 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:57.363240 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:57.863621 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:57.212652 1157416 pod_ready.go:102] pod "coredns-76f75df574-kxzfm" in "kube-system" namespace has status "Ready":"False"
	I0318 13:49:57.669562 1157416 pod_ready.go:92] pod "coredns-76f75df574-kxzfm" in "kube-system" namespace has status "Ready":"True"
	I0318 13:49:57.669584 1157416 pod_ready.go:81] duration metric: took 5.007366512s for pod "coredns-76f75df574-kxzfm" in "kube-system" namespace to be "Ready" ...
	I0318 13:49:57.669597 1157416 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:49:58.176528 1157416 pod_ready.go:92] pod "etcd-no-preload-537236" in "kube-system" namespace has status "Ready":"True"
	I0318 13:49:58.176557 1157416 pod_ready.go:81] duration metric: took 506.95201ms for pod "etcd-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:49:58.176570 1157416 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:49:58.888400 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:58.888706 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:58.888742 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:58.888681 1158648 retry.go:31] will retry after 4.094701536s: waiting for machine to come up
	I0318 13:49:58.363294 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:58.864051 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:59.363586 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:59.863802 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:00.363862 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:00.864277 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:01.363381 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:01.864307 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:02.363278 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:02.863315 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:04.309987 1157263 start.go:364] duration metric: took 57.988518292s to acquireMachinesLock for "embed-certs-173036"
	I0318 13:50:04.310046 1157263 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:50:04.310062 1157263 fix.go:54] fixHost starting: 
	I0318 13:50:04.310469 1157263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:50:04.310506 1157263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:50:04.330585 1157263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41957
	I0318 13:50:04.331049 1157263 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:50:04.331648 1157263 main.go:141] libmachine: Using API Version  1
	I0318 13:50:04.331684 1157263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:50:04.332066 1157263 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:50:04.332316 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:50:04.332513 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetState
	I0318 13:50:04.334091 1157263 fix.go:112] recreateIfNeeded on embed-certs-173036: state=Stopped err=<nil>
	I0318 13:50:04.334117 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	W0318 13:50:04.334299 1157263 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:50:04.336146 1157263 out.go:177] * Restarting existing kvm2 VM for "embed-certs-173036" ...
	I0318 13:50:00.184168 1157416 pod_ready.go:102] pod "kube-apiserver-no-preload-537236" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:01.183846 1157416 pod_ready.go:92] pod "kube-apiserver-no-preload-537236" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:01.183872 1157416 pod_ready.go:81] duration metric: took 3.007292631s for pod "kube-apiserver-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:01.183884 1157416 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:03.206725 1157416 pod_ready.go:102] pod "kube-controller-manager-no-preload-537236" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:04.691357 1157416 pod_ready.go:92] pod "kube-controller-manager-no-preload-537236" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:04.691391 1157416 pod_ready.go:81] duration metric: took 3.507497259s for pod "kube-controller-manager-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:04.691410 1157416 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5dspp" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:04.696593 1157416 pod_ready.go:92] pod "kube-proxy-5dspp" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:04.696618 1157416 pod_ready.go:81] duration metric: took 5.198628ms for pod "kube-proxy-5dspp" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:04.696627 1157416 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:04.700977 1157416 pod_ready.go:92] pod "kube-scheduler-no-preload-537236" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:04.700995 1157416 pod_ready.go:81] duration metric: took 4.36095ms for pod "kube-scheduler-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:04.701006 1157416 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:02.985340 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:02.985804 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has current primary IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:02.985818 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Found IP for machine: 192.168.61.3
	I0318 13:50:02.985828 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Reserving static IP address...
	I0318 13:50:02.986233 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-569210", mac: "52:54:00:4d:48:26", ip: "192.168.61.3"} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:02.986292 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | skip adding static IP to network mk-default-k8s-diff-port-569210 - found existing host DHCP lease matching {name: "default-k8s-diff-port-569210", mac: "52:54:00:4d:48:26", ip: "192.168.61.3"}
	I0318 13:50:02.986307 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Reserved static IP address: 192.168.61.3
	I0318 13:50:02.986321 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for SSH to be available...
	I0318 13:50:02.986337 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | Getting to WaitForSSH function...
	I0318 13:50:02.988609 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:02.988962 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:02.988995 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:02.989209 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | Using SSH client type: external
	I0318 13:50:02.989235 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | Using SSH private key: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa (-rw-------)
	I0318 13:50:02.989272 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 13:50:02.989293 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | About to run SSH command:
	I0318 13:50:02.989306 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | exit 0
	I0318 13:50:03.112557 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | SSH cmd err, output: <nil>: 
	I0318 13:50:03.112907 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetConfigRaw
	I0318 13:50:03.113605 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetIP
	I0318 13:50:03.116140 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.116569 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:03.116599 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.116858 1157887 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/config.json ...
	I0318 13:50:03.117065 1157887 machine.go:94] provisionDockerMachine start ...
	I0318 13:50:03.117091 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:50:03.117296 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:03.119506 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.119861 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:03.119891 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.120015 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:03.120212 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.120429 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.120608 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:03.120798 1157887 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:03.120995 1157887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0318 13:50:03.121010 1157887 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 13:50:03.221645 1157887 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 13:50:03.221693 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetMachineName
	I0318 13:50:03.221990 1157887 buildroot.go:166] provisioning hostname "default-k8s-diff-port-569210"
	I0318 13:50:03.222027 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetMachineName
	I0318 13:50:03.222257 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:03.225134 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.225543 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:03.225568 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.225714 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:03.226022 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.226225 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.226400 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:03.226595 1157887 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:03.226870 1157887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0318 13:50:03.226893 1157887 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-569210 && echo "default-k8s-diff-port-569210" | sudo tee /etc/hostname
	I0318 13:50:03.350362 1157887 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-569210
	
	I0318 13:50:03.350398 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:03.353307 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.353700 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:03.353737 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.353911 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:03.354111 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.354283 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.354413 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:03.354600 1157887 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:03.354805 1157887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0318 13:50:03.354824 1157887 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-569210' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-569210/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-569210' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 13:50:03.471084 1157887 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:50:03.471120 1157887 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18429-1106816/.minikube CaCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18429-1106816/.minikube}
	I0318 13:50:03.471159 1157887 buildroot.go:174] setting up certificates
	I0318 13:50:03.471229 1157887 provision.go:84] configureAuth start
	I0318 13:50:03.471247 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetMachineName
	I0318 13:50:03.471576 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetIP
	I0318 13:50:03.474528 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.474918 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:03.474957 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.475210 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:03.477624 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.477910 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:03.477936 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.478118 1157887 provision.go:143] copyHostCerts
	I0318 13:50:03.478196 1157887 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem, removing ...
	I0318 13:50:03.478213 1157887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 13:50:03.478281 1157887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem (1078 bytes)
	I0318 13:50:03.478424 1157887 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem, removing ...
	I0318 13:50:03.478437 1157887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 13:50:03.478466 1157887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem (1123 bytes)
	I0318 13:50:03.478537 1157887 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem, removing ...
	I0318 13:50:03.478548 1157887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 13:50:03.478571 1157887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem (1679 bytes)
	I0318 13:50:03.478640 1157887 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-569210 san=[127.0.0.1 192.168.61.3 default-k8s-diff-port-569210 localhost minikube]
	I0318 13:50:03.600956 1157887 provision.go:177] copyRemoteCerts
	I0318 13:50:03.601028 1157887 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 13:50:03.601058 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:03.603986 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.604437 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:03.604468 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.604659 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:03.604922 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.605086 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:03.605260 1157887 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa Username:docker}
	I0318 13:50:03.688256 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0318 13:50:03.716748 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 13:50:03.744848 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 13:50:03.771601 1157887 provision.go:87] duration metric: took 300.358039ms to configureAuth
	I0318 13:50:03.771631 1157887 buildroot.go:189] setting minikube options for container-runtime
	I0318 13:50:03.771893 1157887 config.go:182] Loaded profile config "default-k8s-diff-port-569210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:50:03.771992 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:03.774410 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.774725 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:03.774760 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.774926 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:03.775099 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.775292 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.775456 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:03.775642 1157887 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:03.775872 1157887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0318 13:50:03.775901 1157887 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 13:50:04.068202 1157887 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 13:50:04.068242 1157887 machine.go:97] duration metric: took 951.160051ms to provisionDockerMachine
	I0318 13:50:04.068259 1157887 start.go:293] postStartSetup for "default-k8s-diff-port-569210" (driver="kvm2")
	I0318 13:50:04.068277 1157887 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 13:50:04.068303 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:50:04.068677 1157887 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 13:50:04.068712 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:04.071619 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.071974 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:04.072002 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.072148 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:04.072354 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:04.072519 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:04.072639 1157887 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa Username:docker}
	I0318 13:50:04.157469 1157887 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 13:50:04.162629 1157887 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 13:50:04.162655 1157887 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/addons for local assets ...
	I0318 13:50:04.162719 1157887 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/files for local assets ...
	I0318 13:50:04.162810 1157887 filesync.go:149] local asset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> 11141362.pem in /etc/ssl/certs
	I0318 13:50:04.162911 1157887 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 13:50:04.173898 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:50:04.204771 1157887 start.go:296] duration metric: took 136.495479ms for postStartSetup
	I0318 13:50:04.204814 1157887 fix.go:56] duration metric: took 20.554947186s for fixHost
	I0318 13:50:04.204839 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:04.207619 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.207923 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:04.207951 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.208088 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:04.208296 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:04.208509 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:04.208657 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:04.208801 1157887 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:04.208975 1157887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0318 13:50:04.208988 1157887 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 13:50:04.309828 1157887 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710769804.283357411
	
	I0318 13:50:04.309861 1157887 fix.go:216] guest clock: 1710769804.283357411
	I0318 13:50:04.309871 1157887 fix.go:229] Guest: 2024-03-18 13:50:04.283357411 +0000 UTC Remote: 2024-03-18 13:50:04.204818975 +0000 UTC m=+262.583280441 (delta=78.538436ms)
	I0318 13:50:04.309898 1157887 fix.go:200] guest clock delta is within tolerance: 78.538436ms
	I0318 13:50:04.309904 1157887 start.go:83] releasing machines lock for "default-k8s-diff-port-569210", held for 20.660081187s
	I0318 13:50:04.309933 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:50:04.310247 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetIP
	I0318 13:50:04.313302 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.313747 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:04.313777 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.313956 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:50:04.314591 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:50:04.314792 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:50:04.314878 1157887 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 13:50:04.314934 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:04.315014 1157887 ssh_runner.go:195] Run: cat /version.json
	I0318 13:50:04.315059 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:04.318021 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.318056 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.318438 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:04.318474 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.318500 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:04.318518 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.318661 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:04.318763 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:04.318879 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:04.318962 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:04.319052 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:04.319110 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:04.319229 1157887 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa Username:docker}
	I0318 13:50:04.319286 1157887 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa Username:docker}
	I0318 13:50:04.426710 1157887 ssh_runner.go:195] Run: systemctl --version
	I0318 13:50:04.433482 1157887 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 13:50:04.590331 1157887 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 13:50:04.598896 1157887 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 13:50:04.598974 1157887 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 13:50:04.617060 1157887 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 13:50:04.617095 1157887 start.go:494] detecting cgroup driver to use...
	I0318 13:50:04.617190 1157887 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 13:50:04.633902 1157887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:50:04.648705 1157887 docker.go:217] disabling cri-docker service (if available) ...
	I0318 13:50:04.648759 1157887 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 13:50:04.665516 1157887 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 13:50:04.681326 1157887 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 13:50:04.798310 1157887 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 13:50:04.972066 1157887 docker.go:233] disabling docker service ...
	I0318 13:50:04.972133 1157887 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 13:50:04.995498 1157887 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 13:50:05.014901 1157887 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 13:50:05.158158 1157887 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 13:50:05.309791 1157887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 13:50:05.324965 1157887 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:50:05.346489 1157887 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 13:50:05.346595 1157887 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:50:05.358753 1157887 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 13:50:05.358823 1157887 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:50:05.374416 1157887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:50:05.394228 1157887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:50:05.406975 1157887 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 13:50:05.420201 1157887 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 13:50:05.432405 1157887 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 13:50:05.432479 1157887 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 13:50:05.449386 1157887 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 13:50:05.461081 1157887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:50:05.607102 1157887 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 13:50:05.776152 1157887 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 13:50:05.776267 1157887 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 13:50:05.782168 1157887 start.go:562] Will wait 60s for crictl version
	I0318 13:50:05.782247 1157887 ssh_runner.go:195] Run: which crictl
	I0318 13:50:05.787932 1157887 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 13:50:05.831304 1157887 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 13:50:05.831399 1157887 ssh_runner.go:195] Run: crio --version
	I0318 13:50:05.865410 1157887 ssh_runner.go:195] Run: crio --version
	I0318 13:50:05.908406 1157887 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 13:50:05.909651 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetIP
	I0318 13:50:05.912855 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:05.913213 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:05.913256 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:05.913470 1157887 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0318 13:50:05.918362 1157887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:50:05.933755 1157887 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-569210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-569210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.3 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 13:50:05.933926 1157887 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 13:50:05.934002 1157887 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:50:05.978920 1157887 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0318 13:50:05.978998 1157887 ssh_runner.go:195] Run: which lz4
	I0318 13:50:05.983751 1157887 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0318 13:50:05.988862 1157887 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 13:50:05.988895 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0318 13:50:03.363591 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:03.864049 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:04.363310 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:04.863306 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:05.363706 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:05.863618 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:06.364183 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:06.863776 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:07.363832 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:07.863261 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:04.337631 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .Start
	I0318 13:50:04.337838 1157263 main.go:141] libmachine: (embed-certs-173036) Ensuring networks are active...
	I0318 13:50:04.338615 1157263 main.go:141] libmachine: (embed-certs-173036) Ensuring network default is active
	I0318 13:50:04.338978 1157263 main.go:141] libmachine: (embed-certs-173036) Ensuring network mk-embed-certs-173036 is active
	I0318 13:50:04.339444 1157263 main.go:141] libmachine: (embed-certs-173036) Getting domain xml...
	I0318 13:50:04.340295 1157263 main.go:141] libmachine: (embed-certs-173036) Creating domain...
	I0318 13:50:05.616437 1157263 main.go:141] libmachine: (embed-certs-173036) Waiting to get IP...
	I0318 13:50:05.617646 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:05.618096 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:05.618168 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:05.618075 1158806 retry.go:31] will retry after 234.69885ms: waiting for machine to come up
	I0318 13:50:05.854749 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:05.855365 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:05.855401 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:05.855310 1158806 retry.go:31] will retry after 324.015594ms: waiting for machine to come up
	I0318 13:50:06.181178 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:06.182089 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:06.182123 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:06.182038 1158806 retry.go:31] will retry after 456.172304ms: waiting for machine to come up
	I0318 13:50:06.639827 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:06.640288 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:06.640349 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:06.640244 1158806 retry.go:31] will retry after 561.082549ms: waiting for machine to come up
	I0318 13:50:07.203208 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:07.203798 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:07.203825 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:07.203696 1158806 retry.go:31] will retry after 633.905437ms: waiting for machine to come up
	I0318 13:50:07.839205 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:07.839760 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:07.839792 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:07.839698 1158806 retry.go:31] will retry after 629.254629ms: waiting for machine to come up
	I0318 13:50:08.470625 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:08.471073 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:08.471105 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:08.471021 1158806 retry.go:31] will retry after 771.526268ms: waiting for machine to come up
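
The retry.go lines above show libmachine polling for the VM's DHCP lease with steadily growing, slightly jittered delays before each new lookup. A rough Go sketch of that wait-for-IP loop; the waitForIP stub and the durations are illustrative, not minikube's actual backoff schedule.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    var errNoIP = errors.New("machine has no IP yet")

    // waitForIP stands in for the real "does the domain have a DHCP lease" check.
    func waitForIP(attempt int) error {
        if attempt < 5 {
            return errNoIP
        }
        return nil
    }

    func main() {
        backoff := 200 * time.Millisecond
        for attempt := 0; ; attempt++ {
            if err := waitForIP(attempt); err == nil {
                fmt.Println("machine is up")
                return
            }
            wait := backoff + time.Duration(rand.Int63n(int64(backoff)/2))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
            time.Sleep(wait)
            backoff = backoff * 3 / 2 // grow the delay, roughly like the log's increasing intervals
        }
    }
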
	I0318 13:50:06.709604 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:09.208197 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:08.056220 1157887 crio.go:444] duration metric: took 2.072501191s to copy over tarball
	I0318 13:50:08.056361 1157887 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 13:50:10.763501 1157887 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.707101479s)
	I0318 13:50:10.763560 1157887 crio.go:451] duration metric: took 2.707303654s to extract the tarball
	I0318 13:50:10.763570 1157887 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 13:50:10.808643 1157887 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:50:10.860178 1157887 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 13:50:10.860218 1157887 cache_images.go:84] Images are preloaded, skipping loading
	I0318 13:50:10.860229 1157887 kubeadm.go:928] updating node { 192.168.61.3 8444 v1.28.4 crio true true} ...
	I0318 13:50:10.860381 1157887 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-569210 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-569210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 13:50:10.860455 1157887 ssh_runner.go:195] Run: crio config
	I0318 13:50:10.918077 1157887 cni.go:84] Creating CNI manager for ""
	I0318 13:50:10.918109 1157887 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:50:10.918124 1157887 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 13:50:10.918154 1157887 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.3 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-569210 NodeName:default-k8s-diff-port-569210 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 13:50:10.918362 1157887 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.3
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-569210"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
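
The kubeadm, kubelet and kube-proxy documents printed above are plain YAML, so their fields can be sanity-checked outside the VM. A small sketch that pulls a few of the KubeletConfiguration fields out with gopkg.in/yaml.v3 (a third-party dependency assumed here); the struct covers only the fields shown in the log and is not the upstream kubelet.config.k8s.io/v1beta1 API type.

    package main

    import (
        "fmt"

        "gopkg.in/yaml.v3"
    )

    // kubeletConfig mirrors just a handful of the KubeletConfiguration fields
    // printed above; it is illustrative only.
    type kubeletConfig struct {
        Kind                     string `yaml:"kind"`
        CgroupDriver             string `yaml:"cgroupDriver"`
        ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
        ClusterDomain            string `yaml:"clusterDomain"`
        FailSwapOn               bool   `yaml:"failSwapOn"`
        StaticPodPath            string `yaml:"staticPodPath"`
    }

    func main() {
        // A trimmed-down copy of the document from the log above.
        doc := "kind: KubeletConfiguration\ncgroupDriver: cgroupfs\ncontainerRuntimeEndpoint: unix:///var/run/crio/crio.sock\nclusterDomain: cluster.local\nfailSwapOn: false\nstaticPodPath: /etc/kubernetes/manifests\n"
        var cfg kubeletConfig
        if err := yaml.Unmarshal([]byte(doc), &cfg); err != nil {
            panic(err)
        }
        fmt.Printf("%s: driver=%s runtime=%s\n", cfg.Kind, cfg.CgroupDriver, cfg.ContainerRuntimeEndpoint)
    }
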
	I0318 13:50:10.918457 1157887 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 13:50:10.930573 1157887 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 13:50:10.930639 1157887 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 13:50:10.941181 1157887 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I0318 13:50:10.960048 1157887 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 13:50:10.980367 1157887 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0318 13:50:11.001607 1157887 ssh_runner.go:195] Run: grep 192.168.61.3	control-plane.minikube.internal$ /etc/hosts
	I0318 13:50:11.006363 1157887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:50:11.020871 1157887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:50:11.164152 1157887 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:50:11.185025 1157887 certs.go:68] Setting up /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210 for IP: 192.168.61.3
	I0318 13:50:11.185060 1157887 certs.go:194] generating shared ca certs ...
	I0318 13:50:11.185096 1157887 certs.go:226] acquiring lock for ca certs: {Name:mka24dbce78b8549c3bcfe7842863cce2dcda207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:50:11.185277 1157887 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key
	I0318 13:50:11.185342 1157887 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key
	I0318 13:50:11.185356 1157887 certs.go:256] generating profile certs ...
	I0318 13:50:11.185464 1157887 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/client.key
	I0318 13:50:11.185541 1157887 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/apiserver.key.e15332a5
	I0318 13:50:11.185590 1157887 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/proxy-client.key
	I0318 13:50:11.185757 1157887 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem (1338 bytes)
	W0318 13:50:11.185799 1157887 certs.go:480] ignoring /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136_empty.pem, impossibly tiny 0 bytes
	I0318 13:50:11.185812 1157887 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 13:50:11.185841 1157887 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem (1078 bytes)
	I0318 13:50:11.185899 1157887 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem (1123 bytes)
	I0318 13:50:11.185945 1157887 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem (1679 bytes)
	I0318 13:50:11.185999 1157887 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:50:11.186853 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 13:50:11.221967 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 13:50:11.250180 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 13:50:11.287449 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 13:50:11.323521 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0318 13:50:11.360286 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 13:50:11.396947 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 13:50:11.426116 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 13:50:11.455183 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /usr/share/ca-certificates/11141362.pem (1708 bytes)
	I0318 13:50:11.483479 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 13:50:11.512975 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem --> /usr/share/ca-certificates/1114136.pem (1338 bytes)
	I0318 13:50:11.548393 1157887 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 13:50:11.569155 1157887 ssh_runner.go:195] Run: openssl version
	I0318 13:50:11.576084 1157887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1114136.pem && ln -fs /usr/share/ca-certificates/1114136.pem /etc/ssl/certs/1114136.pem"
	I0318 13:50:11.589110 1157887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1114136.pem
	I0318 13:50:11.594640 1157887 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:26 /usr/share/ca-certificates/1114136.pem
	I0318 13:50:11.594736 1157887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1114136.pem
	I0318 13:50:11.601473 1157887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1114136.pem /etc/ssl/certs/51391683.0"
	I0318 13:50:11.615874 1157887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11141362.pem && ln -fs /usr/share/ca-certificates/11141362.pem /etc/ssl/certs/11141362.pem"
	I0318 13:50:11.630380 1157887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11141362.pem
	I0318 13:50:11.635808 1157887 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:26 /usr/share/ca-certificates/11141362.pem
	I0318 13:50:11.635886 1157887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11141362.pem
	I0318 13:50:11.644465 1157887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11141362.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 13:50:11.661509 1157887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 13:50:08.364243 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:08.863539 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:09.364037 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:09.863621 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:10.363425 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:10.863422 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:11.363353 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:11.863485 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:12.363548 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:12.864070 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:09.243731 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:09.244146 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:09.244180 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:09.244104 1158806 retry.go:31] will retry after 1.160252016s: waiting for machine to come up
	I0318 13:50:10.405805 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:10.406270 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:10.406310 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:10.406201 1158806 retry.go:31] will retry after 1.625913099s: waiting for machine to come up
	I0318 13:50:12.033202 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:12.033674 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:12.033712 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:12.033589 1158806 retry.go:31] will retry after 1.835793865s: waiting for machine to come up
	I0318 13:50:11.211241 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:13.710211 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:11.675340 1157887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:50:11.938009 1157887 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:50:11.938089 1157887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:50:11.944766 1157887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 13:50:11.957959 1157887 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:50:11.963524 1157887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 13:50:11.971678 1157887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 13:50:11.978601 1157887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 13:50:11.985403 1157887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 13:50:11.992159 1157887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 13:50:11.998620 1157887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
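
Each `openssl x509 -checkend 86400` call above asks whether a certificate remains valid for at least another 24 hours. The equivalent check using only Go's standard library; the certificate path comes from the command line as an illustrative stand-in for the paths in the log.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        if len(os.Args) != 2 {
            fmt.Fprintln(os.Stderr, "usage: checkend <cert.pem>")
            os.Exit(2)
        }
        data, err := os.ReadFile(os.Args[1])
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Fprintln(os.Stderr, "no PEM block found")
            os.Exit(1)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // Same question openssl's -checkend 86400 answers: does the cert
        // survive another 24 hours?
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("Certificate will expire")
            os.Exit(1)
        }
        fmt.Println("Certificate will not expire")
    }
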
	I0318 13:50:12.005209 1157887 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-569210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-569210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.3 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:50:12.005300 1157887 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 13:50:12.005350 1157887 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:50:12.074518 1157887 cri.go:89] found id: ""
	I0318 13:50:12.074603 1157887 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 13:50:12.099031 1157887 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 13:50:12.099062 1157887 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 13:50:12.099070 1157887 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 13:50:12.099147 1157887 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 13:50:12.111133 1157887 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:50:12.112779 1157887 kubeconfig.go:125] found "default-k8s-diff-port-569210" server: "https://192.168.61.3:8444"
	I0318 13:50:12.116521 1157887 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 13:50:12.134902 1157887 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.3
	I0318 13:50:12.134964 1157887 kubeadm.go:1154] stopping kube-system containers ...
	I0318 13:50:12.135005 1157887 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 13:50:12.135086 1157887 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:50:12.190100 1157887 cri.go:89] found id: ""
	I0318 13:50:12.190182 1157887 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 13:50:12.211556 1157887 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:50:12.223095 1157887 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:50:12.223120 1157887 kubeadm.go:156] found existing configuration files:
	
	I0318 13:50:12.223173 1157887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0318 13:50:12.235709 1157887 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:50:12.235780 1157887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:50:12.248896 1157887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0318 13:50:12.260212 1157887 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:50:12.260285 1157887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:50:12.271424 1157887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0318 13:50:12.283083 1157887 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:50:12.283143 1157887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:50:12.294877 1157887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0318 13:50:12.305629 1157887 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:50:12.305692 1157887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:50:12.317395 1157887 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:50:12.328943 1157887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:12.471901 1157887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:13.400723 1157887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:13.601149 1157887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:13.677768 1157887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:13.796413 1157887 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:50:13.796558 1157887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:14.297639 1157887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:14.797236 1157887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:14.885767 1157887 api_server.go:72] duration metric: took 1.089353166s to wait for apiserver process to appear ...
	I0318 13:50:14.885801 1157887 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:50:14.885827 1157887 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0318 13:50:14.886464 1157887 api_server.go:269] stopped: https://192.168.61.3:8444/healthz: Get "https://192.168.61.3:8444/healthz": dial tcp 192.168.61.3:8444: connect: connection refused
	I0318 13:50:15.386913 1157887 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0318 13:50:13.364111 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:13.863871 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:14.363958 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:14.863570 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:15.364185 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:15.863974 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:16.364010 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:16.863484 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:17.363832 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:17.864149 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:13.871003 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:13.871443 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:13.871475 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:13.871398 1158806 retry.go:31] will retry after 2.53403994s: waiting for machine to come up
	I0318 13:50:16.407271 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:16.407728 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:16.407775 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:16.407708 1158806 retry.go:31] will retry after 2.371916928s: waiting for machine to come up
	I0318 13:50:18.781468 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:18.781866 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:18.781898 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:18.781809 1158806 retry.go:31] will retry after 3.250042198s: waiting for machine to come up
	I0318 13:50:17.204788 1157887 api_server.go:279] https://192.168.61.3:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 13:50:17.204828 1157887 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 13:50:17.204848 1157887 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0318 13:50:17.235957 1157887 api_server.go:279] https://192.168.61.3:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 13:50:17.236000 1157887 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 13:50:17.386349 1157887 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0318 13:50:17.393185 1157887 api_server.go:279] https://192.168.61.3:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:50:17.393220 1157887 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:50:17.886583 1157887 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0318 13:50:17.892087 1157887 api_server.go:279] https://192.168.61.3:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:50:17.892122 1157887 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:50:18.386820 1157887 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0318 13:50:18.406609 1157887 api_server.go:279] https://192.168.61.3:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:50:18.406658 1157887 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:50:18.886458 1157887 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0318 13:50:18.896097 1157887 api_server.go:279] https://192.168.61.3:8444/healthz returned 200:
	ok
	I0318 13:50:18.905565 1157887 api_server.go:141] control plane version: v1.28.4
	I0318 13:50:18.905603 1157887 api_server.go:131] duration metric: took 4.019792975s to wait for apiserver health ...
	I0318 13:50:18.905615 1157887 cni.go:84] Creating CNI manager for ""
	I0318 13:50:18.905624 1157887 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:50:18.907258 1157887 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
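
The api_server.go lines above poll https://192.168.61.3:8444/healthz, first getting connection refused, then 403 for the anonymous user, then 500 while post-start hooks finish, and finally 200 ok. A bare-bones version of that probe loop; it skips TLS verification because, like the anonymous probe in the log, it presents no client certificate, and the URL and timeouts are taken from the log rather than from minikube's code.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // The apiserver certificate is self-signed for this cluster, so
            // verification is skipped in this illustration.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.61.3:8444/healthz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
                fmt.Println("healthz returned", resp.StatusCode)
            } else {
                fmt.Println("healthz not reachable yet:", err)
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for healthz")
    }
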
	I0318 13:50:15.711910 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:18.209648 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:18.909133 1157887 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 13:50:18.944457 1157887 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 13:50:18.973831 1157887 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 13:50:18.984400 1157887 system_pods.go:59] 8 kube-system pods found
	I0318 13:50:18.984436 1157887 system_pods.go:61] "coredns-5dd5756b68-hwsz5" [0a91f20c-3d3b-415c-b709-7898c606d830] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 13:50:18.984447 1157887 system_pods.go:61] "etcd-default-k8s-diff-port-569210" [64925324-9666-49ab-b849-ad9b7ce54891] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 13:50:18.984456 1157887 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-569210" [8409a63f-fbac-4bf9-b54b-5ac267a58206] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 13:50:18.984465 1157887 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-569210" [a2d7b983-c4aa-4c32-9391-babe90b0f102] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 13:50:18.984470 1157887 system_pods.go:61] "kube-proxy-v59ks" [39a4e73c-319d-4093-8781-ca7a1a48e005] Running
	I0318 13:50:18.984477 1157887 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-569210" [f24baa89-e33d-42ca-8f83-17c76a4cedcb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 13:50:18.984488 1157887 system_pods.go:61] "metrics-server-57f55c9bc5-2sb4m" [f3e533a7-9666-4b79-b9a9-26222422f242] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:50:18.984496 1157887 system_pods.go:61] "storage-provisioner" [864d0bb2-cbca-41ae-b9ec-89aced62dd08] Running
	I0318 13:50:18.984505 1157887 system_pods.go:74] duration metric: took 10.646849ms to wait for pod list to return data ...
	I0318 13:50:18.984519 1157887 node_conditions.go:102] verifying NodePressure condition ...
	I0318 13:50:18.989173 1157887 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:50:18.989201 1157887 node_conditions.go:123] node cpu capacity is 2
	I0318 13:50:18.989213 1157887 node_conditions.go:105] duration metric: took 4.685756ms to run NodePressure ...
	I0318 13:50:18.989231 1157887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:19.229166 1157887 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 13:50:19.237757 1157887 kubeadm.go:733] kubelet initialised
	I0318 13:50:19.237787 1157887 kubeadm.go:734] duration metric: took 8.591388ms waiting for restarted kubelet to initialise ...
	I0318 13:50:19.237797 1157887 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:50:19.243530 1157887 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-hwsz5" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:19.253925 1157887 pod_ready.go:97] node "default-k8s-diff-port-569210" hosting pod "coredns-5dd5756b68-hwsz5" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-569210" has status "Ready":"False"
	I0318 13:50:19.253957 1157887 pod_ready.go:81] duration metric: took 10.403116ms for pod "coredns-5dd5756b68-hwsz5" in "kube-system" namespace to be "Ready" ...
	E0318 13:50:19.253969 1157887 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-569210" hosting pod "coredns-5dd5756b68-hwsz5" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-569210" has status "Ready":"False"
	I0318 13:50:19.253978 1157887 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:19.265167 1157887 pod_ready.go:97] node "default-k8s-diff-port-569210" hosting pod "etcd-default-k8s-diff-port-569210" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-569210" has status "Ready":"False"
	I0318 13:50:19.265189 1157887 pod_ready.go:81] duration metric: took 11.202545ms for pod "etcd-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	E0318 13:50:19.265200 1157887 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-569210" hosting pod "etcd-default-k8s-diff-port-569210" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-569210" has status "Ready":"False"
	I0318 13:50:19.265206 1157887 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:19.273558 1157887 pod_ready.go:97] node "default-k8s-diff-port-569210" hosting pod "kube-apiserver-default-k8s-diff-port-569210" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-569210" has status "Ready":"False"
	I0318 13:50:19.273589 1157887 pod_ready.go:81] duration metric: took 8.37478ms for pod "kube-apiserver-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	E0318 13:50:19.273603 1157887 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-569210" hosting pod "kube-apiserver-default-k8s-diff-port-569210" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-569210" has status "Ready":"False"
	I0318 13:50:19.273615 1157887 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:21.280970 1157887 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"False"
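The pod_ready lines above poll each system-critical pod until its Ready condition is met, skipping pods whose node is itself not yet Ready. As a rough illustration only (not part of the test run; the context and pod names are copied from the log and are otherwise assumptions), the same wait could be expressed with kubectl:

    # Wait up to 4 minutes for one system pod to report the Ready condition.
    kubectl --context default-k8s-diff-port-569210 -n kube-system \
      wait --for=condition=Ready pod/coredns-5dd5756b68-hwsz5 --timeout=4m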
	I0318 13:50:18.363366 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:18.863782 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:19.363987 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:19.863437 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:20.364050 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:20.863961 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:21.364126 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:21.863264 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:22.363519 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:22.863814 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
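The repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" runs above are a readiness poll for the apiserver process inside the guest. A minimal sketch of an equivalent loop (the iteration cap and sleep interval are assumptions, not values from the log):

    # Poll for a kube-apiserver process whose full command line matches the minikube profile.
    for _ in $(seq 1 60); do
      if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
        echo "kube-apiserver is up"
        break
      fi
      sleep 0.5
    done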
	I0318 13:50:22.033540 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:22.034056 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:22.034084 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:22.034001 1158806 retry.go:31] will retry after 5.297432528s: waiting for machine to come up
	I0318 13:50:20.708189 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:22.708573 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:24.708632 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:23.281625 1157887 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:25.780754 1157887 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:23.364019 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:23.864134 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:24.363510 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:24.863263 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:25.364027 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:25.863203 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:26.364219 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:26.863262 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:27.363889 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:27.864113 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:27.335390 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.335875 1157263 main.go:141] libmachine: (embed-certs-173036) Found IP for machine: 192.168.50.191
	I0318 13:50:27.335908 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has current primary IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.335918 1157263 main.go:141] libmachine: (embed-certs-173036) Reserving static IP address...
	I0318 13:50:27.336311 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "embed-certs-173036", mac: "52:54:00:e1:4f:b1", ip: "192.168.50.191"} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:27.336360 1157263 main.go:141] libmachine: (embed-certs-173036) Reserved static IP address: 192.168.50.191
	I0318 13:50:27.336380 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | skip adding static IP to network mk-embed-certs-173036 - found existing host DHCP lease matching {name: "embed-certs-173036", mac: "52:54:00:e1:4f:b1", ip: "192.168.50.191"}
	I0318 13:50:27.336394 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | Getting to WaitForSSH function...
	I0318 13:50:27.336406 1157263 main.go:141] libmachine: (embed-certs-173036) Waiting for SSH to be available...
	I0318 13:50:27.338627 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.338948 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:27.338983 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.339087 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | Using SSH client type: external
	I0318 13:50:27.339177 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | Using SSH private key: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa (-rw-------)
	I0318 13:50:27.339212 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.191 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 13:50:27.339227 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | About to run SSH command:
	I0318 13:50:27.339244 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | exit 0
	I0318 13:50:27.468468 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | SSH cmd err, output: <nil>: 
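The WaitForSSH step above shells out to the system ssh binary with the options shown in the log; run by hand, the equivalent single command (paths, address and options copied from the log) would look like:

    # Confirm the guest accepts SSH by running "exit 0" once.
    ssh -F /dev/null \
        -o ConnectionAttempts=3 -o ConnectTimeout=10 \
        -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
        -o PasswordAuthentication=no -o ServerAliveInterval=60 \
        -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -o IdentitiesOnly=yes \
        -i /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa \
        -p 22 docker@192.168.50.191 'exit 0'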
	I0318 13:50:27.468936 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetConfigRaw
	I0318 13:50:27.469699 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetIP
	I0318 13:50:27.472098 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.472422 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:27.472446 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.472714 1157263 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036/config.json ...
	I0318 13:50:27.472955 1157263 machine.go:94] provisionDockerMachine start ...
	I0318 13:50:27.472982 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:50:27.473196 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:27.475516 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.475808 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:27.475831 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.476041 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:27.476252 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:27.476414 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:27.476537 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:27.476719 1157263 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:27.476899 1157263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.191 22 <nil> <nil>}
	I0318 13:50:27.476909 1157263 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 13:50:27.589501 1157263 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 13:50:27.589532 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetMachineName
	I0318 13:50:27.589828 1157263 buildroot.go:166] provisioning hostname "embed-certs-173036"
	I0318 13:50:27.589862 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetMachineName
	I0318 13:50:27.590068 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:27.592650 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.593005 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:27.593035 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.593186 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:27.593375 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:27.593546 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:27.593713 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:27.593883 1157263 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:27.594058 1157263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.191 22 <nil> <nil>}
	I0318 13:50:27.594073 1157263 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-173036 && echo "embed-certs-173036" | sudo tee /etc/hostname
	I0318 13:50:27.730406 1157263 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-173036
	
	I0318 13:50:27.730437 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:27.733420 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.733857 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:27.733890 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.734058 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:27.734271 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:27.734475 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:27.734609 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:27.734764 1157263 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:27.734943 1157263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.191 22 <nil> <nil>}
	I0318 13:50:27.734960 1157263 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-173036' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-173036/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-173036' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 13:50:27.860625 1157263 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:50:27.860679 1157263 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18429-1106816/.minikube CaCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18429-1106816/.minikube}
	I0318 13:50:27.860777 1157263 buildroot.go:174] setting up certificates
	I0318 13:50:27.860790 1157263 provision.go:84] configureAuth start
	I0318 13:50:27.860810 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetMachineName
	I0318 13:50:27.861112 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetIP
	I0318 13:50:27.864215 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.864667 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:27.864703 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.864956 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:27.867381 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.867690 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:27.867730 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.867893 1157263 provision.go:143] copyHostCerts
	I0318 13:50:27.867963 1157263 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem, removing ...
	I0318 13:50:27.867977 1157263 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 13:50:27.868048 1157263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem (1078 bytes)
	I0318 13:50:27.868183 1157263 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem, removing ...
	I0318 13:50:27.868198 1157263 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 13:50:27.868231 1157263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem (1123 bytes)
	I0318 13:50:27.868307 1157263 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem, removing ...
	I0318 13:50:27.868318 1157263 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 13:50:27.868372 1157263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem (1679 bytes)
	I0318 13:50:27.868451 1157263 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem org=jenkins.embed-certs-173036 san=[127.0.0.1 192.168.50.191 embed-certs-173036 localhost minikube]
	I0318 13:50:28.001671 1157263 provision.go:177] copyRemoteCerts
	I0318 13:50:28.001742 1157263 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 13:50:28.001773 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:28.004389 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.004746 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:28.004777 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.005021 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:28.005214 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:28.005393 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:28.005558 1157263 sshutil.go:53] new ssh client: &{IP:192.168.50.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa Username:docker}
	I0318 13:50:28.095871 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0318 13:50:28.127356 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 13:50:28.157301 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 13:50:28.186185 1157263 provision.go:87] duration metric: took 325.374328ms to configureAuth
	I0318 13:50:28.186217 1157263 buildroot.go:189] setting minikube options for container-runtime
	I0318 13:50:28.186424 1157263 config.go:182] Loaded profile config "embed-certs-173036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:50:28.186529 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:28.189135 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.189532 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:28.189564 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.189719 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:28.189933 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:28.190127 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:28.190335 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:28.190492 1157263 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:28.190654 1157263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.191 22 <nil> <nil>}
	I0318 13:50:28.190668 1157263 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 13:50:28.473836 1157263 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 13:50:28.473875 1157263 machine.go:97] duration metric: took 1.000902962s to provisionDockerMachine
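The container-runtime step above writes the minikube-specific CRI-O options file and restarts the runtime. Reconstructed as a standalone shell step (contents taken from the log):

    # Record the insecure-registry option for CRI-O, then restart the runtime.
    sudo mkdir -p /etc/sysconfig
    printf "\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" \
      | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio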
	I0318 13:50:28.473887 1157263 start.go:293] postStartSetup for "embed-certs-173036" (driver="kvm2")
	I0318 13:50:28.473898 1157263 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 13:50:28.473914 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:50:28.474270 1157263 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 13:50:28.474307 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:28.477165 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.477571 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:28.477619 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.477756 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:28.477966 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:28.478135 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:28.478296 1157263 sshutil.go:53] new ssh client: &{IP:192.168.50.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa Username:docker}
	I0318 13:50:28.568988 1157263 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 13:50:28.573759 1157263 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 13:50:28.573782 1157263 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/addons for local assets ...
	I0318 13:50:28.573839 1157263 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/files for local assets ...
	I0318 13:50:28.573909 1157263 filesync.go:149] local asset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> 11141362.pem in /etc/ssl/certs
	I0318 13:50:28.573989 1157263 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 13:50:28.584049 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:50:28.610999 1157263 start.go:296] duration metric: took 137.09711ms for postStartSetup
	I0318 13:50:28.611043 1157263 fix.go:56] duration metric: took 24.300980779s for fixHost
	I0318 13:50:28.611066 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:28.614123 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.614582 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:28.614628 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.614795 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:28.614999 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:28.615124 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:28.615255 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:28.615427 1157263 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:28.615617 1157263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.191 22 <nil> <nil>}
	I0318 13:50:28.615631 1157263 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 13:50:28.729856 1157263 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710769828.678644307
	
	I0318 13:50:28.729894 1157263 fix.go:216] guest clock: 1710769828.678644307
	I0318 13:50:28.729913 1157263 fix.go:229] Guest: 2024-03-18 13:50:28.678644307 +0000 UTC Remote: 2024-03-18 13:50:28.611048079 +0000 UTC m=+364.845703282 (delta=67.596228ms)
	I0318 13:50:28.729932 1157263 fix.go:200] guest clock delta is within tolerance: 67.596228ms
	I0318 13:50:28.729937 1157263 start.go:83] releasing machines lock for "embed-certs-173036", held for 24.419922158s
	I0318 13:50:28.729958 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:50:28.730241 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetIP
	I0318 13:50:28.732831 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.733196 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:28.733249 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.733406 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:50:28.733875 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:50:28.734066 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:50:28.734172 1157263 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 13:50:28.734248 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:28.734330 1157263 ssh_runner.go:195] Run: cat /version.json
	I0318 13:50:28.734376 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:28.737014 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.737200 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.737444 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:28.737470 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.737611 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:28.737694 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:28.737721 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.737918 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:28.737926 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:28.738117 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:28.738195 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:28.738292 1157263 sshutil.go:53] new ssh client: &{IP:192.168.50.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa Username:docker}
	I0318 13:50:28.738357 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:28.738466 1157263 sshutil.go:53] new ssh client: &{IP:192.168.50.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa Username:docker}
	I0318 13:50:26.708824 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:29.209974 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:28.818695 1157263 ssh_runner.go:195] Run: systemctl --version
	I0318 13:50:28.844173 1157263 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 13:50:28.995017 1157263 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 13:50:29.002150 1157263 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 13:50:29.002251 1157263 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 13:50:29.021165 1157263 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 13:50:29.021200 1157263 start.go:494] detecting cgroup driver to use...
	I0318 13:50:29.021286 1157263 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 13:50:29.039060 1157263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:50:29.053451 1157263 docker.go:217] disabling cri-docker service (if available) ...
	I0318 13:50:29.053521 1157263 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 13:50:29.069721 1157263 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 13:50:29.085285 1157263 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 13:50:29.201273 1157263 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 13:50:29.356314 1157263 docker.go:233] disabling docker service ...
	I0318 13:50:29.356406 1157263 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 13:50:29.374159 1157263 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 13:50:29.390280 1157263 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 13:50:29.542126 1157263 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 13:50:29.692068 1157263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 13:50:29.707760 1157263 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:50:29.735684 1157263 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 13:50:29.735753 1157263 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:50:29.751291 1157263 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 13:50:29.751365 1157263 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:50:29.763159 1157263 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:50:29.774837 1157263 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:50:29.787142 1157263 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 13:50:29.799773 1157263 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 13:50:29.810620 1157263 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 13:50:29.810691 1157263 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 13:50:29.826816 1157263 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 13:50:29.842059 1157263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:50:29.985531 1157263 ssh_runner.go:195] Run: sudo systemctl restart crio
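The sequence above configures CRI-O for this profile: crictl is pointed at the CRI-O socket, the pause image and cgroup manager are set, bridge netfilter and IP forwarding are enabled, and the service is restarted. A consolidated sketch of the same steps as one script (paths and values taken from the log):

    # Point crictl at the CRI-O socket.
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
    # Use the expected pause image and the cgroupfs cgroup driver.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    # Make bridged traffic visible to iptables and enable IP forwarding.
    sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    # Apply the changes.
    sudo systemctl daemon-reload
    sudo systemctl restart crio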
	I0318 13:50:30.147122 1157263 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 13:50:30.147191 1157263 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 13:50:30.152406 1157263 start.go:562] Will wait 60s for crictl version
	I0318 13:50:30.152468 1157263 ssh_runner.go:195] Run: which crictl
	I0318 13:50:30.157019 1157263 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 13:50:30.199810 1157263 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 13:50:30.199889 1157263 ssh_runner.go:195] Run: crio --version
	I0318 13:50:30.232028 1157263 ssh_runner.go:195] Run: crio --version
	I0318 13:50:30.270484 1157263 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 13:50:27.781584 1157887 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:29.795969 1157887 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:31.282868 1157887 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:31.282899 1157887 pod_ready.go:81] duration metric: took 12.009270978s for pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:31.282910 1157887 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-v59ks" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:31.290886 1157887 pod_ready.go:92] pod "kube-proxy-v59ks" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:31.290917 1157887 pod_ready.go:81] duration metric: took 7.99936ms for pod "kube-proxy-v59ks" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:31.290931 1157887 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:31.300197 1157887 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:31.300235 1157887 pod_ready.go:81] duration metric: took 9.294232ms for pod "kube-scheduler-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:31.300254 1157887 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:28.364069 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:28.863405 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:29.363996 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:29.863574 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:30.363749 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:30.863564 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:31.363250 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:31.863320 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:32.363894 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:32.864166 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:30.271939 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetIP
	I0318 13:50:30.275084 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:30.275682 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:30.275728 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:30.276045 1157263 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0318 13:50:30.282421 1157263 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:50:30.299013 1157263 kubeadm.go:877] updating cluster {Name:embed-certs-173036 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.28.4 ClusterName:embed-certs-173036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.191 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 13:50:30.299280 1157263 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 13:50:30.299364 1157263 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:50:30.349617 1157263 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0318 13:50:30.349720 1157263 ssh_runner.go:195] Run: which lz4
	I0318 13:50:30.354659 1157263 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 13:50:30.359861 1157263 ssh_runner.go:362] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 13:50:30.359903 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0318 13:50:32.362707 1157263 crio.go:444] duration metric: took 2.008087158s to copy over tarball
	I0318 13:50:32.362796 1157263 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 13:50:31.210766 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:33.709661 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:33.308081 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:35.309291 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:33.363425 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:33.864021 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:34.363963 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:34.864011 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:35.364122 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:35.863559 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:36.364154 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:36.863814 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:37.364232 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:37.863934 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:35.265803 1157263 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.902966349s)
	I0318 13:50:35.265827 1157263 crio.go:451] duration metric: took 2.903086385s to extract the tarball
	I0318 13:50:35.265835 1157263 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 13:50:35.313875 1157263 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:50:35.378361 1157263 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 13:50:35.378392 1157263 cache_images.go:84] Images are preloaded, skipping loading
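The preload handling above checks whether the expected images are already in the CRI-O store, copies the preload tarball into the guest, extracts it under /var, and removes it. A minimal sketch of that transfer done by hand over SSH (local cache path and guest address taken from the log; /tmp is used here as the staging path because the real run copies through minikube's internal scp helper, which writes /preloaded.tar.lz4 directly):

    # Copy the preloaded image tarball into the guest and unpack it under /var.
    PRELOAD=/home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
    scp "$PRELOAD" docker@192.168.50.191:/tmp/preloaded.tar.lz4
    ssh docker@192.168.50.191 \
      'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /tmp/preloaded.tar.lz4 && sudo rm -f /tmp/preloaded.tar.lz4'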
	I0318 13:50:35.378408 1157263 kubeadm.go:928] updating node { 192.168.50.191 8443 v1.28.4 crio true true} ...
	I0318 13:50:35.378551 1157263 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-173036 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.191
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-173036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 13:50:35.378648 1157263 ssh_runner.go:195] Run: crio config
	I0318 13:50:35.443472 1157263 cni.go:84] Creating CNI manager for ""
	I0318 13:50:35.443501 1157263 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:50:35.443520 1157263 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 13:50:35.443551 1157263 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.191 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-173036 NodeName:embed-certs-173036 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.191"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.191 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 13:50:35.443730 1157263 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.191
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-173036"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.191
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.191"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 13:50:35.443809 1157263 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 13:50:35.455284 1157263 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 13:50:35.455352 1157263 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 13:50:35.465886 1157263 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0318 13:50:35.487345 1157263 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 13:50:35.507361 1157263 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0318 13:50:35.528055 1157263 ssh_runner.go:195] Run: grep 192.168.50.191	control-plane.minikube.internal$ /etc/hosts
	I0318 13:50:35.533287 1157263 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.191	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:50:35.548295 1157263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:50:35.684165 1157263 ssh_runner.go:195] Run: sudo systemctl start kubelet
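With the kubelet drop-in, the systemd unit and the generated kubeadm.yaml in place, kubelet is started. Elsewhere in this section the same kind of generated config is consumed through kubeadm's phase commands; for example, the default-k8s-diff-port restart earlier in the log runs:

    # Run a kubeadm phase against the config minikube wrote to the guest.
    sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" \
      kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml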
	I0318 13:50:35.703884 1157263 certs.go:68] Setting up /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036 for IP: 192.168.50.191
	I0318 13:50:35.703910 1157263 certs.go:194] generating shared ca certs ...
	I0318 13:50:35.703927 1157263 certs.go:226] acquiring lock for ca certs: {Name:mka24dbce78b8549c3bcfe7842863cce2dcda207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:50:35.704117 1157263 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key
	I0318 13:50:35.704186 1157263 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key
	I0318 13:50:35.704200 1157263 certs.go:256] generating profile certs ...
	I0318 13:50:35.704292 1157263 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036/client.key
	I0318 13:50:35.704406 1157263 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036/apiserver.key.527b6b30
	I0318 13:50:35.704472 1157263 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036/proxy-client.key
	I0318 13:50:35.704637 1157263 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem (1338 bytes)
	W0318 13:50:35.704680 1157263 certs.go:480] ignoring /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136_empty.pem, impossibly tiny 0 bytes
	I0318 13:50:35.704694 1157263 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 13:50:35.704729 1157263 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem (1078 bytes)
	I0318 13:50:35.704763 1157263 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem (1123 bytes)
	I0318 13:50:35.704796 1157263 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem (1679 bytes)
	I0318 13:50:35.704857 1157263 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:50:35.705836 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 13:50:35.768912 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 13:50:35.830564 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 13:50:35.877813 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 13:50:35.916756 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0318 13:50:35.948397 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 13:50:35.980450 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 13:50:36.009626 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 13:50:36.040155 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 13:50:36.068885 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem --> /usr/share/ca-certificates/1114136.pem (1338 bytes)
	I0318 13:50:36.098638 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /usr/share/ca-certificates/11141362.pem (1708 bytes)
	I0318 13:50:36.128423 1157263 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 13:50:36.149584 1157263 ssh_runner.go:195] Run: openssl version
	I0318 13:50:36.156347 1157263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 13:50:36.169729 1157263 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:50:36.175367 1157263 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:50:36.175438 1157263 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:50:36.181995 1157263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 13:50:36.193987 1157263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1114136.pem && ln -fs /usr/share/ca-certificates/1114136.pem /etc/ssl/certs/1114136.pem"
	I0318 13:50:36.206444 1157263 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1114136.pem
	I0318 13:50:36.212355 1157263 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:26 /usr/share/ca-certificates/1114136.pem
	I0318 13:50:36.212442 1157263 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1114136.pem
	I0318 13:50:36.219042 1157263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1114136.pem /etc/ssl/certs/51391683.0"
	I0318 13:50:36.231882 1157263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11141362.pem && ln -fs /usr/share/ca-certificates/11141362.pem /etc/ssl/certs/11141362.pem"
	I0318 13:50:36.244590 1157263 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11141362.pem
	I0318 13:50:36.250443 1157263 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:26 /usr/share/ca-certificates/11141362.pem
	I0318 13:50:36.250511 1157263 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11141362.pem
	I0318 13:50:36.257713 1157263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11141362.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 13:50:36.271026 1157263 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:50:36.276902 1157263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 13:50:36.285465 1157263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 13:50:36.294274 1157263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 13:50:36.302415 1157263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 13:50:36.310867 1157263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 13:50:36.318931 1157263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 13:50:36.327627 1157263 kubeadm.go:391] StartCluster: {Name:embed-certs-173036 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-173036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.191 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:50:36.327781 1157263 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 13:50:36.327843 1157263 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:50:36.376644 1157263 cri.go:89] found id: ""
	I0318 13:50:36.376741 1157263 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 13:50:36.389506 1157263 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 13:50:36.389528 1157263 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 13:50:36.389533 1157263 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 13:50:36.389640 1157263 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 13:50:36.401386 1157263 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:50:36.402631 1157263 kubeconfig.go:125] found "embed-certs-173036" server: "https://192.168.50.191:8443"
	I0318 13:50:36.404833 1157263 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 13:50:36.416975 1157263 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.191
	I0318 13:50:36.417026 1157263 kubeadm.go:1154] stopping kube-system containers ...
	I0318 13:50:36.417041 1157263 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 13:50:36.417106 1157263 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:50:36.458072 1157263 cri.go:89] found id: ""
	I0318 13:50:36.458162 1157263 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 13:50:36.476557 1157263 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:50:36.487765 1157263 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:50:36.487791 1157263 kubeadm.go:156] found existing configuration files:
	
	I0318 13:50:36.487857 1157263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:50:36.498903 1157263 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:50:36.498982 1157263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:50:36.510205 1157263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:50:36.520423 1157263 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:50:36.520476 1157263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:50:36.531864 1157263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:50:36.542058 1157263 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:50:36.542131 1157263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:50:36.552807 1157263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:50:36.562840 1157263 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:50:36.562915 1157263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:50:36.573581 1157263 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:50:36.583760 1157263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:36.719884 1157263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:37.681007 1157263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:37.914386 1157263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:37.993967 1157263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:38.101144 1157263 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:50:38.101261 1157263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:38.602138 1157263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:35.711725 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:38.207993 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:37.807508 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:39.809153 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:38.363994 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:38.863278 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:39.363665 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:39.863948 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:40.364081 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:40.864124 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:41.363964 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:41.863593 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:42.363750 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:42.864002 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:39.102040 1157263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:39.212769 1157263 api_server.go:72] duration metric: took 1.111626123s to wait for apiserver process to appear ...
	I0318 13:50:39.212807 1157263 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:50:39.212840 1157263 api_server.go:253] Checking apiserver healthz at https://192.168.50.191:8443/healthz ...
	I0318 13:50:39.213446 1157263 api_server.go:269] stopped: https://192.168.50.191:8443/healthz: Get "https://192.168.50.191:8443/healthz": dial tcp 192.168.50.191:8443: connect: connection refused
	I0318 13:50:39.713482 1157263 api_server.go:253] Checking apiserver healthz at https://192.168.50.191:8443/healthz ...
	I0318 13:50:42.646306 1157263 api_server.go:279] https://192.168.50.191:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 13:50:42.646352 1157263 api_server.go:103] status: https://192.168.50.191:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 13:50:42.646370 1157263 api_server.go:253] Checking apiserver healthz at https://192.168.50.191:8443/healthz ...
	I0318 13:50:42.691920 1157263 api_server.go:279] https://192.168.50.191:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 13:50:42.691953 1157263 api_server.go:103] status: https://192.168.50.191:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 13:50:42.713082 1157263 api_server.go:253] Checking apiserver healthz at https://192.168.50.191:8443/healthz ...
	I0318 13:50:42.770065 1157263 api_server.go:279] https://192.168.50.191:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:50:42.770101 1157263 api_server.go:103] status: https://192.168.50.191:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:50:43.213524 1157263 api_server.go:253] Checking apiserver healthz at https://192.168.50.191:8443/healthz ...
	I0318 13:50:43.224669 1157263 api_server.go:279] https://192.168.50.191:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:50:43.224710 1157263 api_server.go:103] status: https://192.168.50.191:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:50:43.712987 1157263 api_server.go:253] Checking apiserver healthz at https://192.168.50.191:8443/healthz ...
	I0318 13:50:43.718490 1157263 api_server.go:279] https://192.168.50.191:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:50:43.718533 1157263 api_server.go:103] status: https://192.168.50.191:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:50:44.213026 1157263 api_server.go:253] Checking apiserver healthz at https://192.168.50.191:8443/healthz ...
	I0318 13:50:44.217876 1157263 api_server.go:279] https://192.168.50.191:8443/healthz returned 200:
	ok
	I0318 13:50:44.225562 1157263 api_server.go:141] control plane version: v1.28.4
	I0318 13:50:44.225588 1157263 api_server.go:131] duration metric: took 5.012774227s to wait for apiserver health ...
	I0318 13:50:44.225610 1157263 cni.go:84] Creating CNI manager for ""
	I0318 13:50:44.225618 1157263 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:50:44.227565 1157263 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 13:50:40.210029 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:42.210435 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:44.710674 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:41.811414 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:43.818645 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:46.308757 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:43.364189 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:43.863868 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:44.363454 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:44.863940 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:45.363913 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:45.863288 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:46.363884 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:46.863361 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:47.363383 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:47.864064 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:44.229055 1157263 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 13:50:44.260389 1157263 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 13:50:44.310001 1157263 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 13:50:44.327281 1157263 system_pods.go:59] 8 kube-system pods found
	I0318 13:50:44.327330 1157263 system_pods.go:61] "coredns-5dd5756b68-zsfvm" [1404c3fe-6538-4aaf-80f5-599275240731] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 13:50:44.327342 1157263 system_pods.go:61] "etcd-embed-certs-173036" [254a577c-bd3b-4645-9c92-1479b0c6d0c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 13:50:44.327354 1157263 system_pods.go:61] "kube-apiserver-embed-certs-173036" [5a738280-05ba-413e-a288-4c4d07ddbd7d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 13:50:44.327362 1157263 system_pods.go:61] "kube-controller-manager-embed-certs-173036" [f48cfb7f-1efe-4941-b328-2358c7a5cced] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 13:50:44.327369 1157263 system_pods.go:61] "kube-proxy-xqf68" [969de4e5-fc60-4d46-b336-49f22a9b6c38] Running
	I0318 13:50:44.327376 1157263 system_pods.go:61] "kube-scheduler-embed-certs-173036" [e0579c16-de3e-4915-9ed2-f69b53f6f884] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 13:50:44.327385 1157263 system_pods.go:61] "metrics-server-57f55c9bc5-5cv2z" [85649bfb-f91f-4bfe-9356-d540ac3d6a68] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:50:44.327392 1157263 system_pods.go:61] "storage-provisioner" [0c1ec131-0f6c-4e01-aaec-5011f1a4fe75] Running
	I0318 13:50:44.327410 1157263 system_pods.go:74] duration metric: took 17.376754ms to wait for pod list to return data ...
	I0318 13:50:44.327423 1157263 node_conditions.go:102] verifying NodePressure condition ...
	I0318 13:50:44.332965 1157263 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:50:44.332997 1157263 node_conditions.go:123] node cpu capacity is 2
	I0318 13:50:44.333008 1157263 node_conditions.go:105] duration metric: took 5.580934ms to run NodePressure ...
	I0318 13:50:44.333027 1157263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:44.573923 1157263 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 13:50:44.578504 1157263 kubeadm.go:733] kubelet initialised
	I0318 13:50:44.578526 1157263 kubeadm.go:734] duration metric: took 4.577181ms waiting for restarted kubelet to initialise ...
	I0318 13:50:44.578534 1157263 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:50:44.584361 1157263 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-zsfvm" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:44.591714 1157263 pod_ready.go:97] node "embed-certs-173036" hosting pod "coredns-5dd5756b68-zsfvm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-173036" has status "Ready":"False"
	I0318 13:50:44.591739 1157263 pod_ready.go:81] duration metric: took 7.35191ms for pod "coredns-5dd5756b68-zsfvm" in "kube-system" namespace to be "Ready" ...
	E0318 13:50:44.591746 1157263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-173036" hosting pod "coredns-5dd5756b68-zsfvm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-173036" has status "Ready":"False"
	I0318 13:50:44.591753 1157263 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:44.597618 1157263 pod_ready.go:97] node "embed-certs-173036" hosting pod "etcd-embed-certs-173036" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-173036" has status "Ready":"False"
	I0318 13:50:44.597641 1157263 pod_ready.go:81] duration metric: took 5.880276ms for pod "etcd-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	E0318 13:50:44.597649 1157263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-173036" hosting pod "etcd-embed-certs-173036" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-173036" has status "Ready":"False"
	I0318 13:50:44.597655 1157263 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:44.604124 1157263 pod_ready.go:97] node "embed-certs-173036" hosting pod "kube-apiserver-embed-certs-173036" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-173036" has status "Ready":"False"
	I0318 13:50:44.604148 1157263 pod_ready.go:81] duration metric: took 6.484251ms for pod "kube-apiserver-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	E0318 13:50:44.604157 1157263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-173036" hosting pod "kube-apiserver-embed-certs-173036" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-173036" has status "Ready":"False"
	I0318 13:50:44.604164 1157263 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:46.611326 1157263 pod_ready.go:102] pod "kube-controller-manager-embed-certs-173036" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:47.209538 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:49.708718 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:48.309157 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:50.808340 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:48.363218 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:48.864086 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:49.363457 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:49.863292 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:50.363308 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:50.863428 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:51.363583 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:51.863562 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:52.363995 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:52.863463 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:49.111834 1157263 pod_ready.go:102] pod "kube-controller-manager-embed-certs-173036" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:50.114329 1157263 pod_ready.go:92] pod "kube-controller-manager-embed-certs-173036" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:50.114356 1157263 pod_ready.go:81] duration metric: took 5.510175425s for pod "kube-controller-manager-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:50.114369 1157263 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xqf68" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:50.133169 1157263 pod_ready.go:92] pod "kube-proxy-xqf68" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:50.133196 1157263 pod_ready.go:81] duration metric: took 18.819059ms for pod "kube-proxy-xqf68" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:50.133208 1157263 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:52.144639 1157263 pod_ready.go:102] pod "kube-scheduler-embed-certs-173036" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:51.709823 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:54.207738 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:53.311033 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:55.311439 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:53.363919 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:53.863936 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:54.363671 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:54.863567 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:50:54.863709 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:50:54.911905 1157708 cri.go:89] found id: ""
	I0318 13:50:54.911942 1157708 logs.go:276] 0 containers: []
	W0318 13:50:54.911954 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:50:54.911962 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:50:54.912031 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:50:54.962141 1157708 cri.go:89] found id: ""
	I0318 13:50:54.962170 1157708 logs.go:276] 0 containers: []
	W0318 13:50:54.962182 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:50:54.962188 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:50:54.962269 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:50:55.001597 1157708 cri.go:89] found id: ""
	I0318 13:50:55.001639 1157708 logs.go:276] 0 containers: []
	W0318 13:50:55.001652 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:50:55.001660 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:50:55.001725 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:50:55.042660 1157708 cri.go:89] found id: ""
	I0318 13:50:55.042695 1157708 logs.go:276] 0 containers: []
	W0318 13:50:55.042708 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:50:55.042716 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:50:55.042775 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:50:55.082095 1157708 cri.go:89] found id: ""
	I0318 13:50:55.082128 1157708 logs.go:276] 0 containers: []
	W0318 13:50:55.082139 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:50:55.082146 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:50:55.082211 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:50:55.120938 1157708 cri.go:89] found id: ""
	I0318 13:50:55.120969 1157708 logs.go:276] 0 containers: []
	W0318 13:50:55.121000 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:50:55.121008 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:50:55.121081 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:50:55.159247 1157708 cri.go:89] found id: ""
	I0318 13:50:55.159280 1157708 logs.go:276] 0 containers: []
	W0318 13:50:55.159292 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:50:55.159300 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:50:55.159366 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:50:55.200130 1157708 cri.go:89] found id: ""
	I0318 13:50:55.200161 1157708 logs.go:276] 0 containers: []
	W0318 13:50:55.200170 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:50:55.200180 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:50:55.200193 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:50:55.254113 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:50:55.254154 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:50:55.268984 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:50:55.269027 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:50:55.402079 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:50:55.402106 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:50:55.402123 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:50:55.468627 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:50:55.468674 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:50:54.143220 1157263 pod_ready.go:92] pod "kube-scheduler-embed-certs-173036" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:54.143247 1157263 pod_ready.go:81] duration metric: took 4.010031997s for pod "kube-scheduler-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:54.143258 1157263 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:56.151615 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:58.650293 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:56.208339 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:58.209144 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:57.810894 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:00.308972 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:58.016860 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:58.031684 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:50:58.031747 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:50:58.073389 1157708 cri.go:89] found id: ""
	I0318 13:50:58.073415 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.073427 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:50:58.073434 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:50:58.073497 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:50:58.114439 1157708 cri.go:89] found id: ""
	I0318 13:50:58.114471 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.114483 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:50:58.114490 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:50:58.114553 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:50:58.165440 1157708 cri.go:89] found id: ""
	I0318 13:50:58.165466 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.165476 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:50:58.165484 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:50:58.165569 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:50:58.207083 1157708 cri.go:89] found id: ""
	I0318 13:50:58.207117 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.207129 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:50:58.207137 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:50:58.207227 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:50:58.252945 1157708 cri.go:89] found id: ""
	I0318 13:50:58.252973 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.252985 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:50:58.252993 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:50:58.253055 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:50:58.292437 1157708 cri.go:89] found id: ""
	I0318 13:50:58.292464 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.292474 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:50:58.292480 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:50:58.292530 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:50:58.335359 1157708 cri.go:89] found id: ""
	I0318 13:50:58.335403 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.335415 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:50:58.335423 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:50:58.335511 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:50:58.381434 1157708 cri.go:89] found id: ""
	I0318 13:50:58.381473 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.381484 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:50:58.381494 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:50:58.381511 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:50:58.432270 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:50:58.432319 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:50:58.447658 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:50:58.447686 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:50:58.523163 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:50:58.523186 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:50:58.523207 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:50:58.599544 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:50:58.599586 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:01.141653 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:01.156996 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:01.157070 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:01.192720 1157708 cri.go:89] found id: ""
	I0318 13:51:01.192762 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.192775 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:01.192785 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:01.192866 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:01.232678 1157708 cri.go:89] found id: ""
	I0318 13:51:01.232705 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.232716 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:01.232723 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:01.232795 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:01.270637 1157708 cri.go:89] found id: ""
	I0318 13:51:01.270666 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.270676 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:01.270684 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:01.270746 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:01.308891 1157708 cri.go:89] found id: ""
	I0318 13:51:01.308921 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.308931 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:01.308939 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:01.309003 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:01.349301 1157708 cri.go:89] found id: ""
	I0318 13:51:01.349334 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.349346 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:01.349354 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:01.349420 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:01.394010 1157708 cri.go:89] found id: ""
	I0318 13:51:01.394039 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.394047 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:01.394053 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:01.394103 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:01.432778 1157708 cri.go:89] found id: ""
	I0318 13:51:01.432804 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.432815 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:01.432823 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:01.432886 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:01.471974 1157708 cri.go:89] found id: ""
	I0318 13:51:01.472002 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.472011 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:01.472022 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:01.472040 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:01.524855 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:01.524893 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:01.540939 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:01.540967 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:01.618318 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:01.618350 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:01.618367 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:01.695717 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:01.695755 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
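	(The loop above, and each repetition of it below, is minikube's log gatherer running while the apiserver on localhost:8443 is unreachable: every crictl query finds no control-plane containers and every kubectl describe nodes call is refused. A minimal shell sketch for reproducing the same diagnostics by hand over minikube ssh; PROFILE is a placeholder for the test's profile name, which does not appear in this excerpt:

	    PROFILE=my-profile   # placeholder, substitute the actual minikube profile name
	    minikube -p "$PROFILE" ssh -- sudo crictl ps -a --quiet --name=kube-apiserver
	    minikube -p "$PROFILE" ssh -- sudo journalctl -u kubelet -n 400
	    minikube -p "$PROFILE" ssh -- "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	    minikube -p "$PROFILE" ssh -- sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	    minikube -p "$PROFILE" ssh -- sudo journalctl -u crio -n 400
	)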
	I0318 13:51:00.650906 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:02.651512 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:00.211620 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:02.708336 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:02.312320 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:04.808301 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:04.241781 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:04.256276 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:04.256373 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:04.297129 1157708 cri.go:89] found id: ""
	I0318 13:51:04.297158 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.297170 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:04.297179 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:04.297247 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:04.341743 1157708 cri.go:89] found id: ""
	I0318 13:51:04.341774 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.341786 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:04.341793 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:04.341858 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:04.384400 1157708 cri.go:89] found id: ""
	I0318 13:51:04.384434 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.384445 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:04.384453 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:04.384510 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:04.425459 1157708 cri.go:89] found id: ""
	I0318 13:51:04.425487 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.425500 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:04.425510 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:04.425563 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:04.463091 1157708 cri.go:89] found id: ""
	I0318 13:51:04.463125 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.463137 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:04.463145 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:04.463210 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:04.503023 1157708 cri.go:89] found id: ""
	I0318 13:51:04.503057 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.503069 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:04.503077 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:04.503141 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:04.542083 1157708 cri.go:89] found id: ""
	I0318 13:51:04.542116 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.542127 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:04.542136 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:04.542207 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:04.583097 1157708 cri.go:89] found id: ""
	I0318 13:51:04.583128 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.583137 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:04.583146 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:04.583161 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:04.650476 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:04.650518 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:04.706073 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:04.706111 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:04.723595 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:04.723628 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:04.800278 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:04.800301 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:04.800316 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:07.388144 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:07.403636 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:07.403711 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:07.443337 1157708 cri.go:89] found id: ""
	I0318 13:51:07.443365 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.443379 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:07.443386 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:07.443442 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:07.482417 1157708 cri.go:89] found id: ""
	I0318 13:51:07.482453 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.482462 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:07.482469 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:07.482521 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:07.518445 1157708 cri.go:89] found id: ""
	I0318 13:51:07.518474 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.518485 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:07.518493 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:07.518563 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:07.555628 1157708 cri.go:89] found id: ""
	I0318 13:51:07.555661 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.555673 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:07.555681 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:07.555760 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:07.593805 1157708 cri.go:89] found id: ""
	I0318 13:51:07.593842 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.593856 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:07.593873 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:07.593936 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:07.638206 1157708 cri.go:89] found id: ""
	I0318 13:51:07.638234 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.638242 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:07.638249 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:07.638313 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:07.679526 1157708 cri.go:89] found id: ""
	I0318 13:51:07.679561 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.679573 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:07.679581 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:07.679635 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:07.724468 1157708 cri.go:89] found id: ""
	I0318 13:51:07.724494 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.724504 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:07.724516 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:07.724533 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:07.766491 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:07.766522 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:07.823782 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:07.823833 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:07.839316 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:07.839342 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:07.924790 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:07.924821 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:07.924841 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:05.151629 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:07.651485 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:05.210455 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:07.709381 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:07.310000 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:09.808337 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:10.513618 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:10.528711 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:10.528790 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:10.571217 1157708 cri.go:89] found id: ""
	I0318 13:51:10.571254 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.571267 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:10.571275 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:10.571335 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:10.608096 1157708 cri.go:89] found id: ""
	I0318 13:51:10.608129 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.608140 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:10.608149 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:10.608217 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:10.649245 1157708 cri.go:89] found id: ""
	I0318 13:51:10.649274 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.649283 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:10.649290 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:10.649365 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:10.693462 1157708 cri.go:89] found id: ""
	I0318 13:51:10.693495 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.693506 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:10.693515 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:10.693589 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:10.740434 1157708 cri.go:89] found id: ""
	I0318 13:51:10.740464 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.740474 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:10.740480 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:10.740543 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:10.781062 1157708 cri.go:89] found id: ""
	I0318 13:51:10.781099 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.781108 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:10.781114 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:10.781167 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:10.828480 1157708 cri.go:89] found id: ""
	I0318 13:51:10.828513 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.828524 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:10.828532 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:10.828605 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:10.868508 1157708 cri.go:89] found id: ""
	I0318 13:51:10.868535 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.868543 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:10.868553 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:10.868565 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:10.923925 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:10.923961 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:10.939254 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:10.939283 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:11.031307 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:11.031334 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:11.031351 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:11.121563 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:11.121618 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:10.151278 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:12.650083 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:10.209877 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:12.709070 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:12.308084 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:14.309651 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:16.312985 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
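	(The interleaved pod_ready lines come from three other test processes (1157263, 1157416, 1157887) polling metrics-server pods that never report Ready. An equivalent manual check, assuming the standard k8s-app=metrics-server label used by the minikube addon and a placeholder kubeconfig context name:

	    CONTEXT=my-context   # placeholder, substitute the test cluster's kubeconfig context
	    kubectl --context "$CONTEXT" -n kube-system get pods -l k8s-app=metrics-server \
	      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'
	)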
	I0318 13:51:13.681147 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:13.696705 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:13.696812 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:13.740904 1157708 cri.go:89] found id: ""
	I0318 13:51:13.740937 1157708 logs.go:276] 0 containers: []
	W0318 13:51:13.740949 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:13.740957 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:13.741038 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:13.779625 1157708 cri.go:89] found id: ""
	I0318 13:51:13.779659 1157708 logs.go:276] 0 containers: []
	W0318 13:51:13.779672 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:13.779681 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:13.779762 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:13.822183 1157708 cri.go:89] found id: ""
	I0318 13:51:13.822218 1157708 logs.go:276] 0 containers: []
	W0318 13:51:13.822231 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:13.822239 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:13.822302 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:13.873686 1157708 cri.go:89] found id: ""
	I0318 13:51:13.873728 1157708 logs.go:276] 0 containers: []
	W0318 13:51:13.873741 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:13.873749 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:13.873821 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:13.919772 1157708 cri.go:89] found id: ""
	I0318 13:51:13.919802 1157708 logs.go:276] 0 containers: []
	W0318 13:51:13.919811 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:13.919817 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:13.919874 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:13.958809 1157708 cri.go:89] found id: ""
	I0318 13:51:13.958837 1157708 logs.go:276] 0 containers: []
	W0318 13:51:13.958846 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:13.958852 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:13.958928 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:14.000537 1157708 cri.go:89] found id: ""
	I0318 13:51:14.000568 1157708 logs.go:276] 0 containers: []
	W0318 13:51:14.000580 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:14.000588 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:14.000638 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:14.041234 1157708 cri.go:89] found id: ""
	I0318 13:51:14.041265 1157708 logs.go:276] 0 containers: []
	W0318 13:51:14.041275 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:14.041285 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:14.041299 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:14.085435 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:14.085462 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:14.144336 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:14.144374 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:14.159972 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:14.160000 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:14.242027 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:14.242048 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:14.242061 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:16.821805 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:16.840202 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:16.840272 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:16.898088 1157708 cri.go:89] found id: ""
	I0318 13:51:16.898120 1157708 logs.go:276] 0 containers: []
	W0318 13:51:16.898129 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:16.898135 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:16.898203 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:16.953180 1157708 cri.go:89] found id: ""
	I0318 13:51:16.953209 1157708 logs.go:276] 0 containers: []
	W0318 13:51:16.953221 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:16.953229 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:16.953288 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:17.006995 1157708 cri.go:89] found id: ""
	I0318 13:51:17.007048 1157708 logs.go:276] 0 containers: []
	W0318 13:51:17.007062 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:17.007070 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:17.007136 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:17.049756 1157708 cri.go:89] found id: ""
	I0318 13:51:17.049798 1157708 logs.go:276] 0 containers: []
	W0318 13:51:17.049809 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:17.049817 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:17.049885 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:17.092026 1157708 cri.go:89] found id: ""
	I0318 13:51:17.092055 1157708 logs.go:276] 0 containers: []
	W0318 13:51:17.092066 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:17.092074 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:17.092144 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:17.137722 1157708 cri.go:89] found id: ""
	I0318 13:51:17.137756 1157708 logs.go:276] 0 containers: []
	W0318 13:51:17.137769 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:17.137778 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:17.137875 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:17.180778 1157708 cri.go:89] found id: ""
	I0318 13:51:17.180808 1157708 logs.go:276] 0 containers: []
	W0318 13:51:17.180816 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:17.180822 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:17.180885 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:17.227629 1157708 cri.go:89] found id: ""
	I0318 13:51:17.227664 1157708 logs.go:276] 0 containers: []
	W0318 13:51:17.227675 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:17.227688 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:17.227706 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:17.272559 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:17.272588 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:17.333953 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:17.333994 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:17.349765 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:17.349793 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:17.434436 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:17.434465 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:17.434483 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:14.650201 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:17.151069 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:15.208570 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:17.210168 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:19.707753 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:18.808252 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:21.309389 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:20.014314 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:20.031106 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:20.031172 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:20.067727 1157708 cri.go:89] found id: ""
	I0318 13:51:20.067753 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.067765 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:20.067773 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:20.067844 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:20.108455 1157708 cri.go:89] found id: ""
	I0318 13:51:20.108482 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.108491 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:20.108497 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:20.108563 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:20.152257 1157708 cri.go:89] found id: ""
	I0318 13:51:20.152285 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.152310 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:20.152317 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:20.152394 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:20.191480 1157708 cri.go:89] found id: ""
	I0318 13:51:20.191509 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.191520 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:20.191529 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:20.191599 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:20.235677 1157708 cri.go:89] found id: ""
	I0318 13:51:20.235705 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.235716 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:20.235723 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:20.235796 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:20.274794 1157708 cri.go:89] found id: ""
	I0318 13:51:20.274822 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.274833 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:20.274842 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:20.274907 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:20.321987 1157708 cri.go:89] found id: ""
	I0318 13:51:20.322019 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.322031 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:20.322040 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:20.322097 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:20.361292 1157708 cri.go:89] found id: ""
	I0318 13:51:20.361319 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.361328 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:20.361338 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:20.361360 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:20.434481 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:20.434509 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:20.434527 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:20.518203 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:20.518244 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:20.560241 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:20.560271 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:20.615489 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:20.615526 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:19.151244 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:21.151320 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:23.651849 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:21.708423 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:24.207976 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:23.310491 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:25.808443 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:23.132509 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:23.146447 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:23.146559 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:23.189576 1157708 cri.go:89] found id: ""
	I0318 13:51:23.189613 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.189625 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:23.189634 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:23.189688 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:23.229700 1157708 cri.go:89] found id: ""
	I0318 13:51:23.229731 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.229740 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:23.229747 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:23.229812 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:23.272713 1157708 cri.go:89] found id: ""
	I0318 13:51:23.272747 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.272759 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:23.272768 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:23.272834 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:23.313988 1157708 cri.go:89] found id: ""
	I0318 13:51:23.314014 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.314022 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:23.314028 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:23.314087 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:23.360195 1157708 cri.go:89] found id: ""
	I0318 13:51:23.360230 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.360243 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:23.360251 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:23.360321 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:23.400657 1157708 cri.go:89] found id: ""
	I0318 13:51:23.400685 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.400694 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:23.400707 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:23.400760 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:23.442841 1157708 cri.go:89] found id: ""
	I0318 13:51:23.442873 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.442893 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:23.442900 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:23.442970 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:23.483467 1157708 cri.go:89] found id: ""
	I0318 13:51:23.483504 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.483516 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:23.483528 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:23.483545 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:23.538581 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:23.538616 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:23.555392 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:23.555421 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:23.634919 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:23.634945 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:23.634970 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:23.718098 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:23.718144 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:26.270369 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:26.287165 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:26.287232 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:26.331773 1157708 cri.go:89] found id: ""
	I0318 13:51:26.331807 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.331832 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:26.331850 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:26.331923 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:26.372067 1157708 cri.go:89] found id: ""
	I0318 13:51:26.372095 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.372102 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:26.372109 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:26.372182 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:26.411883 1157708 cri.go:89] found id: ""
	I0318 13:51:26.411910 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.411919 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:26.411924 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:26.411980 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:26.449087 1157708 cri.go:89] found id: ""
	I0318 13:51:26.449122 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.449131 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:26.449137 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:26.449188 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:26.492126 1157708 cri.go:89] found id: ""
	I0318 13:51:26.492162 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.492174 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:26.492182 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:26.492251 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:26.529621 1157708 cri.go:89] found id: ""
	I0318 13:51:26.529656 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.529668 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:26.529677 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:26.529764 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:26.568853 1157708 cri.go:89] found id: ""
	I0318 13:51:26.568888 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.568899 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:26.568907 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:26.568979 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:26.607882 1157708 cri.go:89] found id: ""
	I0318 13:51:26.607917 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.607929 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:26.607942 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:26.607959 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:26.648736 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:26.648768 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:26.704641 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:26.704684 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:26.720681 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:26.720715 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:26.799577 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:26.799608 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:26.799627 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:26.152083 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:28.651445 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:26.208160 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:28.708468 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:28.309859 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:30.806690 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:29.389391 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:29.404122 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:29.404195 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:29.446761 1157708 cri.go:89] found id: ""
	I0318 13:51:29.446787 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.446796 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:29.446803 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:29.446857 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:29.483974 1157708 cri.go:89] found id: ""
	I0318 13:51:29.484007 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.484020 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:29.484028 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:29.484099 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:29.521894 1157708 cri.go:89] found id: ""
	I0318 13:51:29.521922 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.521931 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:29.521937 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:29.521993 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:29.562918 1157708 cri.go:89] found id: ""
	I0318 13:51:29.562948 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.562957 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:29.562963 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:29.563017 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:29.600372 1157708 cri.go:89] found id: ""
	I0318 13:51:29.600412 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.600424 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:29.600432 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:29.600500 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:29.638902 1157708 cri.go:89] found id: ""
	I0318 13:51:29.638933 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.638945 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:29.638953 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:29.639019 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:29.679041 1157708 cri.go:89] found id: ""
	I0318 13:51:29.679071 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.679079 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:29.679085 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:29.679142 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:29.719168 1157708 cri.go:89] found id: ""
	I0318 13:51:29.719201 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.719213 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:29.719224 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:29.719244 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:29.764050 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:29.764077 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:29.822136 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:29.822174 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:29.839485 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:29.839515 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:29.914984 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:29.915006 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:29.915023 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:32.497388 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:32.512151 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:32.512215 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:32.549566 1157708 cri.go:89] found id: ""
	I0318 13:51:32.549602 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.549614 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:32.549623 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:32.549693 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:32.588516 1157708 cri.go:89] found id: ""
	I0318 13:51:32.588546 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.588555 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:32.588562 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:32.588615 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:32.628425 1157708 cri.go:89] found id: ""
	I0318 13:51:32.628453 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.628462 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:32.628470 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:32.628546 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:32.670851 1157708 cri.go:89] found id: ""
	I0318 13:51:32.670874 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.670888 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:32.670895 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:32.670944 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:32.709614 1157708 cri.go:89] found id: ""
	I0318 13:51:32.709642 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.709656 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:32.709666 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:32.709738 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:32.749774 1157708 cri.go:89] found id: ""
	I0318 13:51:32.749808 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.749819 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:32.749828 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:32.749896 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:32.789502 1157708 cri.go:89] found id: ""
	I0318 13:51:32.789525 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.789534 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:32.789540 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:32.789589 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:32.834926 1157708 cri.go:89] found id: ""
	I0318 13:51:32.834948 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.834956 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:32.834965 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:32.834980 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:32.887365 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:32.887404 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:32.903584 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:32.903610 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:32.978924 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:32.978958 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:32.978988 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:31.151276 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:33.651395 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:30.709136 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:32.709549 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:32.808076 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:35.308827 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:33.055386 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:33.055424 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:35.603881 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:35.618083 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:35.618167 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:35.659760 1157708 cri.go:89] found id: ""
	I0318 13:51:35.659802 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.659814 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:35.659820 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:35.659881 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:35.703521 1157708 cri.go:89] found id: ""
	I0318 13:51:35.703570 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.703582 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:35.703589 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:35.703651 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:35.744411 1157708 cri.go:89] found id: ""
	I0318 13:51:35.744444 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.744455 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:35.744463 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:35.744548 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:35.783704 1157708 cri.go:89] found id: ""
	I0318 13:51:35.783735 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.783746 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:35.783754 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:35.783819 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:35.824000 1157708 cri.go:89] found id: ""
	I0318 13:51:35.824031 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.824042 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:35.824049 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:35.824117 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:35.860260 1157708 cri.go:89] found id: ""
	I0318 13:51:35.860289 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.860299 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:35.860308 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:35.860388 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:35.895154 1157708 cri.go:89] found id: ""
	I0318 13:51:35.895189 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.895201 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:35.895209 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:35.895276 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:35.936916 1157708 cri.go:89] found id: ""
	I0318 13:51:35.936942 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.936951 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:35.936961 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:35.936977 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:35.951715 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:35.951745 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:36.027431 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:36.027457 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:36.027474 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:36.113339 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:36.113386 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:36.160132 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:36.160170 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:36.151331 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:38.650891 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:35.208500 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:37.209692 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:39.709776 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:37.807423 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:39.809226 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
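The interleaved pod_ready.go:102 lines come from three other test processes (pids 1157263, 1157416 and 1157887), each polling a metrics-server pod that never reports Ready. A hedged sketch of checking the same condition by hand with kubectl (the test itself uses the Go client rather than this command; the pod name is the one shown in the log):

    kubectl -n kube-system get pod metrics-server-57f55c9bc5-5cv2z \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # prints False while the pod is unready, matching the "Ready":"False" status logged above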
	I0318 13:51:38.711710 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:38.726104 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:38.726162 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:38.763251 1157708 cri.go:89] found id: ""
	I0318 13:51:38.763281 1157708 logs.go:276] 0 containers: []
	W0318 13:51:38.763291 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:38.763300 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:38.763364 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:38.802521 1157708 cri.go:89] found id: ""
	I0318 13:51:38.802548 1157708 logs.go:276] 0 containers: []
	W0318 13:51:38.802556 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:38.802562 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:38.802616 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:38.843778 1157708 cri.go:89] found id: ""
	I0318 13:51:38.843817 1157708 logs.go:276] 0 containers: []
	W0318 13:51:38.843831 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:38.843839 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:38.843909 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:38.884966 1157708 cri.go:89] found id: ""
	I0318 13:51:38.885003 1157708 logs.go:276] 0 containers: []
	W0318 13:51:38.885015 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:38.885024 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:38.885090 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:38.925653 1157708 cri.go:89] found id: ""
	I0318 13:51:38.925681 1157708 logs.go:276] 0 containers: []
	W0318 13:51:38.925690 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:38.925696 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:38.925757 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:38.964126 1157708 cri.go:89] found id: ""
	I0318 13:51:38.964156 1157708 logs.go:276] 0 containers: []
	W0318 13:51:38.964169 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:38.964177 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:38.964228 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:39.004864 1157708 cri.go:89] found id: ""
	I0318 13:51:39.004898 1157708 logs.go:276] 0 containers: []
	W0318 13:51:39.004910 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:39.004919 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:39.004991 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:39.041555 1157708 cri.go:89] found id: ""
	I0318 13:51:39.041588 1157708 logs.go:276] 0 containers: []
	W0318 13:51:39.041600 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:39.041611 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:39.041626 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:39.092984 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:39.093019 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:39.110492 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:39.110526 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:39.186785 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:39.186848 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:39.186872 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:39.272847 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:39.272891 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:41.829404 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:41.843407 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:41.843479 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:41.883129 1157708 cri.go:89] found id: ""
	I0318 13:51:41.883164 1157708 logs.go:276] 0 containers: []
	W0318 13:51:41.883175 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:41.883184 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:41.883246 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:41.924083 1157708 cri.go:89] found id: ""
	I0318 13:51:41.924123 1157708 logs.go:276] 0 containers: []
	W0318 13:51:41.924136 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:41.924144 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:41.924209 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:41.963029 1157708 cri.go:89] found id: ""
	I0318 13:51:41.963058 1157708 logs.go:276] 0 containers: []
	W0318 13:51:41.963069 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:41.963084 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:41.963155 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:42.003393 1157708 cri.go:89] found id: ""
	I0318 13:51:42.003430 1157708 logs.go:276] 0 containers: []
	W0318 13:51:42.003442 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:42.003450 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:42.003511 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:42.041938 1157708 cri.go:89] found id: ""
	I0318 13:51:42.041968 1157708 logs.go:276] 0 containers: []
	W0318 13:51:42.041977 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:42.041983 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:42.042044 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:42.079685 1157708 cri.go:89] found id: ""
	I0318 13:51:42.079718 1157708 logs.go:276] 0 containers: []
	W0318 13:51:42.079731 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:42.079740 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:42.079805 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:42.118112 1157708 cri.go:89] found id: ""
	I0318 13:51:42.118144 1157708 logs.go:276] 0 containers: []
	W0318 13:51:42.118156 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:42.118164 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:42.118230 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:42.157287 1157708 cri.go:89] found id: ""
	I0318 13:51:42.157319 1157708 logs.go:276] 0 containers: []
	W0318 13:51:42.157331 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:42.157343 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:42.157360 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:42.213006 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:42.213038 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:42.228452 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:42.228481 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:42.302523 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:42.302545 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:42.302558 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:42.387994 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:42.388062 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:40.651272 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:43.151009 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:42.208825 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:44.211676 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:42.310765 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:44.313778 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:44.934501 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:44.949163 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:44.949245 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:44.991885 1157708 cri.go:89] found id: ""
	I0318 13:51:44.991914 1157708 logs.go:276] 0 containers: []
	W0318 13:51:44.991924 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:44.991931 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:44.992008 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:45.029868 1157708 cri.go:89] found id: ""
	I0318 13:51:45.029904 1157708 logs.go:276] 0 containers: []
	W0318 13:51:45.029915 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:45.029922 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:45.030017 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:45.067755 1157708 cri.go:89] found id: ""
	I0318 13:51:45.067785 1157708 logs.go:276] 0 containers: []
	W0318 13:51:45.067794 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:45.067803 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:45.067857 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:45.106296 1157708 cri.go:89] found id: ""
	I0318 13:51:45.106323 1157708 logs.go:276] 0 containers: []
	W0318 13:51:45.106333 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:45.106339 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:45.106405 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:45.145746 1157708 cri.go:89] found id: ""
	I0318 13:51:45.145784 1157708 logs.go:276] 0 containers: []
	W0318 13:51:45.145797 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:45.145805 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:45.145868 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:45.191960 1157708 cri.go:89] found id: ""
	I0318 13:51:45.191998 1157708 logs.go:276] 0 containers: []
	W0318 13:51:45.192010 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:45.192019 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:45.192089 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:45.231436 1157708 cri.go:89] found id: ""
	I0318 13:51:45.231470 1157708 logs.go:276] 0 containers: []
	W0318 13:51:45.231483 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:45.231491 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:45.231559 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:45.274521 1157708 cri.go:89] found id: ""
	I0318 13:51:45.274554 1157708 logs.go:276] 0 containers: []
	W0318 13:51:45.274565 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:45.274577 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:45.274595 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:45.338539 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:45.338580 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:45.353917 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:45.353947 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:45.447734 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:45.447755 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:45.447768 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:45.530098 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:45.530140 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
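Every "describe nodes" attempt in these cycles fails the same way: the bundled kubectl cannot reach the API server on localhost:8443 because, as the crictl probes above keep showing, no kube-apiserver container exists on the node. A short sketch for confirming that on the guest, using the same paths the log already shows (the v1.20.0 binary and the standard minikube kubeconfig):

    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    # fails with: The connection to the server localhost:8443 was refused
    sudo crictl ps -a --name=kube-apiserver
    # empty output: the apiserver container was never created, so nothing is listening on 8443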
	I0318 13:51:45.653161 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:48.150841 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:46.708808 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:49.209076 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:46.808315 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:49.311406 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:48.077992 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:48.092203 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:48.092273 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:48.133136 1157708 cri.go:89] found id: ""
	I0318 13:51:48.133172 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.133183 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:48.133191 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:48.133259 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:48.177727 1157708 cri.go:89] found id: ""
	I0318 13:51:48.177756 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.177768 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:48.177775 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:48.177843 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:48.217574 1157708 cri.go:89] found id: ""
	I0318 13:51:48.217600 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.217608 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:48.217614 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:48.217676 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:48.258900 1157708 cri.go:89] found id: ""
	I0318 13:51:48.258933 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.258947 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:48.258955 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:48.259046 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:48.299527 1157708 cri.go:89] found id: ""
	I0318 13:51:48.299562 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.299573 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:48.299581 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:48.299650 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:48.339692 1157708 cri.go:89] found id: ""
	I0318 13:51:48.339723 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.339732 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:48.339740 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:48.339791 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:48.378737 1157708 cri.go:89] found id: ""
	I0318 13:51:48.378764 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.378773 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:48.378779 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:48.378841 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:48.414593 1157708 cri.go:89] found id: ""
	I0318 13:51:48.414621 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.414629 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:48.414639 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:48.414654 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:48.430232 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:48.430264 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:48.513313 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:48.513335 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:48.513353 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:48.594681 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:48.594721 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:48.638681 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:48.638720 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:51.189510 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:51.204296 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:51.204383 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:51.248285 1157708 cri.go:89] found id: ""
	I0318 13:51:51.248311 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.248331 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:51.248340 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:51.248414 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:51.289022 1157708 cri.go:89] found id: ""
	I0318 13:51:51.289055 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.289068 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:51.289077 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:51.289144 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:51.329367 1157708 cri.go:89] found id: ""
	I0318 13:51:51.329405 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.329414 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:51.329420 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:51.329477 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:51.370909 1157708 cri.go:89] found id: ""
	I0318 13:51:51.370948 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.370960 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:51.370970 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:51.371043 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:51.419447 1157708 cri.go:89] found id: ""
	I0318 13:51:51.419486 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.419498 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:51.419506 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:51.419573 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:51.466302 1157708 cri.go:89] found id: ""
	I0318 13:51:51.466336 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.466348 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:51.466356 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:51.466441 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:51.505593 1157708 cri.go:89] found id: ""
	I0318 13:51:51.505631 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.505644 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:51.505652 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:51.505724 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:51.543815 1157708 cri.go:89] found id: ""
	I0318 13:51:51.543843 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.543852 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:51.543863 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:51.543885 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:51.596271 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:51.596305 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:51.612441 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:51.612477 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:51.690591 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:51.690614 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:51.690631 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:51.771781 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:51.771821 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:50.650088 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:52.650307 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:51.710583 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:54.208629 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:51.808743 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:54.309915 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:54.319626 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:54.334041 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:54.334113 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:54.372090 1157708 cri.go:89] found id: ""
	I0318 13:51:54.372120 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.372132 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:54.372139 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:54.372196 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:54.412513 1157708 cri.go:89] found id: ""
	I0318 13:51:54.412567 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.412580 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:54.412588 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:54.412662 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:54.453143 1157708 cri.go:89] found id: ""
	I0318 13:51:54.453176 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.453188 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:54.453196 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:54.453262 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:54.497908 1157708 cri.go:89] found id: ""
	I0318 13:51:54.497940 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.497949 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:54.497957 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:54.498025 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:54.539044 1157708 cri.go:89] found id: ""
	I0318 13:51:54.539072 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.539081 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:54.539086 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:54.539151 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:54.578916 1157708 cri.go:89] found id: ""
	I0318 13:51:54.578944 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.578951 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:54.578958 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:54.579027 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:54.617339 1157708 cri.go:89] found id: ""
	I0318 13:51:54.617366 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.617375 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:54.617380 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:54.617436 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:54.661288 1157708 cri.go:89] found id: ""
	I0318 13:51:54.661309 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.661318 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:54.661328 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:54.661344 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:54.740710 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:54.740751 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:54.789136 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:54.789176 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:54.844585 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:54.844627 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:54.860304 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:54.860351 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:54.945305 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:57.445800 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:57.459294 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:57.459368 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:57.497411 1157708 cri.go:89] found id: ""
	I0318 13:51:57.497441 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.497449 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:57.497456 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:57.497521 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:57.535629 1157708 cri.go:89] found id: ""
	I0318 13:51:57.535663 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.535675 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:57.535684 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:57.535749 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:57.572980 1157708 cri.go:89] found id: ""
	I0318 13:51:57.573008 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.573017 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:57.573023 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:57.573071 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:57.622949 1157708 cri.go:89] found id: ""
	I0318 13:51:57.622984 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.622997 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:57.623005 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:57.623070 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:57.659877 1157708 cri.go:89] found id: ""
	I0318 13:51:57.659910 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.659921 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:57.659928 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:57.659991 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:57.705399 1157708 cri.go:89] found id: ""
	I0318 13:51:57.705481 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.705495 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:57.705504 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:57.705566 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:57.748035 1157708 cri.go:89] found id: ""
	I0318 13:51:57.748062 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.748073 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:57.748084 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:57.748144 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:57.801942 1157708 cri.go:89] found id: ""
	I0318 13:51:57.801976 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.801987 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:57.801999 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:57.802017 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:57.900157 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:57.900204 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:57.946179 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:57.946219 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:54.651363 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:57.151268 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:56.208925 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:58.708089 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:56.807605 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:58.808479 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:01.307740 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:58.000369 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:58.000412 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:58.016179 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:58.016211 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:58.101766 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:00.602151 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:00.617466 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:00.617531 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:00.661294 1157708 cri.go:89] found id: ""
	I0318 13:52:00.661328 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.661336 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:00.661342 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:00.661400 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:00.706227 1157708 cri.go:89] found id: ""
	I0318 13:52:00.706257 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.706267 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:00.706275 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:00.706342 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:00.746482 1157708 cri.go:89] found id: ""
	I0318 13:52:00.746515 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.746528 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:00.746536 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:00.746600 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:00.789242 1157708 cri.go:89] found id: ""
	I0318 13:52:00.789272 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.789281 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:00.789287 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:00.789348 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:00.832463 1157708 cri.go:89] found id: ""
	I0318 13:52:00.832503 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.832514 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:00.832522 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:00.832581 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:00.869790 1157708 cri.go:89] found id: ""
	I0318 13:52:00.869819 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.869830 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:00.869839 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:00.869904 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:00.909656 1157708 cri.go:89] found id: ""
	I0318 13:52:00.909685 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.909693 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:00.909700 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:00.909754 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:00.953818 1157708 cri.go:89] found id: ""
	I0318 13:52:00.953856 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.953868 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:00.953882 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:00.953898 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:01.032822 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:01.032848 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:01.032865 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:01.111701 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:01.111747 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:01.168270 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:01.168300 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:01.220376 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:01.220408 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:59.650359 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:01.650627 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:03.651830 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:00.709561 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:03.207829 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:03.808915 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:06.307915 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:03.737354 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:03.756282 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:03.756382 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:03.804716 1157708 cri.go:89] found id: ""
	I0318 13:52:03.804757 1157708 logs.go:276] 0 containers: []
	W0318 13:52:03.804768 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:03.804777 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:03.804838 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:03.864559 1157708 cri.go:89] found id: ""
	I0318 13:52:03.864596 1157708 logs.go:276] 0 containers: []
	W0318 13:52:03.864609 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:03.864617 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:03.864687 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:03.918397 1157708 cri.go:89] found id: ""
	I0318 13:52:03.918425 1157708 logs.go:276] 0 containers: []
	W0318 13:52:03.918433 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:03.918439 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:03.918504 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:03.961729 1157708 cri.go:89] found id: ""
	I0318 13:52:03.961762 1157708 logs.go:276] 0 containers: []
	W0318 13:52:03.961773 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:03.961780 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:03.961856 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:04.006261 1157708 cri.go:89] found id: ""
	I0318 13:52:04.006299 1157708 logs.go:276] 0 containers: []
	W0318 13:52:04.006311 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:04.006319 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:04.006404 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:04.050284 1157708 cri.go:89] found id: ""
	I0318 13:52:04.050313 1157708 logs.go:276] 0 containers: []
	W0318 13:52:04.050321 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:04.050327 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:04.050384 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:04.093789 1157708 cri.go:89] found id: ""
	I0318 13:52:04.093827 1157708 logs.go:276] 0 containers: []
	W0318 13:52:04.093839 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:04.093847 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:04.093916 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:04.135047 1157708 cri.go:89] found id: ""
	I0318 13:52:04.135091 1157708 logs.go:276] 0 containers: []
	W0318 13:52:04.135110 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:04.135124 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:04.135142 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:04.192899 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:04.192937 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:04.209080 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:04.209130 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:04.286388 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:04.286413 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:04.286428 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:04.371836 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:04.371877 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:06.923039 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:06.938743 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:06.938826 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:06.984600 1157708 cri.go:89] found id: ""
	I0318 13:52:06.984634 1157708 logs.go:276] 0 containers: []
	W0318 13:52:06.984646 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:06.984655 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:06.984721 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:07.023849 1157708 cri.go:89] found id: ""
	I0318 13:52:07.023891 1157708 logs.go:276] 0 containers: []
	W0318 13:52:07.023914 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:07.023922 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:07.023984 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:07.071972 1157708 cri.go:89] found id: ""
	I0318 13:52:07.072002 1157708 logs.go:276] 0 containers: []
	W0318 13:52:07.072015 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:07.072022 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:07.072087 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:07.109070 1157708 cri.go:89] found id: ""
	I0318 13:52:07.109105 1157708 logs.go:276] 0 containers: []
	W0318 13:52:07.109118 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:07.109126 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:07.109183 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:07.149879 1157708 cri.go:89] found id: ""
	I0318 13:52:07.149910 1157708 logs.go:276] 0 containers: []
	W0318 13:52:07.149918 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:07.149925 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:07.149990 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:07.195946 1157708 cri.go:89] found id: ""
	I0318 13:52:07.195976 1157708 logs.go:276] 0 containers: []
	W0318 13:52:07.195987 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:07.195995 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:07.196062 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:07.238126 1157708 cri.go:89] found id: ""
	I0318 13:52:07.238152 1157708 logs.go:276] 0 containers: []
	W0318 13:52:07.238162 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:07.238168 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:07.238233 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:07.278218 1157708 cri.go:89] found id: ""
	I0318 13:52:07.278255 1157708 logs.go:276] 0 containers: []
	W0318 13:52:07.278268 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:07.278282 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:07.278300 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:07.294926 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:07.294955 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:07.383431 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:07.383455 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:07.383468 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:07.467306 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:07.467348 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:07.515996 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:07.516028 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:06.151546 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:08.162392 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:05.208765 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:07.210243 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:09.708076 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:08.309045 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:10.807773 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:10.071945 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:10.088587 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:10.088654 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:10.130528 1157708 cri.go:89] found id: ""
	I0318 13:52:10.130566 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.130579 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:10.130588 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:10.130663 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:10.173113 1157708 cri.go:89] found id: ""
	I0318 13:52:10.173150 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.173168 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:10.173178 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:10.173243 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:10.218941 1157708 cri.go:89] found id: ""
	I0318 13:52:10.218976 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.218987 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:10.218996 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:10.219068 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:10.262331 1157708 cri.go:89] found id: ""
	I0318 13:52:10.262368 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.262381 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:10.262389 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:10.262460 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:10.303329 1157708 cri.go:89] found id: ""
	I0318 13:52:10.303363 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.303378 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:10.303386 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:10.303457 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:10.344458 1157708 cri.go:89] found id: ""
	I0318 13:52:10.344486 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.344497 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:10.344505 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:10.344567 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:10.386753 1157708 cri.go:89] found id: ""
	I0318 13:52:10.386786 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.386797 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:10.386806 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:10.386876 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:10.425922 1157708 cri.go:89] found id: ""
	I0318 13:52:10.425954 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.425965 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:10.425978 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:10.426000 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:10.441134 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:10.441168 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:10.514865 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:10.514899 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:10.514916 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:10.592061 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:10.592105 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:10.642900 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:10.642935 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:10.651432 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:13.150537 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:12.208498 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:14.209684 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:12.808250 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:15.308639 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:13.199176 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:13.215155 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:13.215232 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:13.256107 1157708 cri.go:89] found id: ""
	I0318 13:52:13.256139 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.256151 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:13.256160 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:13.256231 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:13.296562 1157708 cri.go:89] found id: ""
	I0318 13:52:13.296597 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.296608 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:13.296615 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:13.296667 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:13.336633 1157708 cri.go:89] found id: ""
	I0318 13:52:13.336662 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.336672 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:13.336678 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:13.336737 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:13.382597 1157708 cri.go:89] found id: ""
	I0318 13:52:13.382639 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.382654 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:13.382663 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:13.382733 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:13.430257 1157708 cri.go:89] found id: ""
	I0318 13:52:13.430292 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.430304 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:13.430312 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:13.430373 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:13.466854 1157708 cri.go:89] found id: ""
	I0318 13:52:13.466881 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.466889 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:13.466896 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:13.466945 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:13.510297 1157708 cri.go:89] found id: ""
	I0318 13:52:13.510333 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.510344 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:13.510352 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:13.510420 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:13.551476 1157708 cri.go:89] found id: ""
	I0318 13:52:13.551508 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.551517 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:13.551528 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:13.551542 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:13.634561 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:13.634585 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:13.634598 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:13.720088 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:13.720129 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:13.760621 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:13.760659 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:13.817311 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:13.817350 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:16.334094 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:16.349779 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:16.349866 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:16.394131 1157708 cri.go:89] found id: ""
	I0318 13:52:16.394157 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.394167 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:16.394175 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:16.394239 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:16.438185 1157708 cri.go:89] found id: ""
	I0318 13:52:16.438232 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.438245 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:16.438264 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:16.438335 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:16.476872 1157708 cri.go:89] found id: ""
	I0318 13:52:16.476920 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.476932 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:16.476939 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:16.477007 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:16.518226 1157708 cri.go:89] found id: ""
	I0318 13:52:16.518253 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.518262 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:16.518269 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:16.518327 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:16.559119 1157708 cri.go:89] found id: ""
	I0318 13:52:16.559160 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.559174 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:16.559182 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:16.559260 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:16.600050 1157708 cri.go:89] found id: ""
	I0318 13:52:16.600079 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.600088 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:16.600094 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:16.600160 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:16.640621 1157708 cri.go:89] found id: ""
	I0318 13:52:16.640649 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.640660 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:16.640668 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:16.640733 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:16.680541 1157708 cri.go:89] found id: ""
	I0318 13:52:16.680571 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.680580 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:16.680590 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:16.680602 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:16.766378 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:16.766415 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:16.811846 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:16.811883 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:16.871940 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:16.871981 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:16.887494 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:16.887521 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:16.961924 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:15.650599 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:17.650902 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:16.710336 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:19.207426 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:17.807338 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:19.809418 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:19.462316 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:19.478819 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:19.478885 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:19.523280 1157708 cri.go:89] found id: ""
	I0318 13:52:19.523314 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.523334 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:19.523342 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:19.523417 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:19.560675 1157708 cri.go:89] found id: ""
	I0318 13:52:19.560708 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.560717 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:19.560725 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:19.560790 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:19.598739 1157708 cri.go:89] found id: ""
	I0318 13:52:19.598766 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.598773 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:19.598781 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:19.598846 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:19.639928 1157708 cri.go:89] found id: ""
	I0318 13:52:19.639960 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.639969 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:19.639975 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:19.640030 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:19.686084 1157708 cri.go:89] found id: ""
	I0318 13:52:19.686134 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.686153 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:19.686160 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:19.686231 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:19.725449 1157708 cri.go:89] found id: ""
	I0318 13:52:19.725481 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.725491 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:19.725497 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:19.725559 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:19.763855 1157708 cri.go:89] found id: ""
	I0318 13:52:19.763886 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.763897 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:19.763905 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:19.763976 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:19.805783 1157708 cri.go:89] found id: ""
	I0318 13:52:19.805813 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.805824 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:19.805836 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:19.805852 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:19.883873 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:19.883914 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:19.926368 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:19.926406 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:19.981137 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:19.981181 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:19.996242 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:19.996269 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:20.077880 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:22.578045 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:22.594170 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:22.594247 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:22.637241 1157708 cri.go:89] found id: ""
	I0318 13:52:22.637276 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.637289 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:22.637298 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:22.637363 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:22.679877 1157708 cri.go:89] found id: ""
	I0318 13:52:22.679904 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.679912 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:22.679918 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:22.679981 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:22.721865 1157708 cri.go:89] found id: ""
	I0318 13:52:22.721890 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.721903 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:22.721912 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:22.721982 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:22.763208 1157708 cri.go:89] found id: ""
	I0318 13:52:22.763242 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.763255 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:22.763264 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:22.763329 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:22.802038 1157708 cri.go:89] found id: ""
	I0318 13:52:22.802071 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.802081 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:22.802089 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:22.802170 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:22.841206 1157708 cri.go:89] found id: ""
	I0318 13:52:22.841242 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.841254 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:22.841263 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:22.841328 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:22.885159 1157708 cri.go:89] found id: ""
	I0318 13:52:22.885197 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.885209 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:22.885218 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:22.885289 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:22.925346 1157708 cri.go:89] found id: ""
	I0318 13:52:22.925373 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.925382 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:22.925391 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:22.925407 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:19.654611 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:22.152365 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:21.208979 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:23.210660 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:22.308290 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:24.310006 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:23.006158 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:23.006193 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:23.053932 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:23.053961 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:23.107728 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:23.107768 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:23.125708 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:23.125740 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:23.202609 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:25.703096 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:25.718617 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:25.718689 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:25.756504 1157708 cri.go:89] found id: ""
	I0318 13:52:25.756530 1157708 logs.go:276] 0 containers: []
	W0318 13:52:25.756538 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:25.756544 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:25.756608 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:25.795103 1157708 cri.go:89] found id: ""
	I0318 13:52:25.795140 1157708 logs.go:276] 0 containers: []
	W0318 13:52:25.795152 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:25.795160 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:25.795240 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:25.839908 1157708 cri.go:89] found id: ""
	I0318 13:52:25.839945 1157708 logs.go:276] 0 containers: []
	W0318 13:52:25.839957 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:25.839971 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:25.840038 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:25.881677 1157708 cri.go:89] found id: ""
	I0318 13:52:25.881711 1157708 logs.go:276] 0 containers: []
	W0318 13:52:25.881723 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:25.881732 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:25.881802 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:25.923356 1157708 cri.go:89] found id: ""
	I0318 13:52:25.923386 1157708 logs.go:276] 0 containers: []
	W0318 13:52:25.923397 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:25.923410 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:25.923469 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:25.961661 1157708 cri.go:89] found id: ""
	I0318 13:52:25.961693 1157708 logs.go:276] 0 containers: []
	W0318 13:52:25.961705 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:25.961713 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:25.961785 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:26.003198 1157708 cri.go:89] found id: ""
	I0318 13:52:26.003236 1157708 logs.go:276] 0 containers: []
	W0318 13:52:26.003248 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:26.003256 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:26.003319 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:26.041436 1157708 cri.go:89] found id: ""
	I0318 13:52:26.041471 1157708 logs.go:276] 0 containers: []
	W0318 13:52:26.041483 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:26.041496 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:26.041515 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:26.056679 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:26.056716 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:26.143900 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:26.143926 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:26.143946 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:26.226929 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:26.226964 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:26.288519 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:26.288560 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:24.652661 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:27.152317 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:25.708488 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:27.708931 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:26.807624 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:28.809030 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:31.308980 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:28.846205 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:28.861117 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:28.861190 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:28.906990 1157708 cri.go:89] found id: ""
	I0318 13:52:28.907022 1157708 logs.go:276] 0 containers: []
	W0318 13:52:28.907030 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:28.907036 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:28.907099 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:28.946271 1157708 cri.go:89] found id: ""
	I0318 13:52:28.946309 1157708 logs.go:276] 0 containers: []
	W0318 13:52:28.946322 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:28.946332 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:28.946403 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:28.990158 1157708 cri.go:89] found id: ""
	I0318 13:52:28.990185 1157708 logs.go:276] 0 containers: []
	W0318 13:52:28.990193 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:28.990199 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:28.990251 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:29.035089 1157708 cri.go:89] found id: ""
	I0318 13:52:29.035123 1157708 logs.go:276] 0 containers: []
	W0318 13:52:29.035134 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:29.035143 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:29.035209 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:29.076991 1157708 cri.go:89] found id: ""
	I0318 13:52:29.077022 1157708 logs.go:276] 0 containers: []
	W0318 13:52:29.077033 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:29.077041 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:29.077104 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:29.117106 1157708 cri.go:89] found id: ""
	I0318 13:52:29.117134 1157708 logs.go:276] 0 containers: []
	W0318 13:52:29.117150 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:29.117157 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:29.117209 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:29.159675 1157708 cri.go:89] found id: ""
	I0318 13:52:29.159704 1157708 logs.go:276] 0 containers: []
	W0318 13:52:29.159714 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:29.159722 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:29.159787 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:29.202130 1157708 cri.go:89] found id: ""
	I0318 13:52:29.202157 1157708 logs.go:276] 0 containers: []
	W0318 13:52:29.202166 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:29.202176 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:29.202189 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:29.258343 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:29.258390 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:29.275314 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:29.275360 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:29.359842 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:29.359989 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:29.360036 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:29.446021 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:29.446072 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:31.990431 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:32.007443 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:32.007508 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:32.051028 1157708 cri.go:89] found id: ""
	I0318 13:52:32.051061 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.051070 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:32.051076 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:32.051144 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:32.092914 1157708 cri.go:89] found id: ""
	I0318 13:52:32.092950 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.092962 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:32.092972 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:32.093045 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:32.154257 1157708 cri.go:89] found id: ""
	I0318 13:52:32.154291 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.154302 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:32.154309 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:32.154375 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:32.200185 1157708 cri.go:89] found id: ""
	I0318 13:52:32.200224 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.200236 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:32.200244 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:32.200309 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:32.248927 1157708 cri.go:89] found id: ""
	I0318 13:52:32.248961 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.248974 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:32.248982 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:32.249051 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:32.289829 1157708 cri.go:89] found id: ""
	I0318 13:52:32.289861 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.289870 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:32.289876 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:32.289934 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:32.334346 1157708 cri.go:89] found id: ""
	I0318 13:52:32.334379 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.334387 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:32.334393 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:32.334457 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:32.378718 1157708 cri.go:89] found id: ""
	I0318 13:52:32.378761 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.378770 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:32.378780 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:32.378795 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:32.434626 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:32.434667 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:32.451366 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:32.451402 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:32.532868 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:32.532907 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:32.532924 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:32.617556 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:32.617597 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:29.650409 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:31.651019 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:30.207993 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:32.214101 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:34.710602 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:33.807499 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:35.807738 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:35.165067 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:35.181325 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:35.181404 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:35.220570 1157708 cri.go:89] found id: ""
	I0318 13:52:35.220601 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.220612 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:35.220619 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:35.220684 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:35.263798 1157708 cri.go:89] found id: ""
	I0318 13:52:35.263830 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.263841 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:35.263848 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:35.263915 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:35.309447 1157708 cri.go:89] found id: ""
	I0318 13:52:35.309477 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.309489 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:35.309497 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:35.309567 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:35.353444 1157708 cri.go:89] found id: ""
	I0318 13:52:35.353472 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.353484 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:35.353493 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:35.353556 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:35.394563 1157708 cri.go:89] found id: ""
	I0318 13:52:35.394591 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.394599 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:35.394604 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:35.394662 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:35.433866 1157708 cri.go:89] found id: ""
	I0318 13:52:35.433899 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.433908 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:35.433915 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:35.433970 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:35.482769 1157708 cri.go:89] found id: ""
	I0318 13:52:35.482808 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.482820 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:35.482829 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:35.482899 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:35.521465 1157708 cri.go:89] found id: ""
	I0318 13:52:35.521498 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.521509 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:35.521520 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:35.521534 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:35.577759 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:35.577799 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:35.593052 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:35.593084 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:35.672751 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:35.672773 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:35.672787 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:35.752118 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:35.752171 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:34.157429 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:36.650725 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:38.652096 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:37.209435 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:39.710020 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:38.312679 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:40.807379 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:38.296677 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:38.312261 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:38.312365 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:38.350328 1157708 cri.go:89] found id: ""
	I0318 13:52:38.350362 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.350374 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:38.350382 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:38.350457 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:38.389891 1157708 cri.go:89] found id: ""
	I0318 13:52:38.389927 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.389939 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:38.389947 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:38.390005 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:38.430268 1157708 cri.go:89] found id: ""
	I0318 13:52:38.430296 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.430305 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:38.430311 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:38.430365 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:38.470830 1157708 cri.go:89] found id: ""
	I0318 13:52:38.470859 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.470873 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:38.470880 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:38.470945 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:38.510501 1157708 cri.go:89] found id: ""
	I0318 13:52:38.510538 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.510552 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:38.510560 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:38.510618 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:38.594899 1157708 cri.go:89] found id: ""
	I0318 13:52:38.594926 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.594935 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:38.594942 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:38.595021 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:38.649095 1157708 cri.go:89] found id: ""
	I0318 13:52:38.649121 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.649129 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:38.649136 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:38.649192 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:38.695263 1157708 cri.go:89] found id: ""
	I0318 13:52:38.695295 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.695307 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:38.695320 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:38.695336 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:38.780624 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:38.780666 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:38.825294 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:38.825335 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:38.877548 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:38.877596 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:38.893289 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:38.893319 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:38.971752 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:41.472865 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:41.487371 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:41.487484 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:41.524691 1157708 cri.go:89] found id: ""
	I0318 13:52:41.524724 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.524737 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:41.524746 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:41.524812 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:41.564094 1157708 cri.go:89] found id: ""
	I0318 13:52:41.564125 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.564137 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:41.564145 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:41.564210 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:41.600019 1157708 cri.go:89] found id: ""
	I0318 13:52:41.600047 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.600058 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:41.600064 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:41.600142 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:41.638320 1157708 cri.go:89] found id: ""
	I0318 13:52:41.638350 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.638363 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:41.638372 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:41.638438 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:41.680763 1157708 cri.go:89] found id: ""
	I0318 13:52:41.680798 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.680810 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:41.680818 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:41.680894 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:41.720645 1157708 cri.go:89] found id: ""
	I0318 13:52:41.720674 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.720683 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:41.720690 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:41.720741 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:41.759121 1157708 cri.go:89] found id: ""
	I0318 13:52:41.759151 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.759185 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:41.759195 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:41.759264 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:41.797006 1157708 cri.go:89] found id: ""
	I0318 13:52:41.797034 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.797043 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:41.797053 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:41.797070 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:41.853315 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:41.853353 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:41.869920 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:41.869952 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:41.947187 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:41.947219 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:41.947235 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:42.025475 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:42.025515 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:41.151466 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:43.153616 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:42.207999 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:44.709760 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:43.310812 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:45.808394 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:44.574724 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:44.598990 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:44.599068 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:44.649051 1157708 cri.go:89] found id: ""
	I0318 13:52:44.649137 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.649168 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:44.649180 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:44.649254 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:44.686423 1157708 cri.go:89] found id: ""
	I0318 13:52:44.686459 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.686468 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:44.686473 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:44.686536 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:44.726534 1157708 cri.go:89] found id: ""
	I0318 13:52:44.726564 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.726575 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:44.726583 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:44.726653 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:44.771190 1157708 cri.go:89] found id: ""
	I0318 13:52:44.771220 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.771232 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:44.771240 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:44.771311 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:44.811577 1157708 cri.go:89] found id: ""
	I0318 13:52:44.811602 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.811611 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:44.811618 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:44.811677 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:44.850717 1157708 cri.go:89] found id: ""
	I0318 13:52:44.850744 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.850756 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:44.850765 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:44.850824 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:44.890294 1157708 cri.go:89] found id: ""
	I0318 13:52:44.890321 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.890330 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:44.890344 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:44.890401 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:44.930690 1157708 cri.go:89] found id: ""
	I0318 13:52:44.930720 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.930730 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:44.930741 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:44.930757 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:44.946509 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:44.946544 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:45.029748 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:45.029777 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:45.029795 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:45.111348 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:45.111392 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:45.165156 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:45.165193 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:47.720701 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:47.734457 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:47.734520 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:47.771273 1157708 cri.go:89] found id: ""
	I0318 13:52:47.771304 1157708 logs.go:276] 0 containers: []
	W0318 13:52:47.771313 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:47.771319 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:47.771370 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:47.813779 1157708 cri.go:89] found id: ""
	I0318 13:52:47.813806 1157708 logs.go:276] 0 containers: []
	W0318 13:52:47.813816 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:47.813824 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:47.813892 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:47.855547 1157708 cri.go:89] found id: ""
	I0318 13:52:47.855576 1157708 logs.go:276] 0 containers: []
	W0318 13:52:47.855584 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:47.855590 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:47.855640 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:47.892651 1157708 cri.go:89] found id: ""
	I0318 13:52:47.892684 1157708 logs.go:276] 0 containers: []
	W0318 13:52:47.892692 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:47.892697 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:47.892752 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:47.935457 1157708 cri.go:89] found id: ""
	I0318 13:52:47.935488 1157708 logs.go:276] 0 containers: []
	W0318 13:52:47.935498 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:47.935505 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:47.935567 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:47.969335 1157708 cri.go:89] found id: ""
	I0318 13:52:47.969361 1157708 logs.go:276] 0 containers: []
	W0318 13:52:47.969370 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:47.969377 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:47.969441 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:45.651171 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:48.151833 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:47.209014 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:49.710231 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:48.310467 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:50.807495 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:48.007305 1157708 cri.go:89] found id: ""
	I0318 13:52:48.007339 1157708 logs.go:276] 0 containers: []
	W0318 13:52:48.007349 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:48.007355 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:48.007416 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:48.050230 1157708 cri.go:89] found id: ""
	I0318 13:52:48.050264 1157708 logs.go:276] 0 containers: []
	W0318 13:52:48.050276 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:48.050289 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:48.050304 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:48.106946 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:48.106993 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:48.123805 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:48.123837 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:48.201881 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:48.201907 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:48.201920 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:48.281533 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:48.281577 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:50.829561 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:50.847462 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:50.847555 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:50.889731 1157708 cri.go:89] found id: ""
	I0318 13:52:50.889759 1157708 logs.go:276] 0 containers: []
	W0318 13:52:50.889768 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:50.889774 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:50.889831 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:50.928176 1157708 cri.go:89] found id: ""
	I0318 13:52:50.928210 1157708 logs.go:276] 0 containers: []
	W0318 13:52:50.928222 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:50.928231 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:50.928294 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:50.965737 1157708 cri.go:89] found id: ""
	I0318 13:52:50.965772 1157708 logs.go:276] 0 containers: []
	W0318 13:52:50.965786 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:50.965794 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:50.965866 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:51.008038 1157708 cri.go:89] found id: ""
	I0318 13:52:51.008072 1157708 logs.go:276] 0 containers: []
	W0318 13:52:51.008081 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:51.008087 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:51.008159 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:51.050310 1157708 cri.go:89] found id: ""
	I0318 13:52:51.050340 1157708 logs.go:276] 0 containers: []
	W0318 13:52:51.050355 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:51.050363 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:51.050431 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:51.090514 1157708 cri.go:89] found id: ""
	I0318 13:52:51.090541 1157708 logs.go:276] 0 containers: []
	W0318 13:52:51.090550 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:51.090556 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:51.090608 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:51.131278 1157708 cri.go:89] found id: ""
	I0318 13:52:51.131305 1157708 logs.go:276] 0 containers: []
	W0318 13:52:51.131313 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:51.131320 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:51.131381 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:51.173370 1157708 cri.go:89] found id: ""
	I0318 13:52:51.173400 1157708 logs.go:276] 0 containers: []
	W0318 13:52:51.173411 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:51.173437 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:51.173464 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:51.260155 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:51.260204 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:51.309963 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:51.309998 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:51.367838 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:51.367889 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:51.382542 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:51.382570 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:51.459258 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:50.650524 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:52.651804 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:52.208655 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:54.209701 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:52.808292 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:55.309417 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:53.960212 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:53.978939 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:53.979004 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:54.030003 1157708 cri.go:89] found id: ""
	I0318 13:52:54.030038 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.030052 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:54.030060 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:54.030134 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:54.073487 1157708 cri.go:89] found id: ""
	I0318 13:52:54.073523 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.073535 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:54.073543 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:54.073611 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:54.115982 1157708 cri.go:89] found id: ""
	I0318 13:52:54.116010 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.116022 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:54.116029 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:54.116099 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:54.158320 1157708 cri.go:89] found id: ""
	I0318 13:52:54.158348 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.158359 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:54.158366 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:54.158433 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:54.198911 1157708 cri.go:89] found id: ""
	I0318 13:52:54.198939 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.198948 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:54.198955 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:54.199010 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:54.240628 1157708 cri.go:89] found id: ""
	I0318 13:52:54.240659 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.240671 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:54.240679 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:54.240750 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:54.279377 1157708 cri.go:89] found id: ""
	I0318 13:52:54.279409 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.279418 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:54.279424 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:54.279493 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:54.324160 1157708 cri.go:89] found id: ""
	I0318 13:52:54.324192 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.324205 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:54.324218 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:54.324237 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:54.371487 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:54.371527 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:54.423487 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:54.423526 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:54.438773 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:54.438800 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:54.518788 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:54.518810 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:54.518825 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:57.103590 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:57.118866 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:57.118932 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:57.159354 1157708 cri.go:89] found id: ""
	I0318 13:52:57.159383 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.159393 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:57.159399 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:57.159458 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:57.201114 1157708 cri.go:89] found id: ""
	I0318 13:52:57.201148 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.201159 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:57.201167 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:57.201233 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:57.242172 1157708 cri.go:89] found id: ""
	I0318 13:52:57.242207 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.242217 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:57.242224 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:57.242287 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:57.282578 1157708 cri.go:89] found id: ""
	I0318 13:52:57.282617 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.282629 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:57.282637 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:57.282706 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:57.323682 1157708 cri.go:89] found id: ""
	I0318 13:52:57.323707 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.323715 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:57.323721 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:57.323771 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:57.364946 1157708 cri.go:89] found id: ""
	I0318 13:52:57.364980 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.364991 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:57.365003 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:57.365076 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:57.407466 1157708 cri.go:89] found id: ""
	I0318 13:52:57.407495 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.407505 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:57.407511 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:57.407568 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:57.454663 1157708 cri.go:89] found id: ""
	I0318 13:52:57.454692 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.454701 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:57.454710 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:57.454722 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:57.509591 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:57.509633 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:57.525125 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:57.525155 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:57.602819 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:57.602845 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:57.602863 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:57.689001 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:57.689045 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:55.150589 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:57.152149 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:56.708493 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:59.208099 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:57.311780 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:59.312048 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:00.234252 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:00.249526 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:00.249615 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:00.290131 1157708 cri.go:89] found id: ""
	I0318 13:53:00.290160 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.290171 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:00.290178 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:00.290230 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:00.337794 1157708 cri.go:89] found id: ""
	I0318 13:53:00.337828 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.337840 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:00.337848 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:00.337907 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:00.378188 1157708 cri.go:89] found id: ""
	I0318 13:53:00.378224 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.378236 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:00.378244 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:00.378313 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:00.418940 1157708 cri.go:89] found id: ""
	I0318 13:53:00.418972 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.418981 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:00.418987 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:00.419039 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:00.461471 1157708 cri.go:89] found id: ""
	I0318 13:53:00.461502 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.461511 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:00.461518 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:00.461572 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:00.498781 1157708 cri.go:89] found id: ""
	I0318 13:53:00.498812 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.498821 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:00.498827 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:00.498885 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:00.540359 1157708 cri.go:89] found id: ""
	I0318 13:53:00.540395 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.540407 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:00.540414 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:00.540480 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:00.583597 1157708 cri.go:89] found id: ""
	I0318 13:53:00.583628 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.583636 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:00.583648 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:00.583666 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:00.639498 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:00.639534 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:00.655764 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:00.655792 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:00.742351 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:00.742386 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:00.742400 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:00.825250 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:00.825298 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:59.651495 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:01.651843 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:01.709438 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:04.208439 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:01.810519 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:04.308525 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:03.373938 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:03.389723 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:03.389796 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:03.429675 1157708 cri.go:89] found id: ""
	I0318 13:53:03.429710 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.429723 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:03.429732 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:03.429803 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:03.468732 1157708 cri.go:89] found id: ""
	I0318 13:53:03.468768 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.468780 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:03.468788 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:03.468841 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:03.510562 1157708 cri.go:89] found id: ""
	I0318 13:53:03.510589 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.510598 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:03.510604 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:03.510667 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:03.549842 1157708 cri.go:89] found id: ""
	I0318 13:53:03.549896 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.549909 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:03.549918 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:03.549984 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:03.590036 1157708 cri.go:89] found id: ""
	I0318 13:53:03.590076 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.590086 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:03.590093 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:03.590146 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:03.635546 1157708 cri.go:89] found id: ""
	I0318 13:53:03.635573 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.635585 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:03.635593 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:03.635660 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:03.678634 1157708 cri.go:89] found id: ""
	I0318 13:53:03.678663 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.678671 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:03.678677 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:03.678735 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:03.719666 1157708 cri.go:89] found id: ""
	I0318 13:53:03.719698 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.719709 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:03.719721 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:03.719736 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:03.762353 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:03.762388 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:03.817484 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:03.817521 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:03.832820 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:03.832850 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:03.913094 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:03.913115 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:03.913130 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:06.502556 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:06.517682 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:06.517745 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:06.562167 1157708 cri.go:89] found id: ""
	I0318 13:53:06.562202 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.562215 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:06.562223 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:06.562294 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:06.601910 1157708 cri.go:89] found id: ""
	I0318 13:53:06.601945 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.601954 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:06.601962 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:06.602022 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:06.640652 1157708 cri.go:89] found id: ""
	I0318 13:53:06.640683 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.640694 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:06.640702 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:06.640778 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:06.686781 1157708 cri.go:89] found id: ""
	I0318 13:53:06.686809 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.686818 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:06.686824 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:06.686893 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:06.727080 1157708 cri.go:89] found id: ""
	I0318 13:53:06.727107 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.727115 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:06.727121 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:06.727173 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:06.764550 1157708 cri.go:89] found id: ""
	I0318 13:53:06.764575 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.764583 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:06.764589 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:06.764641 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:06.803978 1157708 cri.go:89] found id: ""
	I0318 13:53:06.804009 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.804019 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:06.804027 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:06.804091 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:06.843983 1157708 cri.go:89] found id: ""
	I0318 13:53:06.844016 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.844027 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:06.844040 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:06.844058 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:06.905389 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:06.905424 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:06.956888 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:06.956924 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:06.973551 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:06.973594 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:07.045945 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:07.045973 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:07.045991 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:04.150852 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:06.151454 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:08.656073 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:06.211223 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:08.707939 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:06.808218 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:09.309991 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:11.310190 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:09.635227 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:09.650166 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:09.650246 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:09.695126 1157708 cri.go:89] found id: ""
	I0318 13:53:09.695153 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.695162 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:09.695168 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:09.695221 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:09.740475 1157708 cri.go:89] found id: ""
	I0318 13:53:09.740507 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.740516 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:09.740522 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:09.740591 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:09.779078 1157708 cri.go:89] found id: ""
	I0318 13:53:09.779108 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.779119 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:09.779128 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:09.779186 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:09.821252 1157708 cri.go:89] found id: ""
	I0318 13:53:09.821285 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.821297 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:09.821306 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:09.821376 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:09.860500 1157708 cri.go:89] found id: ""
	I0318 13:53:09.860537 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.860550 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:09.860558 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:09.860622 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:09.903447 1157708 cri.go:89] found id: ""
	I0318 13:53:09.903475 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.903486 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:09.903494 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:09.903550 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:09.941620 1157708 cri.go:89] found id: ""
	I0318 13:53:09.941648 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.941661 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:09.941679 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:09.941731 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:09.980066 1157708 cri.go:89] found id: ""
	I0318 13:53:09.980101 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.980113 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:09.980125 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:09.980142 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:10.036960 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:10.037000 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:10.051329 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:10.051361 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:10.130896 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:10.130925 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:10.130942 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:10.212205 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:10.212236 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:12.754623 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:12.769956 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:12.770034 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:12.809006 1157708 cri.go:89] found id: ""
	I0318 13:53:12.809032 1157708 logs.go:276] 0 containers: []
	W0318 13:53:12.809043 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:12.809051 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:12.809113 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:12.852354 1157708 cri.go:89] found id: ""
	I0318 13:53:12.852390 1157708 logs.go:276] 0 containers: []
	W0318 13:53:12.852400 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:12.852407 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:12.852476 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:12.891891 1157708 cri.go:89] found id: ""
	I0318 13:53:12.891923 1157708 logs.go:276] 0 containers: []
	W0318 13:53:12.891933 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:12.891940 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:12.891991 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:12.931753 1157708 cri.go:89] found id: ""
	I0318 13:53:12.931785 1157708 logs.go:276] 0 containers: []
	W0318 13:53:12.931795 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:12.931803 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:12.931872 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:12.971622 1157708 cri.go:89] found id: ""
	I0318 13:53:12.971653 1157708 logs.go:276] 0 containers: []
	W0318 13:53:12.971662 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:12.971669 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:12.971731 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:11.151234 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:13.157081 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:10.708177 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:13.209203 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:13.315183 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:15.808738 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:13.009893 1157708 cri.go:89] found id: ""
	I0318 13:53:13.009930 1157708 logs.go:276] 0 containers: []
	W0318 13:53:13.009943 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:13.009952 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:13.010021 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:13.045361 1157708 cri.go:89] found id: ""
	I0318 13:53:13.045396 1157708 logs.go:276] 0 containers: []
	W0318 13:53:13.045404 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:13.045411 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:13.045474 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:13.087659 1157708 cri.go:89] found id: ""
	I0318 13:53:13.087686 1157708 logs.go:276] 0 containers: []
	W0318 13:53:13.087696 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:13.087706 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:13.087721 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:13.129979 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:13.130014 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:13.183802 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:13.183836 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:13.198808 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:13.198840 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:13.272736 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:13.272764 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:13.272783 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:15.870196 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:15.887480 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:15.887551 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:15.923871 1157708 cri.go:89] found id: ""
	I0318 13:53:15.923899 1157708 logs.go:276] 0 containers: []
	W0318 13:53:15.923907 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:15.923913 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:15.923976 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:15.963870 1157708 cri.go:89] found id: ""
	I0318 13:53:15.963906 1157708 logs.go:276] 0 containers: []
	W0318 13:53:15.963917 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:15.963925 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:15.963997 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:16.009781 1157708 cri.go:89] found id: ""
	I0318 13:53:16.009815 1157708 logs.go:276] 0 containers: []
	W0318 13:53:16.009828 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:16.009837 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:16.009905 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:16.047673 1157708 cri.go:89] found id: ""
	I0318 13:53:16.047708 1157708 logs.go:276] 0 containers: []
	W0318 13:53:16.047718 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:16.047727 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:16.047793 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:16.089419 1157708 cri.go:89] found id: ""
	I0318 13:53:16.089447 1157708 logs.go:276] 0 containers: []
	W0318 13:53:16.089455 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:16.089461 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:16.089511 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:16.133563 1157708 cri.go:89] found id: ""
	I0318 13:53:16.133594 1157708 logs.go:276] 0 containers: []
	W0318 13:53:16.133604 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:16.133611 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:16.133685 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:16.174369 1157708 cri.go:89] found id: ""
	I0318 13:53:16.174404 1157708 logs.go:276] 0 containers: []
	W0318 13:53:16.174415 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:16.174423 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:16.174491 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:16.219334 1157708 cri.go:89] found id: ""
	I0318 13:53:16.219360 1157708 logs.go:276] 0 containers: []
	W0318 13:53:16.219367 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:16.219376 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:16.219389 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:16.273468 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:16.273507 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:16.288584 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:16.288612 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:16.366575 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:16.366602 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:16.366620 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:16.451031 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:16.451071 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:15.650907 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:18.151434 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:15.708015 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:17.710036 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:18.311437 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:20.807854 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:18.997536 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:19.014995 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:19.015065 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:19.064686 1157708 cri.go:89] found id: ""
	I0318 13:53:19.064719 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.064731 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:19.064739 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:19.064793 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:19.110598 1157708 cri.go:89] found id: ""
	I0318 13:53:19.110629 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.110640 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:19.110648 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:19.110739 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:19.156628 1157708 cri.go:89] found id: ""
	I0318 13:53:19.156652 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.156660 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:19.156668 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:19.156730 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:19.205993 1157708 cri.go:89] found id: ""
	I0318 13:53:19.206029 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.206042 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:19.206049 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:19.206118 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:19.253902 1157708 cri.go:89] found id: ""
	I0318 13:53:19.253935 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.253952 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:19.253960 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:19.254036 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:19.296550 1157708 cri.go:89] found id: ""
	I0318 13:53:19.296583 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.296594 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:19.296602 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:19.296667 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:19.337316 1157708 cri.go:89] found id: ""
	I0318 13:53:19.337349 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.337360 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:19.337369 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:19.337446 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:19.381503 1157708 cri.go:89] found id: ""
	I0318 13:53:19.381546 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.381565 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:19.381579 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:19.381603 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:19.461665 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:19.461691 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:19.461707 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:19.548291 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:19.548348 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:19.591296 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:19.591335 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:19.648740 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:19.648776 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:22.164970 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:22.180740 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:22.180806 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:22.223787 1157708 cri.go:89] found id: ""
	I0318 13:53:22.223820 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.223833 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:22.223840 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:22.223908 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:22.266751 1157708 cri.go:89] found id: ""
	I0318 13:53:22.266785 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.266797 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:22.266805 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:22.266876 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:22.311669 1157708 cri.go:89] found id: ""
	I0318 13:53:22.311701 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.311712 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:22.311721 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:22.311816 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:22.354687 1157708 cri.go:89] found id: ""
	I0318 13:53:22.354722 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.354733 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:22.354742 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:22.354807 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:22.395741 1157708 cri.go:89] found id: ""
	I0318 13:53:22.395767 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.395776 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:22.395782 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:22.395832 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:22.434506 1157708 cri.go:89] found id: ""
	I0318 13:53:22.434539 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.434550 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:22.434559 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:22.434612 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:22.474583 1157708 cri.go:89] found id: ""
	I0318 13:53:22.474612 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.474621 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:22.474627 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:22.474690 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:22.521898 1157708 cri.go:89] found id: ""
	I0318 13:53:22.521943 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.521955 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:22.521968 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:22.521989 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:22.537679 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:22.537711 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:22.619575 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:22.619605 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:22.619621 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:22.704206 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:22.704265 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:22.753470 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:22.753502 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:20.650340 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:22.653036 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:20.213398 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:22.709150 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:22.808837 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:25.308831 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:25.311578 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:25.329917 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:25.329979 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:25.373784 1157708 cri.go:89] found id: ""
	I0318 13:53:25.373818 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.373826 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:25.373833 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:25.373901 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:25.422490 1157708 cri.go:89] found id: ""
	I0318 13:53:25.422516 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.422526 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:25.422532 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:25.422597 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:25.459523 1157708 cri.go:89] found id: ""
	I0318 13:53:25.459552 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.459560 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:25.459567 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:25.459627 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:25.495647 1157708 cri.go:89] found id: ""
	I0318 13:53:25.495683 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.495695 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:25.495702 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:25.495772 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:25.534582 1157708 cri.go:89] found id: ""
	I0318 13:53:25.534617 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.534626 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:25.534632 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:25.534704 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:25.577526 1157708 cri.go:89] found id: ""
	I0318 13:53:25.577558 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.577566 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:25.577573 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:25.577687 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:25.616403 1157708 cri.go:89] found id: ""
	I0318 13:53:25.616433 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.616445 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:25.616453 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:25.616527 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:25.660444 1157708 cri.go:89] found id: ""
	I0318 13:53:25.660474 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.660482 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:25.660492 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:25.660506 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:25.715595 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:25.715641 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:25.730358 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:25.730390 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:25.803153 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:25.803239 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:25.803261 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:25.885339 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:25.885388 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:25.150276 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:27.151389 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:25.214042 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:27.710185 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:27.807095 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:29.807177 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:28.433506 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:28.449402 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:28.449481 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:28.490972 1157708 cri.go:89] found id: ""
	I0318 13:53:28.491007 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.491019 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:28.491028 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:28.491094 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:28.531406 1157708 cri.go:89] found id: ""
	I0318 13:53:28.531439 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.531451 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:28.531460 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:28.531513 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:28.570299 1157708 cri.go:89] found id: ""
	I0318 13:53:28.570334 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.570345 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:28.570352 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:28.570408 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:28.607950 1157708 cri.go:89] found id: ""
	I0318 13:53:28.607979 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.607987 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:28.607994 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:28.608066 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:28.648710 1157708 cri.go:89] found id: ""
	I0318 13:53:28.648744 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.648755 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:28.648762 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:28.648830 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:28.691071 1157708 cri.go:89] found id: ""
	I0318 13:53:28.691102 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.691114 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:28.691122 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:28.691183 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:28.734399 1157708 cri.go:89] found id: ""
	I0318 13:53:28.734438 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.734452 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:28.734461 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:28.734548 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:28.774859 1157708 cri.go:89] found id: ""
	I0318 13:53:28.774891 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.774902 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:28.774912 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:28.774927 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:28.831420 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:28.831459 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:28.847970 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:28.848008 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:28.926007 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:28.926034 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:28.926051 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:29.007525 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:29.007577 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:31.555401 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:31.570964 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:31.571046 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:31.611400 1157708 cri.go:89] found id: ""
	I0318 13:53:31.611427 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.611438 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:31.611445 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:31.611510 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:31.654572 1157708 cri.go:89] found id: ""
	I0318 13:53:31.654602 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.654614 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:31.654622 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:31.654725 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:31.692649 1157708 cri.go:89] found id: ""
	I0318 13:53:31.692673 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.692681 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:31.692686 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:31.692748 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:31.732208 1157708 cri.go:89] found id: ""
	I0318 13:53:31.732233 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.732244 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:31.732253 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:31.732320 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:31.774132 1157708 cri.go:89] found id: ""
	I0318 13:53:31.774163 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.774172 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:31.774178 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:31.774234 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:31.813558 1157708 cri.go:89] found id: ""
	I0318 13:53:31.813582 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.813590 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:31.813597 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:31.813651 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:31.862024 1157708 cri.go:89] found id: ""
	I0318 13:53:31.862057 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.862070 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:31.862077 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:31.862146 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:31.903941 1157708 cri.go:89] found id: ""
	I0318 13:53:31.903972 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.903982 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:31.903992 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:31.904006 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:31.957327 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:31.957366 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:31.973337 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:31.973380 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:32.053702 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:32.053730 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:32.053744 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:32.134859 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:32.134911 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:29.649648 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:31.651426 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:33.651936 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:30.208512 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:32.709020 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:31.808276 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:33.811370 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:36.314374 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:34.683335 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:34.700383 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:34.700490 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:34.744387 1157708 cri.go:89] found id: ""
	I0318 13:53:34.744420 1157708 logs.go:276] 0 containers: []
	W0318 13:53:34.744432 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:34.744441 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:34.744509 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:34.788122 1157708 cri.go:89] found id: ""
	I0318 13:53:34.788150 1157708 logs.go:276] 0 containers: []
	W0318 13:53:34.788160 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:34.788166 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:34.788221 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:34.834760 1157708 cri.go:89] found id: ""
	I0318 13:53:34.834795 1157708 logs.go:276] 0 containers: []
	W0318 13:53:34.834808 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:34.834817 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:34.834894 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:34.882028 1157708 cri.go:89] found id: ""
	I0318 13:53:34.882062 1157708 logs.go:276] 0 containers: []
	W0318 13:53:34.882073 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:34.882081 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:34.882150 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:34.933339 1157708 cri.go:89] found id: ""
	I0318 13:53:34.933364 1157708 logs.go:276] 0 containers: []
	W0318 13:53:34.933374 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:34.933384 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:34.933451 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:34.972362 1157708 cri.go:89] found id: ""
	I0318 13:53:34.972395 1157708 logs.go:276] 0 containers: []
	W0318 13:53:34.972407 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:34.972416 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:34.972486 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:35.008949 1157708 cri.go:89] found id: ""
	I0318 13:53:35.008986 1157708 logs.go:276] 0 containers: []
	W0318 13:53:35.008999 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:35.009007 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:35.009080 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:35.054698 1157708 cri.go:89] found id: ""
	I0318 13:53:35.054733 1157708 logs.go:276] 0 containers: []
	W0318 13:53:35.054742 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:35.054756 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:35.054770 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:35.109391 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:35.109450 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:35.126785 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:35.126818 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:35.214303 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:35.214329 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:35.214342 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:35.298705 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:35.298750 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:37.843701 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:37.859330 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:37.859415 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:37.903428 1157708 cri.go:89] found id: ""
	I0318 13:53:37.903466 1157708 logs.go:276] 0 containers: []
	W0318 13:53:37.903479 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:37.903497 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:37.903560 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:37.943687 1157708 cri.go:89] found id: ""
	I0318 13:53:37.943716 1157708 logs.go:276] 0 containers: []
	W0318 13:53:37.943727 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:37.943735 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:37.943804 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:37.986201 1157708 cri.go:89] found id: ""
	I0318 13:53:37.986233 1157708 logs.go:276] 0 containers: []
	W0318 13:53:37.986244 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:37.986252 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:37.986322 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:36.151976 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:38.152281 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:35.209205 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:37.709122 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:38.806794 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:40.807552 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:38.026776 1157708 cri.go:89] found id: ""
	I0318 13:53:38.026813 1157708 logs.go:276] 0 containers: []
	W0318 13:53:38.026825 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:38.026832 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:38.026907 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:38.073057 1157708 cri.go:89] found id: ""
	I0318 13:53:38.073088 1157708 logs.go:276] 0 containers: []
	W0318 13:53:38.073098 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:38.073105 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:38.073172 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:38.110576 1157708 cri.go:89] found id: ""
	I0318 13:53:38.110611 1157708 logs.go:276] 0 containers: []
	W0318 13:53:38.110624 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:38.110632 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:38.110702 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:38.154293 1157708 cri.go:89] found id: ""
	I0318 13:53:38.154319 1157708 logs.go:276] 0 containers: []
	W0318 13:53:38.154327 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:38.154338 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:38.154414 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:38.195407 1157708 cri.go:89] found id: ""
	I0318 13:53:38.195434 1157708 logs.go:276] 0 containers: []
	W0318 13:53:38.195444 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:38.195454 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:38.195469 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:38.254159 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:38.254210 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:38.269143 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:38.269175 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:38.349819 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:38.349845 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:38.349864 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:38.435121 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:38.435164 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:40.982438 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:40.998483 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:40.998559 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:41.037470 1157708 cri.go:89] found id: ""
	I0318 13:53:41.037497 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.037506 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:41.037512 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:41.037583 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:41.078428 1157708 cri.go:89] found id: ""
	I0318 13:53:41.078463 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.078473 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:41.078482 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:41.078548 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:41.121342 1157708 cri.go:89] found id: ""
	I0318 13:53:41.121371 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.121382 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:41.121391 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:41.121482 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:41.164124 1157708 cri.go:89] found id: ""
	I0318 13:53:41.164149 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.164159 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:41.164167 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:41.164229 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:41.210294 1157708 cri.go:89] found id: ""
	I0318 13:53:41.210321 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.210329 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:41.210336 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:41.210407 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:41.253934 1157708 cri.go:89] found id: ""
	I0318 13:53:41.253957 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.253967 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:41.253973 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:41.254039 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:41.298817 1157708 cri.go:89] found id: ""
	I0318 13:53:41.298849 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.298861 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:41.298870 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:41.298936 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:41.344109 1157708 cri.go:89] found id: ""
	I0318 13:53:41.344137 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.344146 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:41.344156 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:41.344170 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:41.401026 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:41.401061 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:41.416197 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:41.416229 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:41.495349 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:41.495375 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:41.495393 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:41.578201 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:41.578253 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:40.651687 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:43.152619 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:40.208445 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:42.208613 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:44.210573 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:42.808665 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:45.309099 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:44.126601 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:44.140971 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:44.141048 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:44.184758 1157708 cri.go:89] found id: ""
	I0318 13:53:44.184786 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.184794 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:44.184801 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:44.184851 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:44.230793 1157708 cri.go:89] found id: ""
	I0318 13:53:44.230824 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.230836 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:44.230842 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:44.230916 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:44.269561 1157708 cri.go:89] found id: ""
	I0318 13:53:44.269594 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.269606 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:44.269614 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:44.269680 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:44.310847 1157708 cri.go:89] found id: ""
	I0318 13:53:44.310878 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.310889 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:44.310898 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:44.310970 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:44.350827 1157708 cri.go:89] found id: ""
	I0318 13:53:44.350860 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.350878 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:44.350887 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:44.350956 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:44.389693 1157708 cri.go:89] found id: ""
	I0318 13:53:44.389721 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.389730 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:44.389735 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:44.389804 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:44.429254 1157708 cri.go:89] found id: ""
	I0318 13:53:44.429280 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.429289 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:44.429303 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:44.429354 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:44.468484 1157708 cri.go:89] found id: ""
	I0318 13:53:44.468513 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.468525 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:44.468538 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:44.468555 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:44.525012 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:44.525058 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:44.541638 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:44.541668 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:44.621779 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:44.621801 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:44.621814 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:44.706797 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:44.706884 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:47.253569 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:47.268808 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:47.268888 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:47.313191 1157708 cri.go:89] found id: ""
	I0318 13:53:47.313220 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.313232 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:47.313240 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:47.313307 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:47.357567 1157708 cri.go:89] found id: ""
	I0318 13:53:47.357600 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.357611 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:47.357619 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:47.357688 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:47.392300 1157708 cri.go:89] found id: ""
	I0318 13:53:47.392341 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.392352 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:47.392366 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:47.392437 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:47.432800 1157708 cri.go:89] found id: ""
	I0318 13:53:47.432830 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.432842 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:47.432857 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:47.432921 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:47.469563 1157708 cri.go:89] found id: ""
	I0318 13:53:47.469591 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.469599 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:47.469605 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:47.469668 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:47.508770 1157708 cri.go:89] found id: ""
	I0318 13:53:47.508799 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.508810 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:47.508820 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:47.508880 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:47.549876 1157708 cri.go:89] found id: ""
	I0318 13:53:47.549909 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.549921 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:47.549930 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:47.549997 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:47.591385 1157708 cri.go:89] found id: ""
	I0318 13:53:47.591413 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.591421 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:47.591431 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:47.591446 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:47.646284 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:47.646313 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:47.662609 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:47.662639 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:47.737371 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:47.737398 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:47.737415 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:47.817311 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:47.817342 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:45.652845 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:48.150199 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:46.707734 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:48.709977 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:47.807238 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:50.308767 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:50.363832 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:50.380029 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:50.380109 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:50.427452 1157708 cri.go:89] found id: ""
	I0318 13:53:50.427484 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.427496 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:50.427505 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:50.427579 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:50.466766 1157708 cri.go:89] found id: ""
	I0318 13:53:50.466793 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.466801 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:50.466808 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:50.466894 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:50.506768 1157708 cri.go:89] found id: ""
	I0318 13:53:50.506799 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.506811 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:50.506819 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:50.506882 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:50.545554 1157708 cri.go:89] found id: ""
	I0318 13:53:50.545592 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.545605 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:50.545613 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:50.545685 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:50.583949 1157708 cri.go:89] found id: ""
	I0318 13:53:50.583984 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.583995 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:50.584004 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:50.584083 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:50.624730 1157708 cri.go:89] found id: ""
	I0318 13:53:50.624763 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.624774 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:50.624783 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:50.624853 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:50.664300 1157708 cri.go:89] found id: ""
	I0318 13:53:50.664346 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.664358 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:50.664366 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:50.664420 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:50.702760 1157708 cri.go:89] found id: ""
	I0318 13:53:50.702793 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.702805 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:50.702817 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:50.702833 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:50.757188 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:50.757237 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:50.772151 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:50.772195 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:50.856872 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:50.856898 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:50.856917 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:50.937706 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:50.937749 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:50.654814 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:53.151970 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:50.710233 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:53.209443 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:52.309529 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:54.809399 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:53.481836 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:53.497792 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:53.497856 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:53.535376 1157708 cri.go:89] found id: ""
	I0318 13:53:53.535411 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.535420 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:53.535427 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:53.535486 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:53.575002 1157708 cri.go:89] found id: ""
	I0318 13:53:53.575030 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.575042 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:53.575050 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:53.575119 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:53.615880 1157708 cri.go:89] found id: ""
	I0318 13:53:53.615919 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.615931 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:53.615940 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:53.616007 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:53.681746 1157708 cri.go:89] found id: ""
	I0318 13:53:53.681786 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.681799 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:53.681810 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:53.681887 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:53.725219 1157708 cri.go:89] found id: ""
	I0318 13:53:53.725241 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.725250 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:53.725256 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:53.725317 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:53.766969 1157708 cri.go:89] found id: ""
	I0318 13:53:53.767006 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.767018 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:53.767026 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:53.767091 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:53.802103 1157708 cri.go:89] found id: ""
	I0318 13:53:53.802134 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.802145 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:53.802157 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:53.802210 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:53.843054 1157708 cri.go:89] found id: ""
	I0318 13:53:53.843085 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.843093 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:53.843103 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:53.843117 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:53.899794 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:53.899836 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:53.915559 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:53.915592 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:53.996410 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:53.996438 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:53.996456 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:54.085588 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:54.085628 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:56.632201 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:56.648183 1157708 kubeadm.go:591] duration metric: took 4m3.550073086s to restartPrimaryControlPlane
	W0318 13:53:56.648381 1157708 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 13:53:56.648422 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 13:53:55.152626 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:57.650951 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:55.209511 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:57.709324 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:59.710029 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:59.666187 1157708 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.017736279s)
	I0318 13:53:59.666270 1157708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:53:59.682887 1157708 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:53:59.694626 1157708 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:53:59.706577 1157708 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:53:59.706599 1157708 kubeadm.go:156] found existing configuration files:
	
	I0318 13:53:59.706648 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:53:59.718311 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:53:59.718371 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:53:59.729298 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:53:59.741351 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:53:59.741401 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:53:59.753652 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:53:59.765642 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:53:59.765695 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:53:59.778055 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:53:59.789994 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:53:59.790042 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:53:59.801292 1157708 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 13:53:59.879414 1157708 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 13:53:59.879516 1157708 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 13:54:00.046477 1157708 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 13:54:00.046660 1157708 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 13:54:00.046819 1157708 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 13:54:00.257070 1157708 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 13:54:00.259191 1157708 out.go:204]   - Generating certificates and keys ...
	I0318 13:54:00.259333 1157708 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 13:54:00.259434 1157708 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 13:54:00.259549 1157708 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 13:54:00.259658 1157708 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 13:54:00.259782 1157708 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 13:54:00.259857 1157708 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 13:54:00.259949 1157708 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 13:54:00.260033 1157708 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 13:54:00.260136 1157708 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 13:54:00.260244 1157708 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 13:54:00.260299 1157708 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 13:54:00.260394 1157708 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 13:54:00.423400 1157708 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 13:54:00.543983 1157708 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 13:54:00.796108 1157708 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 13:54:00.901121 1157708 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 13:54:00.918891 1157708 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 13:54:00.920502 1157708 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 13:54:00.920642 1157708 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 13:54:01.094176 1157708 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 13:53:57.306878 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:59.308670 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:01.096397 1157708 out.go:204]   - Booting up control plane ...
	I0318 13:54:01.096539 1157708 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 13:54:01.107816 1157708 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 13:54:01.108753 1157708 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 13:54:01.109641 1157708 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 13:54:01.111913 1157708 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 13:54:00.150985 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:02.151139 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:02.208577 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:04.209527 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:04.701940 1157416 pod_ready.go:81] duration metric: took 4m0.000915275s for pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace to be "Ready" ...
	E0318 13:54:04.701995 1157416 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace to be "Ready" (will not retry!)
	I0318 13:54:04.702022 1157416 pod_ready.go:38] duration metric: took 4m12.048388069s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:54:04.702063 1157416 kubeadm.go:591] duration metric: took 4m22.220919415s to restartPrimaryControlPlane
	W0318 13:54:04.702133 1157416 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 13:54:04.702168 1157416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 13:54:01.807445 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:04.308435 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:04.151252 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:06.152296 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:08.162574 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:06.809148 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:08.811335 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:11.306999 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:10.650696 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:12.651741 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:13.308835 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:15.807754 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:15.150875 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:17.653698 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:18.308137 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:20.308720 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:20.152545 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:22.650685 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:22.807655 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:24.807765 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:25.150664 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:27.650092 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:26.808311 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:29.311683 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:31.301320 1157887 pod_ready.go:81] duration metric: took 4m0.001048401s for pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace to be "Ready" ...
	E0318 13:54:31.301351 1157887 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace to be "Ready" (will not retry!)
	I0318 13:54:31.301372 1157887 pod_ready.go:38] duration metric: took 4m12.063560637s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:54:31.301397 1157887 kubeadm.go:591] duration metric: took 4m19.202321881s to restartPrimaryControlPlane
	W0318 13:54:31.301478 1157887 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 13:54:31.301505 1157887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 13:54:29.651334 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:32.152059 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:34.651230 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:37.151130 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:37.018723 1157416 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.31652367s)
	I0318 13:54:37.018822 1157416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:54:37.036348 1157416 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:54:37.047932 1157416 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:54:37.058846 1157416 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:54:37.058875 1157416 kubeadm.go:156] found existing configuration files:
	
	I0318 13:54:37.058920 1157416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:54:37.069333 1157416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:54:37.069396 1157416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:54:37.080053 1157416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:54:37.090110 1157416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:54:37.090170 1157416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:54:37.101032 1157416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:54:37.111052 1157416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:54:37.111124 1157416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:54:37.121867 1157416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:54:37.132057 1157416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:54:37.132104 1157416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:54:37.143057 1157416 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 13:54:37.368813 1157416 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 13:54:41.111826 1157708 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 13:54:41.111977 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:54:41.112236 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:54:39.151250 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:41.652026 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:43.652929 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:46.082340 1157416 kubeadm.go:309] [init] Using Kubernetes version: v1.29.0-rc.2
	I0318 13:54:46.082410 1157416 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 13:54:46.082482 1157416 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 13:54:46.082561 1157416 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 13:54:46.082639 1157416 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 13:54:46.082692 1157416 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 13:54:46.084374 1157416 out.go:204]   - Generating certificates and keys ...
	I0318 13:54:46.084495 1157416 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 13:54:46.084584 1157416 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 13:54:46.084681 1157416 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 13:54:46.084767 1157416 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 13:54:46.084844 1157416 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 13:54:46.084933 1157416 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 13:54:46.085039 1157416 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 13:54:46.085131 1157416 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 13:54:46.085255 1157416 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 13:54:46.085344 1157416 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 13:54:46.085415 1157416 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 13:54:46.085491 1157416 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 13:54:46.085569 1157416 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 13:54:46.085637 1157416 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0318 13:54:46.085704 1157416 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 13:54:46.085791 1157416 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 13:54:46.085894 1157416 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 13:54:46.086010 1157416 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 13:54:46.086104 1157416 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 13:54:46.087481 1157416 out.go:204]   - Booting up control plane ...
	I0318 13:54:46.087576 1157416 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 13:54:46.087642 1157416 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 13:54:46.087698 1157416 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 13:54:46.087782 1157416 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 13:54:46.087865 1157416 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 13:54:46.087917 1157416 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 13:54:46.088051 1157416 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 13:54:46.088146 1157416 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003020 seconds
	I0318 13:54:46.088306 1157416 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 13:54:46.088501 1157416 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 13:54:46.088585 1157416 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 13:54:46.088770 1157416 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-537236 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 13:54:46.088826 1157416 kubeadm.go:309] [bootstrap-token] Using token: fk6yfh.vd0dmh72kd97vm2h
	I0318 13:54:46.091265 1157416 out.go:204]   - Configuring RBAC rules ...
	I0318 13:54:46.091375 1157416 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 13:54:46.091449 1157416 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 13:54:46.091656 1157416 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 13:54:46.091839 1157416 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 13:54:46.092014 1157416 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 13:54:46.092136 1157416 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 13:54:46.092289 1157416 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 13:54:46.092370 1157416 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 13:54:46.092436 1157416 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 13:54:46.092445 1157416 kubeadm.go:309] 
	I0318 13:54:46.092513 1157416 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 13:54:46.092522 1157416 kubeadm.go:309] 
	I0318 13:54:46.092588 1157416 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 13:54:46.092594 1157416 kubeadm.go:309] 
	I0318 13:54:46.092614 1157416 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 13:54:46.092704 1157416 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 13:54:46.092749 1157416 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 13:54:46.092755 1157416 kubeadm.go:309] 
	I0318 13:54:46.092805 1157416 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 13:54:46.092818 1157416 kubeadm.go:309] 
	I0318 13:54:46.092892 1157416 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 13:54:46.092906 1157416 kubeadm.go:309] 
	I0318 13:54:46.092982 1157416 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 13:54:46.093100 1157416 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 13:54:46.093212 1157416 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 13:54:46.093225 1157416 kubeadm.go:309] 
	I0318 13:54:46.093335 1157416 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 13:54:46.093448 1157416 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 13:54:46.093457 1157416 kubeadm.go:309] 
	I0318 13:54:46.093539 1157416 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token fk6yfh.vd0dmh72kd97vm2h \
	I0318 13:54:46.093684 1157416 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a1344f0f3711f58fc1cd7b9626e9cee7b8515c38b8838e5f1a0d7979c7202ddf \
	I0318 13:54:46.093717 1157416 kubeadm.go:309] 	--control-plane 
	I0318 13:54:46.093723 1157416 kubeadm.go:309] 
	I0318 13:54:46.093848 1157416 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 13:54:46.093860 1157416 kubeadm.go:309] 
	I0318 13:54:46.093946 1157416 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token fk6yfh.vd0dmh72kd97vm2h \
	I0318 13:54:46.094071 1157416 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a1344f0f3711f58fc1cd7b9626e9cee7b8515c38b8838e5f1a0d7979c7202ddf 
	I0318 13:54:46.094105 1157416 cni.go:84] Creating CNI manager for ""
	I0318 13:54:46.094119 1157416 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:54:46.095717 1157416 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 13:54:46.112502 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:54:46.112797 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
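	[Editor's note] The [kubelet-check] failures above are kubeadm repeatedly probing the kubelet's local healthz endpoint on port 10248 and getting "connection refused" because the kubelet has not come up yet. As a rough illustration of that kind of probe loop (a minimal Go sketch, not kubeadm's actual kubelet-check code; the URL and retry interval here are assumptions):

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	// waitForKubelet polls the kubelet healthz endpoint until it returns 200 OK
	// or the timeout elapses, mirroring the repeated connection-refused errors
	// seen in the log while the kubelet is still down.
	func waitForKubelet(url string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := http.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // kubelet is up and answering healthz
				}
			}
			time.Sleep(5 * time.Second) // assumed retry interval
		}
		return fmt.Errorf("kubelet did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForKubelet("http://localhost:10248/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}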
	I0318 13:54:46.152713 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:48.651676 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:46.096953 1157416 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 13:54:46.127007 1157416 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 13:54:46.178588 1157416 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 13:54:46.178768 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:46.178785 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-537236 minikube.k8s.io/updated_at=2024_03_18T13_54_46_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a minikube.k8s.io/name=no-preload-537236 minikube.k8s.io/primary=true
	I0318 13:54:46.231974 1157416 ops.go:34] apiserver oom_adj: -16
	I0318 13:54:46.582048 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:47.082295 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:47.582447 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:48.082146 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:48.583155 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:49.082463 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:49.583104 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:51.153753 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:53.654740 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:50.082163 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:50.582159 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:51.082921 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:51.582616 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:52.082686 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:52.582520 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:53.082920 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:53.582281 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:54.082711 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:54.582110 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:56.112956 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:54:56.113210 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:54:55.082805 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:55.583034 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:56.082777 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:56.582491 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:57.082739 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:57.582854 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:58.082715 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:58.189802 1157416 kubeadm.go:1107] duration metric: took 12.011111335s to wait for elevateKubeSystemPrivileges
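	[Editor's note] The burst of "kubectl get sa default" runs at roughly half-second intervals above is minikube waiting for the "default" ServiceAccount to exist before treating the control plane as usable (the elevateKubeSystemPrivileges step whose ~12s duration is reported here). A minimal stand-in for that wait loop, sketched in Go with os/exec (the kubectl and kubeconfig paths are copied from the log; the timeout and polling cadence are assumptions):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForDefaultSA re-runs "kubectl get sa default" until it succeeds,
	// which signals that the bootstrap service account has been created.
	func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig", kubeconfig)
			if err := cmd.Run(); err == nil {
				return nil // default service account exists
			}
			time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence in the log
		}
		return fmt.Errorf("default service account not created within %s", timeout)
	}

	func main() {
		err := waitForDefaultSA("/var/lib/minikube/binaries/v1.29.0-rc.2/kubectl",
			"/var/lib/minikube/kubeconfig", 2*time.Minute)
		if err != nil {
			fmt.Println(err)
		}
	}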
	W0318 13:54:58.189865 1157416 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 13:54:58.189878 1157416 kubeadm.go:393] duration metric: took 5m15.77131157s to StartCluster
	I0318 13:54:58.189991 1157416 settings.go:142] acquiring lock: {Name:mk2d6b94ee5fa5f1dbbb15ba1d5560c3c0f78110 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:54:58.190130 1157416 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 13:54:58.191965 1157416 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/kubeconfig: {Name:mk9c139f2702214315ee08dd7c5d02f739047458 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:54:58.192315 1157416 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 13:54:58.194158 1157416 out.go:177] * Verifying Kubernetes components...
	I0318 13:54:58.192460 1157416 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 13:54:58.192549 1157416 config.go:182] Loaded profile config "no-preload-537236": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 13:54:58.194270 1157416 addons.go:69] Setting storage-provisioner=true in profile "no-preload-537236"
	I0318 13:54:58.195604 1157416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:54:58.195628 1157416 addons.go:234] Setting addon storage-provisioner=true in "no-preload-537236"
	W0318 13:54:58.195646 1157416 addons.go:243] addon storage-provisioner should already be in state true
	I0318 13:54:58.194275 1157416 addons.go:69] Setting default-storageclass=true in profile "no-preload-537236"
	I0318 13:54:58.195741 1157416 host.go:66] Checking if "no-preload-537236" exists ...
	I0318 13:54:58.195748 1157416 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-537236"
	I0318 13:54:58.194278 1157416 addons.go:69] Setting metrics-server=true in profile "no-preload-537236"
	I0318 13:54:58.195816 1157416 addons.go:234] Setting addon metrics-server=true in "no-preload-537236"
	W0318 13:54:58.195835 1157416 addons.go:243] addon metrics-server should already be in state true
	I0318 13:54:58.195864 1157416 host.go:66] Checking if "no-preload-537236" exists ...
	I0318 13:54:58.196133 1157416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:54:58.196177 1157416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:54:58.196187 1157416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:54:58.196224 1157416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:54:58.196236 1157416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:54:58.196256 1157416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:54:58.218212 1157416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36677
	I0318 13:54:58.218703 1157416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34827
	I0318 13:54:58.218934 1157416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35455
	I0318 13:54:58.219717 1157416 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:54:58.219858 1157416 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:54:58.220143 1157416 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:54:58.220417 1157416 main.go:141] libmachine: Using API Version  1
	I0318 13:54:58.220443 1157416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:54:58.220478 1157416 main.go:141] libmachine: Using API Version  1
	I0318 13:54:58.220497 1157416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:54:58.220628 1157416 main.go:141] libmachine: Using API Version  1
	I0318 13:54:58.220650 1157416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:54:58.220882 1157416 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:54:58.220950 1157416 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:54:58.220973 1157416 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:54:58.221491 1157416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:54:58.221527 1157416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:54:58.221736 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetState
	I0318 13:54:58.222116 1157416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:54:58.222138 1157416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:54:58.226247 1157416 addons.go:234] Setting addon default-storageclass=true in "no-preload-537236"
	W0318 13:54:58.226271 1157416 addons.go:243] addon default-storageclass should already be in state true
	I0318 13:54:58.226303 1157416 host.go:66] Checking if "no-preload-537236" exists ...
	I0318 13:54:58.226691 1157416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:54:58.226719 1157416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:54:58.238772 1157416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40275
	I0318 13:54:58.239288 1157416 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:54:58.239925 1157416 main.go:141] libmachine: Using API Version  1
	I0318 13:54:58.239954 1157416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:54:58.240375 1157416 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:54:58.240581 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetState
	I0318 13:54:58.241297 1157416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44327
	I0318 13:54:58.241774 1157416 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:54:58.242300 1157416 main.go:141] libmachine: Using API Version  1
	I0318 13:54:58.242321 1157416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:54:58.242787 1157416 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:54:58.243001 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetState
	I0318 13:54:58.243033 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:54:58.245371 1157416 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 13:54:58.245038 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:54:58.246964 1157416 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 13:54:58.246981 1157416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 13:54:58.246429 1157416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34901
	I0318 13:54:58.247010 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:54:58.248738 1157416 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:54:54.143902 1157263 pod_ready.go:81] duration metric: took 4m0.000627482s for pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace to be "Ready" ...
	E0318 13:54:54.143947 1157263 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace to be "Ready" (will not retry!)
	I0318 13:54:54.143967 1157263 pod_ready.go:38] duration metric: took 4m9.565422592s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:54:54.143994 1157263 kubeadm.go:591] duration metric: took 4m17.754456341s to restartPrimaryControlPlane
	W0318 13:54:54.144061 1157263 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 13:54:54.144092 1157263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 13:54:58.247424 1157416 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:54:58.250418 1157416 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 13:54:58.250441 1157416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 13:54:58.250459 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:54:58.250666 1157416 main.go:141] libmachine: Using API Version  1
	I0318 13:54:58.250683 1157416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:54:58.250733 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:54:58.251012 1157416 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:54:58.251354 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:54:58.251384 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:54:58.251730 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:54:58.252053 1157416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:54:58.252082 1157416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:54:58.252627 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:54:58.252823 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:54:58.252974 1157416 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa Username:docker}
	I0318 13:54:58.253647 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:54:58.254073 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:54:58.254102 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:54:58.254393 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:54:58.254599 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:54:58.254720 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:54:58.254858 1157416 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa Username:docker}
	I0318 13:54:58.275785 1157416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35695
	I0318 13:54:58.276467 1157416 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:54:58.277007 1157416 main.go:141] libmachine: Using API Version  1
	I0318 13:54:58.277037 1157416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:54:58.277396 1157416 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:54:58.277594 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetState
	I0318 13:54:58.279419 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:54:58.279699 1157416 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 13:54:58.279719 1157416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 13:54:58.279740 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:54:58.282813 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:54:58.283168 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:54:58.283198 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:54:58.283319 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:54:58.283505 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:54:58.283643 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:54:58.283826 1157416 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa Username:docker}
	I0318 13:54:58.433881 1157416 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:54:58.466338 1157416 node_ready.go:35] waiting up to 6m0s for node "no-preload-537236" to be "Ready" ...
	I0318 13:54:58.485186 1157416 node_ready.go:49] node "no-preload-537236" has status "Ready":"True"
	I0318 13:54:58.485217 1157416 node_ready.go:38] duration metric: took 18.833477ms for node "no-preload-537236" to be "Ready" ...
	I0318 13:54:58.485230 1157416 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:54:58.527030 1157416 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:54:58.545133 1157416 pod_ready.go:92] pod "etcd-no-preload-537236" in "kube-system" namespace has status "Ready":"True"
	I0318 13:54:58.545175 1157416 pod_ready.go:81] duration metric: took 18.11215ms for pod "etcd-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:54:58.545191 1157416 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:54:58.560108 1157416 pod_ready.go:92] pod "kube-apiserver-no-preload-537236" in "kube-system" namespace has status "Ready":"True"
	I0318 13:54:58.560144 1157416 pod_ready.go:81] duration metric: took 14.943161ms for pod "kube-apiserver-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:54:58.560159 1157416 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:54:58.562894 1157416 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 13:54:58.562924 1157416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 13:54:58.572477 1157416 pod_ready.go:92] pod "kube-controller-manager-no-preload-537236" in "kube-system" namespace has status "Ready":"True"
	I0318 13:54:58.572510 1157416 pod_ready.go:81] duration metric: took 12.342242ms for pod "kube-controller-manager-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:54:58.572523 1157416 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6c4c5" in "kube-system" namespace to be "Ready" ...
	I0318 13:54:58.594618 1157416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 13:54:58.597140 1157416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 13:54:58.644132 1157416 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 13:54:58.644166 1157416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 13:54:58.734467 1157416 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 13:54:58.734499 1157416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 13:54:58.760623 1157416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 13:54:59.005259 1157416 main.go:141] libmachine: Making call to close driver server
	I0318 13:54:59.005305 1157416 main.go:141] libmachine: (no-preload-537236) Calling .Close
	I0318 13:54:59.005668 1157416 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:54:59.005692 1157416 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:54:59.005704 1157416 main.go:141] libmachine: Making call to close driver server
	I0318 13:54:59.005713 1157416 main.go:141] libmachine: (no-preload-537236) Calling .Close
	I0318 13:54:59.005981 1157416 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:54:59.005996 1157416 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:54:59.006028 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Closing plugin on server side
	I0318 13:54:59.020654 1157416 main.go:141] libmachine: Making call to close driver server
	I0318 13:54:59.020682 1157416 main.go:141] libmachine: (no-preload-537236) Calling .Close
	I0318 13:54:59.022812 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Closing plugin on server side
	I0318 13:54:59.022814 1157416 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:54:59.022850 1157416 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:54:59.979647 1157416 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.382455448s)
	I0318 13:54:59.979723 1157416 main.go:141] libmachine: Making call to close driver server
	I0318 13:54:59.979743 1157416 main.go:141] libmachine: (no-preload-537236) Calling .Close
	I0318 13:54:59.980124 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Closing plugin on server side
	I0318 13:54:59.980223 1157416 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:54:59.980258 1157416 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:54:59.980281 1157416 main.go:141] libmachine: Making call to close driver server
	I0318 13:54:59.980354 1157416 main.go:141] libmachine: (no-preload-537236) Calling .Close
	I0318 13:54:59.980675 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Closing plugin on server side
	I0318 13:54:59.980756 1157416 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:54:59.982424 1157416 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:00.270401 1157416 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.509719085s)
	I0318 13:55:00.270464 1157416 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:00.270481 1157416 main.go:141] libmachine: (no-preload-537236) Calling .Close
	I0318 13:55:00.272779 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Closing plugin on server side
	I0318 13:55:00.272794 1157416 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:00.272817 1157416 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:00.272828 1157416 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:00.272837 1157416 main.go:141] libmachine: (no-preload-537236) Calling .Close
	I0318 13:55:00.274705 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Closing plugin on server side
	I0318 13:55:00.274734 1157416 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:00.274759 1157416 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:00.274789 1157416 addons.go:470] Verifying addon metrics-server=true in "no-preload-537236"
	I0318 13:55:00.276931 1157416 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0318 13:55:00.278586 1157416 addons.go:505] duration metric: took 2.086117916s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0318 13:55:00.607578 1157416 pod_ready.go:92] pod "kube-proxy-6c4c5" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:00.607607 1157416 pod_ready.go:81] duration metric: took 2.035076209s for pod "kube-proxy-6c4c5" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:00.607620 1157416 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:00.626505 1157416 pod_ready.go:92] pod "kube-scheduler-no-preload-537236" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:00.626531 1157416 pod_ready.go:81] duration metric: took 18.904572ms for pod "kube-scheduler-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:00.626540 1157416 pod_ready.go:38] duration metric: took 2.141296876s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:55:00.626556 1157416 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:55:00.626612 1157416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:55:00.677379 1157416 api_server.go:72] duration metric: took 2.484994048s to wait for apiserver process to appear ...
	I0318 13:55:00.677406 1157416 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:55:00.677426 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:55:00.694161 1157416 api_server.go:279] https://192.168.39.7:8443/healthz returned 200:
	ok
	I0318 13:55:00.696445 1157416 api_server.go:141] control plane version: v1.29.0-rc.2
	I0318 13:55:00.696479 1157416 api_server.go:131] duration metric: took 19.065082ms to wait for apiserver health ...
	I0318 13:55:00.696492 1157416 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 13:55:00.707383 1157416 system_pods.go:59] 9 kube-system pods found
	I0318 13:55:00.707417 1157416 system_pods.go:61] "coredns-76f75df574-bhh4k" [6d6f9b9a-2f7e-46bc-9224-57dc077e444d] Running
	I0318 13:55:00.707421 1157416 system_pods.go:61] "coredns-76f75df574-grqdt" [f4ce5620-c97b-4ecd-baba-c5fc840b8127] Running
	I0318 13:55:00.707425 1157416 system_pods.go:61] "etcd-no-preload-537236" [ed8a1ea0-0ec7-4604-b9c9-3738a4569e02] Running
	I0318 13:55:00.707429 1157416 system_pods.go:61] "kube-apiserver-no-preload-537236" [5718ec63-58e7-463b-812b-a806e9fbbdd8] Running
	I0318 13:55:00.707432 1157416 system_pods.go:61] "kube-controller-manager-no-preload-537236" [4ff64d2e-9e89-44d6-9e8f-fa1440fc416a] Running
	I0318 13:55:00.707435 1157416 system_pods.go:61] "kube-proxy-6c4c5" [2dd6fcfc-7510-418d-baab-a0ec364391c1] Running
	I0318 13:55:00.707438 1157416 system_pods.go:61] "kube-scheduler-no-preload-537236" [b8c3f8b7-fc27-4647-880a-f82457de3a27] Running
	I0318 13:55:00.707445 1157416 system_pods.go:61] "metrics-server-57f55c9bc5-tkq6h" [14e262de-fd94-4888-96ab-75823109c8c2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:55:00.707450 1157416 system_pods.go:61] "storage-provisioner" [f02049f6-a08f-45ac-b285-cbdbb260ab59] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 13:55:00.707459 1157416 system_pods.go:74] duration metric: took 10.96036ms to wait for pod list to return data ...
	I0318 13:55:00.707467 1157416 default_sa.go:34] waiting for default service account to be created ...
	I0318 13:55:00.870267 1157416 default_sa.go:45] found service account: "default"
	I0318 13:55:00.870299 1157416 default_sa.go:55] duration metric: took 162.825175ms for default service account to be created ...
	I0318 13:55:00.870310 1157416 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 13:55:01.073950 1157416 system_pods.go:86] 9 kube-system pods found
	I0318 13:55:01.073985 1157416 system_pods.go:89] "coredns-76f75df574-bhh4k" [6d6f9b9a-2f7e-46bc-9224-57dc077e444d] Running
	I0318 13:55:01.073992 1157416 system_pods.go:89] "coredns-76f75df574-grqdt" [f4ce5620-c97b-4ecd-baba-c5fc840b8127] Running
	I0318 13:55:01.073998 1157416 system_pods.go:89] "etcd-no-preload-537236" [ed8a1ea0-0ec7-4604-b9c9-3738a4569e02] Running
	I0318 13:55:01.074004 1157416 system_pods.go:89] "kube-apiserver-no-preload-537236" [5718ec63-58e7-463b-812b-a806e9fbbdd8] Running
	I0318 13:55:01.074010 1157416 system_pods.go:89] "kube-controller-manager-no-preload-537236" [4ff64d2e-9e89-44d6-9e8f-fa1440fc416a] Running
	I0318 13:55:01.074017 1157416 system_pods.go:89] "kube-proxy-6c4c5" [2dd6fcfc-7510-418d-baab-a0ec364391c1] Running
	I0318 13:55:01.074035 1157416 system_pods.go:89] "kube-scheduler-no-preload-537236" [b8c3f8b7-fc27-4647-880a-f82457de3a27] Running
	I0318 13:55:01.074055 1157416 system_pods.go:89] "metrics-server-57f55c9bc5-tkq6h" [14e262de-fd94-4888-96ab-75823109c8c2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:55:01.074069 1157416 system_pods.go:89] "storage-provisioner" [f02049f6-a08f-45ac-b285-cbdbb260ab59] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 13:55:01.074085 1157416 system_pods.go:126] duration metric: took 203.766894ms to wait for k8s-apps to be running ...
	I0318 13:55:01.074100 1157416 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 13:55:01.074152 1157416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:55:01.091165 1157416 system_svc.go:56] duration metric: took 17.056217ms WaitForService to wait for kubelet
	I0318 13:55:01.091195 1157416 kubeadm.go:576] duration metric: took 2.898817514s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:55:01.091224 1157416 node_conditions.go:102] verifying NodePressure condition ...
	I0318 13:55:01.270664 1157416 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:55:01.270724 1157416 node_conditions.go:123] node cpu capacity is 2
	I0318 13:55:01.270737 1157416 node_conditions.go:105] duration metric: took 179.506857ms to run NodePressure ...
	I0318 13:55:01.270750 1157416 start.go:240] waiting for startup goroutines ...
	I0318 13:55:01.270758 1157416 start.go:245] waiting for cluster config update ...
	I0318 13:55:01.270769 1157416 start.go:254] writing updated cluster config ...
	I0318 13:55:01.271069 1157416 ssh_runner.go:195] Run: rm -f paused
	I0318 13:55:01.325353 1157416 start.go:600] kubectl: 1.29.3, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0318 13:55:01.327367 1157416 out.go:177] * Done! kubectl is now configured to use "no-preload-537236" cluster and "default" namespace by default
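	[Editor's note] The node_ready/pod_ready waits logged above amount to polling the node and each system-critical pod until its Ready condition reports True. A minimal client-go sketch of such a readiness poll (illustrative only, not minikube's pod_ready.go; the kubeconfig path is a placeholder and the pod name is taken from the log):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podIsReady reports whether the pod's Ready condition is True.
	func podIsReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Poll a named kube-system pod until Ready or a timeout elapses,
		// roughly what the pod_ready.go lines in this log are reporting.
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-no-preload-537236", metav1.GetOptions{})
			if err == nil && podIsReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to be Ready")
	}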
	I0318 13:55:03.715412 1157887 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.413874479s)
	I0318 13:55:03.715519 1157887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:55:03.732767 1157887 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:55:03.743375 1157887 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:55:03.753393 1157887 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:55:03.753414 1157887 kubeadm.go:156] found existing configuration files:
	
	I0318 13:55:03.753457 1157887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0318 13:55:03.763226 1157887 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:55:03.763289 1157887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:55:03.774001 1157887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0318 13:55:03.783943 1157887 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:55:03.783991 1157887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:55:03.794580 1157887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0318 13:55:03.803881 1157887 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:55:03.803921 1157887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:55:03.813709 1157887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0318 13:55:03.823096 1157887 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:55:03.823138 1157887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:55:03.832790 1157887 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 13:55:03.891459 1157887 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 13:55:03.891672 1157887 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 13:55:04.056923 1157887 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 13:55:04.057055 1157887 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 13:55:04.057197 1157887 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 13:55:04.312932 1157887 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 13:55:04.314955 1157887 out.go:204]   - Generating certificates and keys ...
	I0318 13:55:04.315063 1157887 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 13:55:04.315156 1157887 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 13:55:04.315286 1157887 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 13:55:04.315388 1157887 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 13:55:04.315490 1157887 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 13:55:04.315568 1157887 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 13:55:04.315668 1157887 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 13:55:04.315743 1157887 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 13:55:04.315844 1157887 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 13:55:04.315969 1157887 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 13:55:04.316034 1157887 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 13:55:04.316108 1157887 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 13:55:04.643155 1157887 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 13:55:04.927731 1157887 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 13:55:05.058875 1157887 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 13:55:05.221520 1157887 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 13:55:05.221985 1157887 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 13:55:05.224297 1157887 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 13:55:05.226200 1157887 out.go:204]   - Booting up control plane ...
	I0318 13:55:05.226326 1157887 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 13:55:05.226425 1157887 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 13:55:05.226520 1157887 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 13:55:05.244878 1157887 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 13:55:05.245461 1157887 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 13:55:05.245531 1157887 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 13:55:05.388215 1157887 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 13:55:11.393083 1157887 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.004356 seconds
	I0318 13:55:11.393511 1157887 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 13:55:11.412586 1157887 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 13:55:11.939563 1157887 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 13:55:11.939844 1157887 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-569210 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 13:55:12.457349 1157887 kubeadm.go:309] [bootstrap-token] Using token: z44dyw.tsw47dmn862zavdi
	I0318 13:55:12.458855 1157887 out.go:204]   - Configuring RBAC rules ...
	I0318 13:55:12.459037 1157887 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 13:55:12.466850 1157887 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 13:55:12.482822 1157887 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 13:55:12.488920 1157887 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 13:55:12.496947 1157887 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 13:55:12.507954 1157887 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 13:55:12.535337 1157887 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 13:55:12.763814 1157887 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 13:55:12.877248 1157887 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 13:55:12.878047 1157887 kubeadm.go:309] 
	I0318 13:55:12.878159 1157887 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 13:55:12.878183 1157887 kubeadm.go:309] 
	I0318 13:55:12.878291 1157887 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 13:55:12.878301 1157887 kubeadm.go:309] 
	I0318 13:55:12.878334 1157887 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 13:55:12.878432 1157887 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 13:55:12.878519 1157887 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 13:55:12.878531 1157887 kubeadm.go:309] 
	I0318 13:55:12.878603 1157887 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 13:55:12.878615 1157887 kubeadm.go:309] 
	I0318 13:55:12.878690 1157887 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 13:55:12.878703 1157887 kubeadm.go:309] 
	I0318 13:55:12.878762 1157887 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 13:55:12.878858 1157887 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 13:55:12.878974 1157887 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 13:55:12.878985 1157887 kubeadm.go:309] 
	I0318 13:55:12.879087 1157887 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 13:55:12.879164 1157887 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 13:55:12.879171 1157887 kubeadm.go:309] 
	I0318 13:55:12.879275 1157887 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token z44dyw.tsw47dmn862zavdi \
	I0318 13:55:12.879410 1157887 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a1344f0f3711f58fc1cd7b9626e9cee7b8515c38b8838e5f1a0d7979c7202ddf \
	I0318 13:55:12.879464 1157887 kubeadm.go:309] 	--control-plane 
	I0318 13:55:12.879484 1157887 kubeadm.go:309] 
	I0318 13:55:12.879576 1157887 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 13:55:12.879586 1157887 kubeadm.go:309] 
	I0318 13:55:12.879719 1157887 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token z44dyw.tsw47dmn862zavdi \
	I0318 13:55:12.879871 1157887 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a1344f0f3711f58fc1cd7b9626e9cee7b8515c38b8838e5f1a0d7979c7202ddf 
	I0318 13:55:12.883383 1157887 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 13:55:12.883432 1157887 cni.go:84] Creating CNI manager for ""
	I0318 13:55:12.883447 1157887 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:55:12.885248 1157887 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 13:55:12.886708 1157887 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 13:55:12.929444 1157887 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 13:55:13.043416 1157887 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 13:55:13.043541 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:13.043567 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-569210 minikube.k8s.io/updated_at=2024_03_18T13_55_13_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a minikube.k8s.io/name=default-k8s-diff-port-569210 minikube.k8s.io/primary=true
	I0318 13:55:13.064927 1157887 ops.go:34] apiserver oom_adj: -16
	I0318 13:55:13.286093 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:13.786780 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:14.286728 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:14.786442 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:15.287103 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:15.786443 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:16.287138 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:16.113672 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:55:16.113963 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:55:16.787069 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:17.286490 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:17.786317 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:18.286840 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:18.786872 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:19.286911 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:19.786554 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:20.286216 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:20.786282 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:21.286590 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:21.787103 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:22.286966 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:22.786928 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:23.286275 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:23.786464 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:24.286791 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:24.787028 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:24.938400 1157887 kubeadm.go:1107] duration metric: took 11.894943444s to wait for elevateKubeSystemPrivileges
	W0318 13:55:24.938440 1157887 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 13:55:24.938448 1157887 kubeadm.go:393] duration metric: took 5m12.933246555s to StartCluster
	I0318 13:55:24.938470 1157887 settings.go:142] acquiring lock: {Name:mk2d6b94ee5fa5f1dbbb15ba1d5560c3c0f78110 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:55:24.938621 1157887 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 13:55:24.940984 1157887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/kubeconfig: {Name:mk9c139f2702214315ee08dd7c5d02f739047458 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:55:24.941286 1157887 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.3 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 13:55:24.943151 1157887 out.go:177] * Verifying Kubernetes components...
	I0318 13:55:24.941329 1157887 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 13:55:24.941469 1157887 config.go:182] Loaded profile config "default-k8s-diff-port-569210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:55:24.944770 1157887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:55:24.944780 1157887 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-569210"
	I0318 13:55:24.944830 1157887 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-569210"
	W0318 13:55:24.944845 1157887 addons.go:243] addon storage-provisioner should already be in state true
	I0318 13:55:24.944846 1157887 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-569210"
	I0318 13:55:24.944851 1157887 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-569210"
	I0318 13:55:24.944880 1157887 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-569210"
	I0318 13:55:24.944888 1157887 host.go:66] Checking if "default-k8s-diff-port-569210" exists ...
	W0318 13:55:24.944897 1157887 addons.go:243] addon metrics-server should already be in state true
	I0318 13:55:24.944927 1157887 host.go:66] Checking if "default-k8s-diff-port-569210" exists ...
	I0318 13:55:24.944881 1157887 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-569210"
	I0318 13:55:24.945311 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:24.945350 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:24.945375 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:24.945400 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:24.945311 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:24.945460 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:24.963173 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42139
	I0318 13:55:24.963820 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:24.964695 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:55:24.964725 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:24.965120 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:24.965696 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:24.965735 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:24.965976 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43645
	I0318 13:55:24.966207 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43495
	I0318 13:55:24.966502 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:24.966598 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:24.967058 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:55:24.967062 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:55:24.967083 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:24.967100 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:24.967467 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:24.967603 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:24.967671 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetState
	I0318 13:55:24.968107 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:24.968146 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:24.971673 1157887 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-569210"
	W0318 13:55:24.971696 1157887 addons.go:243] addon default-storageclass should already be in state true
	I0318 13:55:24.971729 1157887 host.go:66] Checking if "default-k8s-diff-port-569210" exists ...
	I0318 13:55:24.972091 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:24.972129 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:24.986041 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42211
	I0318 13:55:24.986481 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:24.986989 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:55:24.987009 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:24.987352 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:24.987605 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44555
	I0318 13:55:24.987613 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetState
	I0318 13:55:24.988061 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:24.988481 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:55:24.988499 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:24.988904 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:24.989082 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetState
	I0318 13:55:24.989785 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:55:24.992033 1157887 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 13:55:24.990673 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:55:24.991225 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36687
	I0318 13:55:24.993532 1157887 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 13:55:24.993557 1157887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 13:55:24.993587 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:55:24.995449 1157887 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:55:24.994077 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:24.996749 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:55:24.997153 1157887 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 13:55:24.997171 1157887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 13:55:24.997191 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:55:24.997431 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:55:24.997463 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:55:24.997466 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:55:24.997665 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:55:24.997684 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:24.997746 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:55:24.998183 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:24.998273 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:55:24.998497 1157887 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa Username:docker}
	I0318 13:55:24.998701 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:24.998735 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:24.999951 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:55:25.000431 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:55:25.000454 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:55:25.000676 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:55:25.000865 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:55:25.001021 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:55:25.001160 1157887 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa Username:docker}
	I0318 13:55:25.016442 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32783
	I0318 13:55:25.016827 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:25.017300 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:55:25.017328 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:25.017686 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:25.017906 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetState
	I0318 13:55:25.019440 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:55:25.019694 1157887 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 13:55:25.019711 1157887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 13:55:25.019731 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:55:25.022079 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:55:25.022370 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:55:25.022398 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:55:25.022497 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:55:25.022645 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:55:25.022762 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:55:25.022937 1157887 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa Username:docker}
	I0318 13:55:25.188474 1157887 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:55:25.208092 1157887 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-569210" to be "Ready" ...
	I0318 13:55:25.218757 1157887 node_ready.go:49] node "default-k8s-diff-port-569210" has status "Ready":"True"
	I0318 13:55:25.218789 1157887 node_ready.go:38] duration metric: took 10.658955ms for node "default-k8s-diff-port-569210" to be "Ready" ...
	I0318 13:55:25.218829 1157887 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:55:25.224381 1157887 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:25.235938 1157887 pod_ready.go:92] pod "etcd-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:25.235962 1157887 pod_ready.go:81] duration metric: took 11.550686ms for pod "etcd-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:25.235971 1157887 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:25.242985 1157887 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:25.243014 1157887 pod_ready.go:81] duration metric: took 7.034818ms for pod "kube-apiserver-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:25.243027 1157887 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:25.255777 1157887 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:25.255801 1157887 pod_ready.go:81] duration metric: took 12.766918ms for pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:25.255811 1157887 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2pp8z" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:25.301824 1157887 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 13:55:25.301846 1157887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 13:55:25.330301 1157887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 13:55:25.348473 1157887 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 13:55:25.348500 1157887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 13:55:25.365746 1157887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 13:55:25.398074 1157887 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 13:55:25.398099 1157887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 13:55:25.423951 1157887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 13:55:27.292115 1157887 pod_ready.go:92] pod "kube-proxy-2pp8z" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:27.292202 1157887 pod_ready.go:81] duration metric: took 2.036383518s for pod "kube-proxy-2pp8z" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:27.292227 1157887 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:27.299705 1157887 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:27.299732 1157887 pod_ready.go:81] duration metric: took 7.486631ms for pod "kube-scheduler-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:27.299743 1157887 pod_ready.go:38] duration metric: took 2.08090143s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:55:27.299762 1157887 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:55:27.299824 1157887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:55:27.706241 1157887 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.375885124s)
	I0318 13:55:27.706314 1157887 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:27.706326 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .Close
	I0318 13:55:27.706330 1157887 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.340547601s)
	I0318 13:55:27.706377 1157887 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:27.706392 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .Close
	I0318 13:55:27.706630 1157887 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.282631636s)
	I0318 13:55:27.706900 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | Closing plugin on server side
	I0318 13:55:27.706828 1157887 api_server.go:72] duration metric: took 2.765497711s to wait for apiserver process to appear ...
	I0318 13:55:27.706940 1157887 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:55:27.706879 1157887 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:27.706979 1157887 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:27.706996 1157887 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:27.707024 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .Close
	I0318 13:55:27.706916 1157887 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:27.707088 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .Close
	I0318 13:55:27.706985 1157887 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0318 13:55:27.707343 1157887 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:27.707366 1157887 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:27.707372 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | Closing plugin on server side
	I0318 13:55:27.707405 1157887 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:27.707417 1157887 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:27.707426 1157887 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:27.707455 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .Close
	I0318 13:55:27.707682 1157887 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:27.707696 1157887 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:27.707706 1157887 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-569210"
	I0318 13:55:27.708614 1157887 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:27.708664 1157887 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:27.708694 1157887 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:27.708783 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .Close
	I0318 13:55:27.709092 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | Closing plugin on server side
	I0318 13:55:27.709151 1157887 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:27.709175 1157887 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:27.718110 1157887 api_server.go:279] https://192.168.61.3:8444/healthz returned 200:
	ok
	I0318 13:55:27.719497 1157887 api_server.go:141] control plane version: v1.28.4
	I0318 13:55:27.719518 1157887 api_server.go:131] duration metric: took 12.563372ms to wait for apiserver health ...
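	The healthz probe above targets the apiserver on the non-default port 8444 used by this profile; the same check can typically be reproduced from the host, since /healthz is usually readable by unauthenticated clients via the default system:public-info-viewer binding. A sketch using the IP and port from this run:

	    curl -sk https://192.168.61.3:8444/healthz   # expect: ok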
	I0318 13:55:27.719526 1157887 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 13:55:27.739882 1157887 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:27.739914 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .Close
	I0318 13:55:27.740263 1157887 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:27.740296 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | Closing plugin on server side
	I0318 13:55:27.740318 1157887 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:27.742102 1157887 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0318 13:55:27.368024 1157263 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (33.223901258s)
	I0318 13:55:27.368118 1157263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:55:27.388474 1157263 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:55:27.402749 1157263 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:55:27.417121 1157263 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:55:27.417184 1157263 kubeadm.go:156] found existing configuration files:
	
	I0318 13:55:27.417235 1157263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:55:27.429920 1157263 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:55:27.429997 1157263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:55:27.442468 1157263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:55:27.454842 1157263 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:55:27.454913 1157263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:55:27.467911 1157263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:55:27.480201 1157263 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:55:27.480272 1157263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:55:27.496430 1157263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:55:27.512020 1157263 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:55:27.512092 1157263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:55:27.528102 1157263 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 13:55:27.601072 1157263 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 13:55:27.601235 1157263 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 13:55:27.796445 1157263 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 13:55:27.796574 1157263 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 13:55:27.796730 1157263 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 13:55:28.079026 1157263 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 13:55:27.743429 1157887 addons.go:505] duration metric: took 2.802098895s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I0318 13:55:27.744694 1157887 system_pods.go:59] 9 kube-system pods found
	I0318 13:55:27.744727 1157887 system_pods.go:61] "coredns-5dd5756b68-j5qxm" [164d2cc3-0891-4fcd-81bd-34d7cf0c691c] Running
	I0318 13:55:27.744733 1157887 system_pods.go:61] "coredns-5dd5756b68-xdcht" [bf264558-6c11-44c9-82d6-ea23aea43dc9] Running
	I0318 13:55:27.744738 1157887 system_pods.go:61] "etcd-default-k8s-diff-port-569210" [8d51c0c6-6005-4f76-917c-20f07b73742f] Running
	I0318 13:55:27.744744 1157887 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-569210" [31a8160d-14db-4383-b833-a8bc3f5990ba] Running
	I0318 13:55:27.744750 1157887 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-569210" [173e4d84-8dc2-47fc-9c4d-ed613d180813] Running
	I0318 13:55:27.744756 1157887 system_pods.go:61] "kube-proxy-2pp8z" [912b3f56-3df6-485f-a01a-60801b867b86] Running
	I0318 13:55:27.744764 1157887 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-569210" [1ee4e8f8-3fad-45a8-be35-25a879aaaa7b] Running
	I0318 13:55:27.744777 1157887 system_pods.go:61] "metrics-server-57f55c9bc5-ng9ww" [4c8209dc-b6ba-427d-ba32-0da4993b0902] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:55:27.744783 1157887 system_pods.go:61] "storage-provisioner" [f0dfdeb1-f567-41df-98c3-7987f0fd7b2b] Pending
	I0318 13:55:27.744797 1157887 system_pods.go:74] duration metric: took 25.264322ms to wait for pod list to return data ...
	I0318 13:55:27.744810 1157887 default_sa.go:34] waiting for default service account to be created ...
	I0318 13:55:27.755398 1157887 default_sa.go:45] found service account: "default"
	I0318 13:55:27.755427 1157887 default_sa.go:55] duration metric: took 10.607153ms for default service account to be created ...
	I0318 13:55:27.755439 1157887 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 13:55:27.815477 1157887 system_pods.go:86] 9 kube-system pods found
	I0318 13:55:27.815507 1157887 system_pods.go:89] "coredns-5dd5756b68-j5qxm" [164d2cc3-0891-4fcd-81bd-34d7cf0c691c] Running
	I0318 13:55:27.815512 1157887 system_pods.go:89] "coredns-5dd5756b68-xdcht" [bf264558-6c11-44c9-82d6-ea23aea43dc9] Running
	I0318 13:55:27.815517 1157887 system_pods.go:89] "etcd-default-k8s-diff-port-569210" [8d51c0c6-6005-4f76-917c-20f07b73742f] Running
	I0318 13:55:27.815521 1157887 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-569210" [31a8160d-14db-4383-b833-a8bc3f5990ba] Running
	I0318 13:55:27.815526 1157887 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-569210" [173e4d84-8dc2-47fc-9c4d-ed613d180813] Running
	I0318 13:55:27.815529 1157887 system_pods.go:89] "kube-proxy-2pp8z" [912b3f56-3df6-485f-a01a-60801b867b86] Running
	I0318 13:55:27.815533 1157887 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-569210" [1ee4e8f8-3fad-45a8-be35-25a879aaaa7b] Running
	I0318 13:55:27.815540 1157887 system_pods.go:89] "metrics-server-57f55c9bc5-ng9ww" [4c8209dc-b6ba-427d-ba32-0da4993b0902] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:55:27.815546 1157887 system_pods.go:89] "storage-provisioner" [f0dfdeb1-f567-41df-98c3-7987f0fd7b2b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 13:55:27.815557 1157887 system_pods.go:126] duration metric: took 60.111832ms to wait for k8s-apps to be running ...
	I0318 13:55:27.815566 1157887 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 13:55:27.815610 1157887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:55:27.834266 1157887 system_svc.go:56] duration metric: took 18.687554ms WaitForService to wait for kubelet
	I0318 13:55:27.834304 1157887 kubeadm.go:576] duration metric: took 2.892974502s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:55:27.834345 1157887 node_conditions.go:102] verifying NodePressure condition ...
	I0318 13:55:28.013031 1157887 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:55:28.013095 1157887 node_conditions.go:123] node cpu capacity is 2
	I0318 13:55:28.013148 1157887 node_conditions.go:105] duration metric: took 178.79502ms to run NodePressure ...
	I0318 13:55:28.013169 1157887 start.go:240] waiting for startup goroutines ...
	I0318 13:55:28.013181 1157887 start.go:245] waiting for cluster config update ...
	I0318 13:55:28.013199 1157887 start.go:254] writing updated cluster config ...
	I0318 13:55:28.013519 1157887 ssh_runner.go:195] Run: rm -f paused
	I0318 13:55:28.092810 1157887 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 13:55:28.095783 1157887 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-569210" cluster and "default" namespace by default
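	The closing message notes a one-minor-version skew between the client (kubectl 1.29.3) and the cluster (1.28.4), which is within kubectl's supported +/-1 skew. A quick way to confirm both versions and the active context after such a run (standard kubectl commands, not minikube-specific):

	    kubectl config current-context    # default-k8s-diff-port-569210
	    kubectl version -o yaml           # clientVersion / serverVersion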
	I0318 13:55:28.080939 1157263 out.go:204]   - Generating certificates and keys ...
	I0318 13:55:28.081056 1157263 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 13:55:28.081145 1157263 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 13:55:28.081249 1157263 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 13:55:28.082078 1157263 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 13:55:28.082860 1157263 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 13:55:28.083397 1157263 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 13:55:28.084597 1157263 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 13:55:28.084941 1157263 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 13:55:28.085603 1157263 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 13:55:28.086461 1157263 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 13:55:28.087265 1157263 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 13:55:28.087343 1157263 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 13:55:28.348996 1157263 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 13:55:28.516513 1157263 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 13:55:28.585513 1157263 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 13:55:28.817150 1157263 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 13:55:28.817900 1157263 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 13:55:28.820280 1157263 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 13:55:28.822114 1157263 out.go:204]   - Booting up control plane ...
	I0318 13:55:28.822217 1157263 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 13:55:28.822811 1157263 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 13:55:28.825310 1157263 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 13:55:28.845906 1157263 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 13:55:28.847013 1157263 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 13:55:28.847069 1157263 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 13:55:28.992421 1157263 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 13:55:35.495384 1157263 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.502688 seconds
	I0318 13:55:35.495578 1157263 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 13:55:35.517088 1157263 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 13:55:36.049915 1157263 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 13:55:36.050163 1157263 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-173036 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 13:55:36.571450 1157263 kubeadm.go:309] [bootstrap-token] Using token: a1fi6l.v36l7wrnalucsepl
	I0318 13:55:36.573263 1157263 out.go:204]   - Configuring RBAC rules ...
	I0318 13:55:36.573448 1157263 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 13:55:36.581322 1157263 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 13:55:36.594853 1157263 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 13:55:36.598538 1157263 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 13:55:36.602430 1157263 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 13:55:36.605534 1157263 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 13:55:36.621332 1157263 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 13:55:36.865518 1157263 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 13:55:36.990015 1157263 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 13:55:36.991079 1157263 kubeadm.go:309] 
	I0318 13:55:36.991168 1157263 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 13:55:36.991181 1157263 kubeadm.go:309] 
	I0318 13:55:36.991288 1157263 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 13:55:36.991299 1157263 kubeadm.go:309] 
	I0318 13:55:36.991320 1157263 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 13:55:36.991395 1157263 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 13:55:36.991475 1157263 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 13:55:36.991494 1157263 kubeadm.go:309] 
	I0318 13:55:36.991572 1157263 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 13:55:36.991581 1157263 kubeadm.go:309] 
	I0318 13:55:36.991646 1157263 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 13:55:36.991658 1157263 kubeadm.go:309] 
	I0318 13:55:36.991737 1157263 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 13:55:36.991839 1157263 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 13:55:36.991954 1157263 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 13:55:36.991966 1157263 kubeadm.go:309] 
	I0318 13:55:36.992073 1157263 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 13:55:36.992174 1157263 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 13:55:36.992186 1157263 kubeadm.go:309] 
	I0318 13:55:36.992304 1157263 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token a1fi6l.v36l7wrnalucsepl \
	I0318 13:55:36.992477 1157263 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a1344f0f3711f58fc1cd7b9626e9cee7b8515c38b8838e5f1a0d7979c7202ddf \
	I0318 13:55:36.992522 1157263 kubeadm.go:309] 	--control-plane 
	I0318 13:55:36.992532 1157263 kubeadm.go:309] 
	I0318 13:55:36.992642 1157263 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 13:55:36.992656 1157263 kubeadm.go:309] 
	I0318 13:55:36.992769 1157263 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token a1fi6l.v36l7wrnalucsepl \
	I0318 13:55:36.992922 1157263 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a1344f0f3711f58fc1cd7b9626e9cee7b8515c38b8838e5f1a0d7979c7202ddf 
	I0318 13:55:36.994542 1157263 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 13:55:36.994648 1157263 cni.go:84] Creating CNI manager for ""
	I0318 13:55:36.994660 1157263 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:55:36.996526 1157263 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 13:55:36.997929 1157263 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 13:55:37.047757 1157263 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 13:55:37.075078 1157263 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 13:55:37.075167 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:37.075199 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-173036 minikube.k8s.io/updated_at=2024_03_18T13_55_37_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a minikube.k8s.io/name=embed-certs-173036 minikube.k8s.io/primary=true
	I0318 13:55:37.236857 1157263 ops.go:34] apiserver oom_adj: -16
	I0318 13:55:37.422453 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:37.922622 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:38.423527 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:38.922743 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:39.422721 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:39.923438 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:40.422599 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:40.923170 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:41.422812 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:41.922526 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:42.422594 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:42.922835 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:43.423479 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:43.923114 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:44.422672 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:44.922883 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:45.422863 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:45.922770 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:46.423473 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:46.923125 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:47.423378 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:47.923366 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:48.422566 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:48.923231 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:49.422505 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:49.554542 1157263 kubeadm.go:1107] duration metric: took 12.479441091s to wait for elevateKubeSystemPrivileges
	W0318 13:55:49.554590 1157263 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 13:55:49.554602 1157263 kubeadm.go:393] duration metric: took 5m13.226983757s to StartCluster
	I0318 13:55:49.554626 1157263 settings.go:142] acquiring lock: {Name:mk2d6b94ee5fa5f1dbbb15ba1d5560c3c0f78110 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:55:49.554778 1157263 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 13:55:49.556962 1157263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/kubeconfig: {Name:mk9c139f2702214315ee08dd7c5d02f739047458 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:55:49.557273 1157263 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.191 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 13:55:49.558774 1157263 out.go:177] * Verifying Kubernetes components...
	I0318 13:55:49.557321 1157263 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 13:55:49.557488 1157263 config.go:182] Loaded profile config "embed-certs-173036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:55:49.560195 1157263 addons.go:69] Setting default-storageclass=true in profile "embed-certs-173036"
	I0318 13:55:49.560201 1157263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:55:49.560211 1157263 addons.go:69] Setting metrics-server=true in profile "embed-certs-173036"
	I0318 13:55:49.560237 1157263 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-173036"
	I0318 13:55:49.560247 1157263 addons.go:234] Setting addon metrics-server=true in "embed-certs-173036"
	W0318 13:55:49.560254 1157263 addons.go:243] addon metrics-server should already be in state true
	I0318 13:55:49.560201 1157263 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-173036"
	I0318 13:55:49.560282 1157263 host.go:66] Checking if "embed-certs-173036" exists ...
	I0318 13:55:49.560302 1157263 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-173036"
	W0318 13:55:49.560317 1157263 addons.go:243] addon storage-provisioner should already be in state true
	I0318 13:55:49.560388 1157263 host.go:66] Checking if "embed-certs-173036" exists ...
	I0318 13:55:49.560644 1157263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:49.560676 1157263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:49.560678 1157263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:49.560716 1157263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:49.560777 1157263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:49.560803 1157263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:49.577682 1157263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32889
	I0318 13:55:49.577714 1157263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38841
	I0318 13:55:49.578101 1157263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46617
	I0318 13:55:49.578261 1157263 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:49.578285 1157263 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:49.578493 1157263 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:49.578880 1157263 main.go:141] libmachine: Using API Version  1
	I0318 13:55:49.578907 1157263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:49.578882 1157263 main.go:141] libmachine: Using API Version  1
	I0318 13:55:49.578923 1157263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:49.579013 1157263 main.go:141] libmachine: Using API Version  1
	I0318 13:55:49.579036 1157263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:49.579302 1157263 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:49.579333 1157263 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:49.579538 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetState
	I0318 13:55:49.579598 1157263 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:49.579914 1157263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:49.579955 1157263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:49.580203 1157263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:49.580238 1157263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:49.583587 1157263 addons.go:234] Setting addon default-storageclass=true in "embed-certs-173036"
	W0318 13:55:49.583610 1157263 addons.go:243] addon default-storageclass should already be in state true
	I0318 13:55:49.583641 1157263 host.go:66] Checking if "embed-certs-173036" exists ...
	I0318 13:55:49.584009 1157263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:49.584040 1157263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:49.596862 1157263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46015
	I0318 13:55:49.597356 1157263 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:49.597859 1157263 main.go:141] libmachine: Using API Version  1
	I0318 13:55:49.598026 1157263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:49.598110 1157263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38169
	I0318 13:55:49.598635 1157263 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:49.599310 1157263 main.go:141] libmachine: Using API Version  1
	I0318 13:55:49.599331 1157263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:49.599405 1157263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36747
	I0318 13:55:49.599732 1157263 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:49.599874 1157263 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:49.600120 1157263 main.go:141] libmachine: Using API Version  1
	I0318 13:55:49.600135 1157263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:49.600197 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetState
	I0318 13:55:49.600439 1157263 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:49.601019 1157263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:49.601052 1157263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:49.602172 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:55:49.604115 1157263 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:55:49.606034 1157263 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 13:55:49.606049 1157263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 13:55:49.606065 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:55:49.603277 1157263 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:49.606323 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetState
	I0318 13:55:49.608600 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:55:49.610213 1157263 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 13:55:49.611511 1157263 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 13:55:49.611531 1157263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 13:55:49.611545 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:55:49.609758 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:55:49.611598 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:55:49.611613 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:55:49.610550 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:55:49.611727 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:55:49.611868 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:55:49.611991 1157263 sshutil.go:53] new ssh client: &{IP:192.168.50.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa Username:docker}
	I0318 13:55:49.614689 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:55:49.615105 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:55:49.615322 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:55:49.615403 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:55:49.615531 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:55:49.615672 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:55:49.615773 1157263 sshutil.go:53] new ssh client: &{IP:192.168.50.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa Username:docker}
	I0318 13:55:49.620257 1157263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41729
	I0318 13:55:49.620653 1157263 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:49.621225 1157263 main.go:141] libmachine: Using API Version  1
	I0318 13:55:49.621243 1157263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:49.621610 1157263 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:49.621790 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetState
	I0318 13:55:49.623303 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:55:49.623566 1157263 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 13:55:49.623580 1157263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 13:55:49.623594 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:55:49.626325 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:55:49.626733 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:55:49.626755 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:55:49.627028 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:55:49.627196 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:55:49.627335 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:55:49.627441 1157263 sshutil.go:53] new ssh client: &{IP:192.168.50.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa Username:docker}
	I0318 13:55:49.791524 1157263 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:55:49.847829 1157263 node_ready.go:35] waiting up to 6m0s for node "embed-certs-173036" to be "Ready" ...
	I0318 13:55:49.860595 1157263 node_ready.go:49] node "embed-certs-173036" has status "Ready":"True"
	I0318 13:55:49.860621 1157263 node_ready.go:38] duration metric: took 12.757412ms for node "embed-certs-173036" to be "Ready" ...
	I0318 13:55:49.860631 1157263 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:55:49.870524 1157263 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ft594" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:49.917170 1157263 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 13:55:49.917197 1157263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 13:55:49.965845 1157263 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 13:55:49.965871 1157263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 13:55:49.969600 1157263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 13:55:49.982887 1157263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 13:55:50.023768 1157263 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 13:55:50.023795 1157263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 13:55:50.139120 1157263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 13:55:51.877589 1157263 pod_ready.go:92] pod "coredns-5dd5756b68-ft594" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:51.877618 1157263 pod_ready.go:81] duration metric: took 2.007066644s for pod "coredns-5dd5756b68-ft594" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:51.877634 1157263 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-p6dw8" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.007908 1157263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.02498147s)
	I0318 13:55:52.007966 1157263 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:52.007979 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .Close
	I0318 13:55:52.008318 1157263 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:52.008378 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | Closing plugin on server side
	I0318 13:55:52.008383 1157263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:52.008408 1157263 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:52.008427 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .Close
	I0318 13:55:52.008713 1157263 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:52.008827 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | Closing plugin on server side
	I0318 13:55:52.008853 1157263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:52.009491 1157263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.039858476s)
	I0318 13:55:52.009567 1157263 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:52.009595 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .Close
	I0318 13:55:52.010239 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | Closing plugin on server side
	I0318 13:55:52.010242 1157263 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:52.010276 1157263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:52.010289 1157263 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:52.010301 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .Close
	I0318 13:55:52.010553 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | Closing plugin on server side
	I0318 13:55:52.010568 1157263 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:52.010578 1157263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:52.026035 1157263 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:52.026056 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .Close
	I0318 13:55:52.026364 1157263 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:52.026385 1157263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:52.202596 1157263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.063427726s)
	I0318 13:55:52.202663 1157263 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:52.202686 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .Close
	I0318 13:55:52.202999 1157263 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:52.203021 1157263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:52.203032 1157263 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:52.203040 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .Close
	I0318 13:55:52.203321 1157263 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:52.203338 1157263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:52.203352 1157263 addons.go:470] Verifying addon metrics-server=true in "embed-certs-173036"
	I0318 13:55:52.205372 1157263 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0318 13:55:52.207184 1157263 addons.go:505] duration metric: took 2.649872416s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0318 13:55:52.391839 1157263 pod_ready.go:92] pod "coredns-5dd5756b68-p6dw8" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:52.391878 1157263 pod_ready.go:81] duration metric: took 514.235543ms for pod "coredns-5dd5756b68-p6dw8" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.391891 1157263 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.398044 1157263 pod_ready.go:92] pod "etcd-embed-certs-173036" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:52.398075 1157263 pod_ready.go:81] duration metric: took 6.176672ms for pod "etcd-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.398091 1157263 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.403790 1157263 pod_ready.go:92] pod "kube-apiserver-embed-certs-173036" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:52.403809 1157263 pod_ready.go:81] duration metric: took 5.70927ms for pod "kube-apiserver-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.403817 1157263 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.414956 1157263 pod_ready.go:92] pod "kube-controller-manager-embed-certs-173036" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:52.414976 1157263 pod_ready.go:81] duration metric: took 11.153442ms for pod "kube-controller-manager-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.414986 1157263 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lp9mc" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.674125 1157263 pod_ready.go:92] pod "kube-proxy-lp9mc" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:52.674151 1157263 pod_ready.go:81] duration metric: took 259.158776ms for pod "kube-proxy-lp9mc" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.674160 1157263 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:53.075385 1157263 pod_ready.go:92] pod "kube-scheduler-embed-certs-173036" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:53.075420 1157263 pod_ready.go:81] duration metric: took 401.251175ms for pod "kube-scheduler-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:53.075432 1157263 pod_ready.go:38] duration metric: took 3.214790175s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:55:53.075452 1157263 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:55:53.075523 1157263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:55:53.092916 1157263 api_server.go:72] duration metric: took 3.53560403s to wait for apiserver process to appear ...
	I0318 13:55:53.092948 1157263 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:55:53.093027 1157263 api_server.go:253] Checking apiserver healthz at https://192.168.50.191:8443/healthz ...
	I0318 13:55:53.098715 1157263 api_server.go:279] https://192.168.50.191:8443/healthz returned 200:
	ok
	I0318 13:55:53.100073 1157263 api_server.go:141] control plane version: v1.28.4
	I0318 13:55:53.100102 1157263 api_server.go:131] duration metric: took 7.134408ms to wait for apiserver health ...
	I0318 13:55:53.100113 1157263 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 13:55:53.278961 1157263 system_pods.go:59] 9 kube-system pods found
	I0318 13:55:53.278993 1157263 system_pods.go:61] "coredns-5dd5756b68-ft594" [46e6863a-0b5e-434e-b13c-d33e9ed15007] Running
	I0318 13:55:53.278998 1157263 system_pods.go:61] "coredns-5dd5756b68-p6dw8" [c03d9bbe-1493-44a4-be19-1e387ff6eaef] Running
	I0318 13:55:53.279002 1157263 system_pods.go:61] "etcd-embed-certs-173036" [0351a0a6-7bf0-49b7-b767-b1009ea8f8b3] Running
	I0318 13:55:53.279005 1157263 system_pods.go:61] "kube-apiserver-embed-certs-173036" [d045c63b-ff93-4ebc-a727-486fbad1d1b6] Running
	I0318 13:55:53.279010 1157263 system_pods.go:61] "kube-controller-manager-embed-certs-173036" [77925f6c-f839-44ce-8438-0b2ff22eb538] Running
	I0318 13:55:53.279013 1157263 system_pods.go:61] "kube-proxy-lp9mc" [4d2d1ef6-fb3b-4910-9e70-401dfa0c47e0] Running
	I0318 13:55:53.279017 1157263 system_pods.go:61] "kube-scheduler-embed-certs-173036" [a63fa49c-e09a-43ef-b0a2-f778c256c0ab] Running
	I0318 13:55:53.279023 1157263 system_pods.go:61] "metrics-server-57f55c9bc5-vzv79" [1fc71314-b3e7-4113-b254-557ec39eef43] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:55:53.279026 1157263 system_pods.go:61] "storage-provisioner" [a37883b5-9db5-467e-9b91-40f6ea69c18e] Running
	I0318 13:55:53.279037 1157263 system_pods.go:74] duration metric: took 178.915393ms to wait for pod list to return data ...
	I0318 13:55:53.279047 1157263 default_sa.go:34] waiting for default service account to be created ...
	I0318 13:55:53.475094 1157263 default_sa.go:45] found service account: "default"
	I0318 13:55:53.475123 1157263 default_sa.go:55] duration metric: took 196.069593ms for default service account to be created ...
	I0318 13:55:53.475133 1157263 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 13:55:53.678384 1157263 system_pods.go:86] 9 kube-system pods found
	I0318 13:55:53.678413 1157263 system_pods.go:89] "coredns-5dd5756b68-ft594" [46e6863a-0b5e-434e-b13c-d33e9ed15007] Running
	I0318 13:55:53.678418 1157263 system_pods.go:89] "coredns-5dd5756b68-p6dw8" [c03d9bbe-1493-44a4-be19-1e387ff6eaef] Running
	I0318 13:55:53.678422 1157263 system_pods.go:89] "etcd-embed-certs-173036" [0351a0a6-7bf0-49b7-b767-b1009ea8f8b3] Running
	I0318 13:55:53.678427 1157263 system_pods.go:89] "kube-apiserver-embed-certs-173036" [d045c63b-ff93-4ebc-a727-486fbad1d1b6] Running
	I0318 13:55:53.678431 1157263 system_pods.go:89] "kube-controller-manager-embed-certs-173036" [77925f6c-f839-44ce-8438-0b2ff22eb538] Running
	I0318 13:55:53.678436 1157263 system_pods.go:89] "kube-proxy-lp9mc" [4d2d1ef6-fb3b-4910-9e70-401dfa0c47e0] Running
	I0318 13:55:53.678439 1157263 system_pods.go:89] "kube-scheduler-embed-certs-173036" [a63fa49c-e09a-43ef-b0a2-f778c256c0ab] Running
	I0318 13:55:53.678447 1157263 system_pods.go:89] "metrics-server-57f55c9bc5-vzv79" [1fc71314-b3e7-4113-b254-557ec39eef43] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:55:53.678455 1157263 system_pods.go:89] "storage-provisioner" [a37883b5-9db5-467e-9b91-40f6ea69c18e] Running
	I0318 13:55:53.678464 1157263 system_pods.go:126] duration metric: took 203.32588ms to wait for k8s-apps to be running ...
	I0318 13:55:53.678473 1157263 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 13:55:53.678531 1157263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:55:53.698244 1157263 system_svc.go:56] duration metric: took 19.758793ms WaitForService to wait for kubelet
	I0318 13:55:53.698279 1157263 kubeadm.go:576] duration metric: took 4.140974066s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:55:53.698307 1157263 node_conditions.go:102] verifying NodePressure condition ...
	I0318 13:55:53.876137 1157263 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:55:53.876162 1157263 node_conditions.go:123] node cpu capacity is 2
	I0318 13:55:53.876173 1157263 node_conditions.go:105] duration metric: took 177.861272ms to run NodePressure ...
	I0318 13:55:53.876184 1157263 start.go:240] waiting for startup goroutines ...
	I0318 13:55:53.876191 1157263 start.go:245] waiting for cluster config update ...
	I0318 13:55:53.876202 1157263 start.go:254] writing updated cluster config ...
	I0318 13:55:53.876907 1157263 ssh_runner.go:195] Run: rm -f paused
	I0318 13:55:53.931596 1157263 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 13:55:53.933499 1157263 out.go:177] * Done! kubectl is now configured to use "embed-certs-173036" cluster and "default" namespace by default
	I0318 13:55:56.115397 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:55:56.115674 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:55:56.115714 1157708 kubeadm.go:309] 
	I0318 13:55:56.115782 1157708 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 13:55:56.115840 1157708 kubeadm.go:309] 		timed out waiting for the condition
	I0318 13:55:56.115849 1157708 kubeadm.go:309] 
	I0318 13:55:56.115908 1157708 kubeadm.go:309] 	This error is likely caused by:
	I0318 13:55:56.115979 1157708 kubeadm.go:309] 		- The kubelet is not running
	I0318 13:55:56.116102 1157708 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 13:55:56.116112 1157708 kubeadm.go:309] 
	I0318 13:55:56.116242 1157708 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 13:55:56.116289 1157708 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 13:55:56.116349 1157708 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 13:55:56.116370 1157708 kubeadm.go:309] 
	I0318 13:55:56.116506 1157708 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 13:55:56.116645 1157708 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 13:55:56.116665 1157708 kubeadm.go:309] 
	I0318 13:55:56.116804 1157708 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 13:55:56.116897 1157708 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 13:55:56.117005 1157708 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 13:55:56.117094 1157708 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 13:55:56.117110 1157708 kubeadm.go:309] 
	I0318 13:55:56.117680 1157708 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 13:55:56.117813 1157708 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 13:55:56.117934 1157708 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0318 13:55:56.118052 1157708 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0318 13:55:56.118124 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 13:55:57.920938 1157708 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.802776126s)
	I0318 13:55:57.921031 1157708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:55:57.939226 1157708 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:55:57.952304 1157708 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:55:57.952342 1157708 kubeadm.go:156] found existing configuration files:
	
	I0318 13:55:57.952404 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:55:57.964632 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:55:57.964695 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:55:57.977306 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:55:57.989728 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:55:57.989790 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:55:58.001661 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:55:58.013078 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:55:58.013160 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:55:58.024891 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:55:58.036171 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:55:58.036225 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:55:58.048156 1157708 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 13:55:58.128356 1157708 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 13:55:58.128445 1157708 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 13:55:58.297704 1157708 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 13:55:58.297897 1157708 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 13:55:58.298048 1157708 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 13:55:58.515521 1157708 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 13:55:58.517569 1157708 out.go:204]   - Generating certificates and keys ...
	I0318 13:55:58.517679 1157708 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 13:55:58.517760 1157708 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 13:55:58.517830 1157708 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 13:55:58.517908 1157708 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 13:55:58.517980 1157708 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 13:55:58.518047 1157708 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 13:55:58.518280 1157708 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 13:55:58.519078 1157708 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 13:55:58.520081 1157708 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 13:55:58.521268 1157708 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 13:55:58.521861 1157708 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 13:55:58.521936 1157708 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 13:55:58.762418 1157708 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 13:55:58.999746 1157708 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 13:55:59.214448 1157708 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 13:55:59.402662 1157708 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 13:55:59.421555 1157708 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 13:55:59.423151 1157708 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 13:55:59.423233 1157708 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 13:55:59.560412 1157708 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 13:55:59.563125 1157708 out.go:204]   - Booting up control plane ...
	I0318 13:55:59.563274 1157708 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 13:55:59.571364 1157708 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 13:55:59.572936 1157708 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 13:55:59.573987 1157708 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 13:55:59.586689 1157708 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 13:56:39.588627 1157708 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 13:56:39.588942 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:56:39.589128 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:56:44.589564 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:56:44.589852 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:56:54.590311 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:56:54.590619 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:57:14.591571 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:57:14.591866 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:57:54.594170 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:57:54.594433 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:57:54.594448 1157708 kubeadm.go:309] 
	I0318 13:57:54.594490 1157708 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 13:57:54.594540 1157708 kubeadm.go:309] 		timed out waiting for the condition
	I0318 13:57:54.594549 1157708 kubeadm.go:309] 
	I0318 13:57:54.594594 1157708 kubeadm.go:309] 	This error is likely caused by:
	I0318 13:57:54.594641 1157708 kubeadm.go:309] 		- The kubelet is not running
	I0318 13:57:54.594800 1157708 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 13:57:54.594811 1157708 kubeadm.go:309] 
	I0318 13:57:54.594950 1157708 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 13:57:54.595000 1157708 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 13:57:54.595046 1157708 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 13:57:54.595056 1157708 kubeadm.go:309] 
	I0318 13:57:54.595163 1157708 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 13:57:54.595297 1157708 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 13:57:54.595312 1157708 kubeadm.go:309] 
	I0318 13:57:54.595471 1157708 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 13:57:54.595605 1157708 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 13:57:54.595716 1157708 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 13:57:54.595812 1157708 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 13:57:54.595827 1157708 kubeadm.go:309] 
	I0318 13:57:54.596636 1157708 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 13:57:54.596805 1157708 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 13:57:54.596972 1157708 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0318 13:57:54.597014 1157708 kubeadm.go:393] duration metric: took 8m1.551231902s to StartCluster
	I0318 13:57:54.597076 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:57:54.597174 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:57:54.649451 1157708 cri.go:89] found id: ""
	I0318 13:57:54.649484 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.649496 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:57:54.649506 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:57:54.649577 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:57:54.692278 1157708 cri.go:89] found id: ""
	I0318 13:57:54.692317 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.692339 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:57:54.692349 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:57:54.692427 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:57:54.731034 1157708 cri.go:89] found id: ""
	I0318 13:57:54.731062 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.731071 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:57:54.731077 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:57:54.731135 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:57:54.769883 1157708 cri.go:89] found id: ""
	I0318 13:57:54.769913 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.769923 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:57:54.769931 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:57:54.769996 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:57:54.808620 1157708 cri.go:89] found id: ""
	I0318 13:57:54.808648 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.808656 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:57:54.808661 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:57:54.808715 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:57:54.849207 1157708 cri.go:89] found id: ""
	I0318 13:57:54.849245 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.849256 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:57:54.849264 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:57:54.849334 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:57:54.918479 1157708 cri.go:89] found id: ""
	I0318 13:57:54.918508 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.918520 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:57:54.918528 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:57:54.918597 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:57:54.958828 1157708 cri.go:89] found id: ""
	I0318 13:57:54.958861 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.958871 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:57:54.958887 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:57:54.958906 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:57:55.078045 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:57:55.078092 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:57:55.123043 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:57:55.123077 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:57:55.180480 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:57:55.180518 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:57:55.197264 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:57:55.197316 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:57:55.291264 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0318 13:57:55.291325 1157708 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0318 13:57:55.291395 1157708 out.go:239] * 
	W0318 13:57:55.291477 1157708 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 13:57:55.291502 1157708 out.go:239] * 
	W0318 13:57:55.292511 1157708 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:57:55.295566 1157708 out.go:177] 
	W0318 13:57:55.296840 1157708 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 13:57:55.296903 1157708 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0318 13:57:55.296941 1157708 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0318 13:57:55.298417 1157708 out.go:177] 
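	For reference, the advice embedded in the kubeadm output above amounts to three checks on the node: inspect the kubelet unit, read its journal, and list the control-plane containers through CRI-O, plus the cgroup-driver retry from the Suggestion line. A minimal sketch of those commands (taken from the log itself; the profile name is an illustrative placeholder, and the log does not establish that the cgroup-driver override actually resolves this failure):
	
		systemctl status kubelet
		journalctl -xeu kubelet
		crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd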
	
	
	==> CRI-O <==
	Mar 18 14:04:30 default-k8s-diff-port-569210 crio[695]: time="2024-03-18 14:04:30.228606155Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710770670228519359,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=75ce84a7-8f7b-41e5-a792-34be55e713d5 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:04:30 default-k8s-diff-port-569210 crio[695]: time="2024-03-18 14:04:30.229367190Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d2f98f50-53da-406c-9392-6224030a3184 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:04:30 default-k8s-diff-port-569210 crio[695]: time="2024-03-18 14:04:30.229443682Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d2f98f50-53da-406c-9392-6224030a3184 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:04:30 default-k8s-diff-port-569210 crio[695]: time="2024-03-18 14:04:30.229720176Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9dcf47324a868c557381ea72f8c3bc0dce56b1bab36def329bfcff91a5c25df6,PodSandboxId:887781373b9c6a80d1f5dab89fb5c714863ed9729ad1d4cccb48ca6e4237da58,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710770128202047530,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0dfdeb1-f567-41df-98c3-7987f0fd7b2b,},Annotations:map[string]string{io.kubernetes.container.hash: 909a6a7e,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:478da5c49960ae3d9ce8ceebfef95d983383433a511bacf4b880e14255fcf23c,PodSandboxId:3f372f18c0800c7cf582878db05ab3229c1abda392a8445ba0b71cd3bb79ea06,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710770126180946114,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2pp8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 912b3f56-3df6-485f-a01a-60801b867b86,},Annotations:map[string]string{io.kubernetes.container.hash: dc0ca493,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19bd4c6331e90782afb37b345b8e00d754b69c3b3d838b6da08111b9ea14a5cc,PodSandboxId:6083b00f89dc2e3e8d73bc820422bb6be8042b49e1eb358b9c90a8b70469a590,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710770126301795954,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xdcht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf264558-6c11-44c9-82d6-ea23aea43dc9,},Annotations:map[string]string{io.kubernetes.container.hash: b0cf4d2d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caf55fa946a658cdc020920c0be13e1ce323a9ab0292a9b403fd7059000c70e1,PodSandboxId:f032f63a719f8348105bb201a8b835af4542fe3e8587eb3012a775367c461378,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710770126116394809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j5qxm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 164d2cc3-0891-4fcd-81bd-
34d7cf0c691c,},Annotations:map[string]string{io.kubernetes.container.hash: f164053c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94d2d764cf683c43937ed6f1a2a4bc29ed977ee4e55873ded3c1a90fc325a68e,PodSandboxId:0c4384bffb72e76b865b7d57a32f42eaa40e53c876b3b4f3532a009ffcde0ae6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:171077010655550461
2,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-569210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaeb8888551fdf1fa66251dad57f99eb,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f3c8b45c0b31e1a68ecd382756153908da62307b99de3ac00e94a9f0592120,PodSandboxId:b5b2d5706af19ec3b6793f4101d3c0ce85e939385bf55146c9e55fe8c32b97ed,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:17107701064
71259756,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-569210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be94838dec3ae56e7ccef51c225c25dd,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75d4752fdf387dcccc84aca83aec73463459220ac32fdb3510bd94ae21684775,PodSandboxId:ac791bedc626582dbf0e787f2f5b5fbf9626704820c08067bf84d08856c3f972,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,Creat
edAt:1710770106475618604,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-569210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ef18e5c2f20506f583d8e1ef75e4966,},Annotations:map[string]string{io.kubernetes.container.hash: b0cc1ab0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c396e8dd7d523dfd35c94244a7251f38ed0cad19ac6899846665673c43865d80,PodSandboxId:399f3c1da2a2e151138217c49ae862113fbc32c2bdeeb0d4afd579c2aee17257,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710770
106456697248,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-569210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b7b4629155c46ec82f88394148a4486,},Annotations:map[string]string{io.kubernetes.container.hash: 50fc8f6b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d2f98f50-53da-406c-9392-6224030a3184 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:04:30 default-k8s-diff-port-569210 crio[695]: time="2024-03-18 14:04:30.272458927Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6313dee1-f93c-41cc-9f1b-263d3fa9b8b4 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:04:30 default-k8s-diff-port-569210 crio[695]: time="2024-03-18 14:04:30.272727623Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6313dee1-f93c-41cc-9f1b-263d3fa9b8b4 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:04:30 default-k8s-diff-port-569210 crio[695]: time="2024-03-18 14:04:30.274497287Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4c9089cb-6def-4d96-986a-af8bfd1f56c7 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:04:30 default-k8s-diff-port-569210 crio[695]: time="2024-03-18 14:04:30.274919123Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710770670274898014,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4c9089cb-6def-4d96-986a-af8bfd1f56c7 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:04:30 default-k8s-diff-port-569210 crio[695]: time="2024-03-18 14:04:30.275935035Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ba05f0b8-e00d-484e-8a3f-f874408f7889 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:04:30 default-k8s-diff-port-569210 crio[695]: time="2024-03-18 14:04:30.276022275Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ba05f0b8-e00d-484e-8a3f-f874408f7889 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:04:30 default-k8s-diff-port-569210 crio[695]: time="2024-03-18 14:04:30.276530564Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9dcf47324a868c557381ea72f8c3bc0dce56b1bab36def329bfcff91a5c25df6,PodSandboxId:887781373b9c6a80d1f5dab89fb5c714863ed9729ad1d4cccb48ca6e4237da58,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710770128202047530,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0dfdeb1-f567-41df-98c3-7987f0fd7b2b,},Annotations:map[string]string{io.kubernetes.container.hash: 909a6a7e,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:478da5c49960ae3d9ce8ceebfef95d983383433a511bacf4b880e14255fcf23c,PodSandboxId:3f372f18c0800c7cf582878db05ab3229c1abda392a8445ba0b71cd3bb79ea06,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710770126180946114,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2pp8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 912b3f56-3df6-485f-a01a-60801b867b86,},Annotations:map[string]string{io.kubernetes.container.hash: dc0ca493,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19bd4c6331e90782afb37b345b8e00d754b69c3b3d838b6da08111b9ea14a5cc,PodSandboxId:6083b00f89dc2e3e8d73bc820422bb6be8042b49e1eb358b9c90a8b70469a590,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710770126301795954,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xdcht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf264558-6c11-44c9-82d6-ea23aea43dc9,},Annotations:map[string]string{io.kubernetes.container.hash: b0cf4d2d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caf55fa946a658cdc020920c0be13e1ce323a9ab0292a9b403fd7059000c70e1,PodSandboxId:f032f63a719f8348105bb201a8b835af4542fe3e8587eb3012a775367c461378,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710770126116394809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j5qxm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 164d2cc3-0891-4fcd-81bd-
34d7cf0c691c,},Annotations:map[string]string{io.kubernetes.container.hash: f164053c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94d2d764cf683c43937ed6f1a2a4bc29ed977ee4e55873ded3c1a90fc325a68e,PodSandboxId:0c4384bffb72e76b865b7d57a32f42eaa40e53c876b3b4f3532a009ffcde0ae6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:171077010655550461
2,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-569210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaeb8888551fdf1fa66251dad57f99eb,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f3c8b45c0b31e1a68ecd382756153908da62307b99de3ac00e94a9f0592120,PodSandboxId:b5b2d5706af19ec3b6793f4101d3c0ce85e939385bf55146c9e55fe8c32b97ed,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:17107701064
71259756,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-569210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be94838dec3ae56e7ccef51c225c25dd,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75d4752fdf387dcccc84aca83aec73463459220ac32fdb3510bd94ae21684775,PodSandboxId:ac791bedc626582dbf0e787f2f5b5fbf9626704820c08067bf84d08856c3f972,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,Creat
edAt:1710770106475618604,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-569210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ef18e5c2f20506f583d8e1ef75e4966,},Annotations:map[string]string{io.kubernetes.container.hash: b0cc1ab0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c396e8dd7d523dfd35c94244a7251f38ed0cad19ac6899846665673c43865d80,PodSandboxId:399f3c1da2a2e151138217c49ae862113fbc32c2bdeeb0d4afd579c2aee17257,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710770
106456697248,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-569210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b7b4629155c46ec82f88394148a4486,},Annotations:map[string]string{io.kubernetes.container.hash: 50fc8f6b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ba05f0b8-e00d-484e-8a3f-f874408f7889 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:04:30 default-k8s-diff-port-569210 crio[695]: time="2024-03-18 14:04:30.316991478Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=229e7385-ebe4-49ee-b3a3-a4614f7ea631 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:04:30 default-k8s-diff-port-569210 crio[695]: time="2024-03-18 14:04:30.317091296Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=229e7385-ebe4-49ee-b3a3-a4614f7ea631 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:04:30 default-k8s-diff-port-569210 crio[695]: time="2024-03-18 14:04:30.318872820Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a816e801-0cd7-46d9-b0bf-dc014b4f0d13 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:04:30 default-k8s-diff-port-569210 crio[695]: time="2024-03-18 14:04:30.319408160Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710770670319379730,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a816e801-0cd7-46d9-b0bf-dc014b4f0d13 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:04:30 default-k8s-diff-port-569210 crio[695]: time="2024-03-18 14:04:30.320026356Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e267f3cd-e149-49f6-ae9a-c554dc4f9fa5 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:04:30 default-k8s-diff-port-569210 crio[695]: time="2024-03-18 14:04:30.320089298Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e267f3cd-e149-49f6-ae9a-c554dc4f9fa5 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:04:30 default-k8s-diff-port-569210 crio[695]: time="2024-03-18 14:04:30.320452937Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9dcf47324a868c557381ea72f8c3bc0dce56b1bab36def329bfcff91a5c25df6,PodSandboxId:887781373b9c6a80d1f5dab89fb5c714863ed9729ad1d4cccb48ca6e4237da58,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710770128202047530,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0dfdeb1-f567-41df-98c3-7987f0fd7b2b,},Annotations:map[string]string{io.kubernetes.container.hash: 909a6a7e,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:478da5c49960ae3d9ce8ceebfef95d983383433a511bacf4b880e14255fcf23c,PodSandboxId:3f372f18c0800c7cf582878db05ab3229c1abda392a8445ba0b71cd3bb79ea06,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710770126180946114,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2pp8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 912b3f56-3df6-485f-a01a-60801b867b86,},Annotations:map[string]string{io.kubernetes.container.hash: dc0ca493,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19bd4c6331e90782afb37b345b8e00d754b69c3b3d838b6da08111b9ea14a5cc,PodSandboxId:6083b00f89dc2e3e8d73bc820422bb6be8042b49e1eb358b9c90a8b70469a590,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710770126301795954,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xdcht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf264558-6c11-44c9-82d6-ea23aea43dc9,},Annotations:map[string]string{io.kubernetes.container.hash: b0cf4d2d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caf55fa946a658cdc020920c0be13e1ce323a9ab0292a9b403fd7059000c70e1,PodSandboxId:f032f63a719f8348105bb201a8b835af4542fe3e8587eb3012a775367c461378,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710770126116394809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j5qxm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 164d2cc3-0891-4fcd-81bd-
34d7cf0c691c,},Annotations:map[string]string{io.kubernetes.container.hash: f164053c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94d2d764cf683c43937ed6f1a2a4bc29ed977ee4e55873ded3c1a90fc325a68e,PodSandboxId:0c4384bffb72e76b865b7d57a32f42eaa40e53c876b3b4f3532a009ffcde0ae6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:171077010655550461
2,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-569210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaeb8888551fdf1fa66251dad57f99eb,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f3c8b45c0b31e1a68ecd382756153908da62307b99de3ac00e94a9f0592120,PodSandboxId:b5b2d5706af19ec3b6793f4101d3c0ce85e939385bf55146c9e55fe8c32b97ed,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:17107701064
71259756,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-569210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be94838dec3ae56e7ccef51c225c25dd,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75d4752fdf387dcccc84aca83aec73463459220ac32fdb3510bd94ae21684775,PodSandboxId:ac791bedc626582dbf0e787f2f5b5fbf9626704820c08067bf84d08856c3f972,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,Creat
edAt:1710770106475618604,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-569210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ef18e5c2f20506f583d8e1ef75e4966,},Annotations:map[string]string{io.kubernetes.container.hash: b0cc1ab0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c396e8dd7d523dfd35c94244a7251f38ed0cad19ac6899846665673c43865d80,PodSandboxId:399f3c1da2a2e151138217c49ae862113fbc32c2bdeeb0d4afd579c2aee17257,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710770
106456697248,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-569210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b7b4629155c46ec82f88394148a4486,},Annotations:map[string]string{io.kubernetes.container.hash: 50fc8f6b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e267f3cd-e149-49f6-ae9a-c554dc4f9fa5 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:04:30 default-k8s-diff-port-569210 crio[695]: time="2024-03-18 14:04:30.370600630Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=735f2064-c6ed-4fa0-9c69-2dd8f5fdc0db name=/runtime.v1.RuntimeService/Version
	Mar 18 14:04:30 default-k8s-diff-port-569210 crio[695]: time="2024-03-18 14:04:30.370883068Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=735f2064-c6ed-4fa0-9c69-2dd8f5fdc0db name=/runtime.v1.RuntimeService/Version
	Mar 18 14:04:30 default-k8s-diff-port-569210 crio[695]: time="2024-03-18 14:04:30.372450161Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6d861be0-0844-49ef-bc7a-56f0abfeb360 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:04:30 default-k8s-diff-port-569210 crio[695]: time="2024-03-18 14:04:30.372896123Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710770670372872288,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6d861be0-0844-49ef-bc7a-56f0abfeb360 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:04:30 default-k8s-diff-port-569210 crio[695]: time="2024-03-18 14:04:30.373575585Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7c931354-38e5-4592-8b02-6a86b596d2cb name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:04:30 default-k8s-diff-port-569210 crio[695]: time="2024-03-18 14:04:30.373721130Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7c931354-38e5-4592-8b02-6a86b596d2cb name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:04:30 default-k8s-diff-port-569210 crio[695]: time="2024-03-18 14:04:30.373991181Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9dcf47324a868c557381ea72f8c3bc0dce56b1bab36def329bfcff91a5c25df6,PodSandboxId:887781373b9c6a80d1f5dab89fb5c714863ed9729ad1d4cccb48ca6e4237da58,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710770128202047530,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0dfdeb1-f567-41df-98c3-7987f0fd7b2b,},Annotations:map[string]string{io.kubernetes.container.hash: 909a6a7e,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:478da5c49960ae3d9ce8ceebfef95d983383433a511bacf4b880e14255fcf23c,PodSandboxId:3f372f18c0800c7cf582878db05ab3229c1abda392a8445ba0b71cd3bb79ea06,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710770126180946114,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2pp8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 912b3f56-3df6-485f-a01a-60801b867b86,},Annotations:map[string]string{io.kubernetes.container.hash: dc0ca493,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19bd4c6331e90782afb37b345b8e00d754b69c3b3d838b6da08111b9ea14a5cc,PodSandboxId:6083b00f89dc2e3e8d73bc820422bb6be8042b49e1eb358b9c90a8b70469a590,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710770126301795954,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xdcht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf264558-6c11-44c9-82d6-ea23aea43dc9,},Annotations:map[string]string{io.kubernetes.container.hash: b0cf4d2d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caf55fa946a658cdc020920c0be13e1ce323a9ab0292a9b403fd7059000c70e1,PodSandboxId:f032f63a719f8348105bb201a8b835af4542fe3e8587eb3012a775367c461378,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710770126116394809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j5qxm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 164d2cc3-0891-4fcd-81bd-
34d7cf0c691c,},Annotations:map[string]string{io.kubernetes.container.hash: f164053c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94d2d764cf683c43937ed6f1a2a4bc29ed977ee4e55873ded3c1a90fc325a68e,PodSandboxId:0c4384bffb72e76b865b7d57a32f42eaa40e53c876b3b4f3532a009ffcde0ae6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:171077010655550461
2,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-569210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaeb8888551fdf1fa66251dad57f99eb,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f3c8b45c0b31e1a68ecd382756153908da62307b99de3ac00e94a9f0592120,PodSandboxId:b5b2d5706af19ec3b6793f4101d3c0ce85e939385bf55146c9e55fe8c32b97ed,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:17107701064
71259756,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-569210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be94838dec3ae56e7ccef51c225c25dd,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75d4752fdf387dcccc84aca83aec73463459220ac32fdb3510bd94ae21684775,PodSandboxId:ac791bedc626582dbf0e787f2f5b5fbf9626704820c08067bf84d08856c3f972,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,Creat
edAt:1710770106475618604,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-569210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ef18e5c2f20506f583d8e1ef75e4966,},Annotations:map[string]string{io.kubernetes.container.hash: b0cc1ab0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c396e8dd7d523dfd35c94244a7251f38ed0cad19ac6899846665673c43865d80,PodSandboxId:399f3c1da2a2e151138217c49ae862113fbc32c2bdeeb0d4afd579c2aee17257,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710770
106456697248,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-569210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b7b4629155c46ec82f88394148a4486,},Annotations:map[string]string{io.kubernetes.container.hash: 50fc8f6b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7c931354-38e5-4592-8b02-6a86b596d2cb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9dcf47324a868       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   887781373b9c6       storage-provisioner
	19bd4c6331e90       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   6083b00f89dc2       coredns-5dd5756b68-xdcht
	478da5c49960a       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   9 minutes ago       Running             kube-proxy                0                   3f372f18c0800       kube-proxy-2pp8z
	caf55fa946a65       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   f032f63a719f8       coredns-5dd5756b68-j5qxm
	94d2d764cf683       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   9 minutes ago       Running             kube-scheduler            2                   0c4384bffb72e       kube-scheduler-default-k8s-diff-port-569210
	75d4752fdf387       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   9 minutes ago       Running             kube-apiserver            2                   ac791bedc6265       kube-apiserver-default-k8s-diff-port-569210
	14f3c8b45c0b3       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   9 minutes ago       Running             kube-controller-manager   2                   b5b2d5706af19       kube-controller-manager-default-k8s-diff-port-569210
	c396e8dd7d523       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   9 minutes ago       Running             etcd                      2                   399f3c1da2a2e       etcd-default-k8s-diff-port-569210
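	The listing above is the same view the earlier kubeadm advice would surface via crictl. A minimal sketch, assuming the CRI-O socket path reported in the node annotations below (unix:///var/run/crio/crio.sock) and using CONTAINERID as a stand-in for one of the IDs in the first column:
	
		crictl --runtime-endpoint /var/run/crio/crio.sock ps -a
		crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID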
	
	
	==> coredns [19bd4c6331e90782afb37b345b8e00d754b69c3b3d838b6da08111b9ea14a5cc] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> coredns [caf55fa946a658cdc020920c0be13e1ce323a9ab0292a9b403fd7059000c70e1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-569210
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-569210
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	                    minikube.k8s.io/name=default-k8s-diff-port-569210
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T13_55_13_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 13:55:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-569210
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 14:04:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 14:00:40 +0000   Mon, 18 Mar 2024 13:55:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 14:00:40 +0000   Mon, 18 Mar 2024 13:55:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 14:00:40 +0000   Mon, 18 Mar 2024 13:55:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 14:00:40 +0000   Mon, 18 Mar 2024 13:55:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.3
	  Hostname:    default-k8s-diff-port-569210
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 452594090f9f4e72aa58a6b8f1d38292
	  System UUID:                45259409-0f9f-4e72-aa58-a6b8f1d38292
	  Boot ID:                    81ff9704-4e6c-45fb-831e-9145078fe898
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-j5qxm                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m5s
	  kube-system                 coredns-5dd5756b68-xdcht                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m5s
	  kube-system                 etcd-default-k8s-diff-port-569210                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m17s
	  kube-system                 kube-apiserver-default-k8s-diff-port-569210             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m17s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-569210    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-proxy-2pp8z                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	  kube-system                 kube-scheduler-default-k8s-diff-port-569210             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 metrics-server-57f55c9bc5-ng9ww                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m3s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m3s   kube-proxy       
	  Normal  Starting                 9m18s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m18s  kubelet          Node default-k8s-diff-port-569210 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m18s  kubelet          Node default-k8s-diff-port-569210 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m18s  kubelet          Node default-k8s-diff-port-569210 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m18s  kubelet          Node default-k8s-diff-port-569210 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m18s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m17s  kubelet          Node default-k8s-diff-port-569210 status is now: NodeReady
	  Normal  RegisteredNode           9m6s   node-controller  Node default-k8s-diff-port-569210 event: Registered Node default-k8s-diff-port-569210 in Controller
	
	
	==> dmesg <==
	[  +0.044329] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.048996] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.583982] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.733421] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Mar18 13:50] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.064007] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.077519] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.199814] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.161254] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.296299] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[  +5.561320] systemd-fstab-generator[777]: Ignoring "noauto" option for root device
	[  +0.067077] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.367037] systemd-fstab-generator[910]: Ignoring "noauto" option for root device
	[  +4.556984] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.052086] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.796387] kauditd_printk_skb: 2 callbacks suppressed
	[Mar18 13:55] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.589350] systemd-fstab-generator[3427]: Ignoring "noauto" option for root device
	[  +4.530372] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.765669] systemd-fstab-generator[3752]: Ignoring "noauto" option for root device
	[ +12.441698] systemd-fstab-generator[3942]: Ignoring "noauto" option for root device
	[  +0.131416] kauditd_printk_skb: 14 callbacks suppressed
	[Mar18 13:56] kauditd_printk_skb: 78 callbacks suppressed
	
	
	==> etcd [c396e8dd7d523dfd35c94244a7251f38ed0cad19ac6899846665673c43865d80] <==
	{"level":"info","ts":"2024-03-18T13:55:06.898489Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bd69003d43e617bf switched to configuration voters=(13648440408855156671)"}
	{"level":"info","ts":"2024-03-18T13:55:06.898668Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"bd78613cdcde8fe4","local-member-id":"bd69003d43e617bf","added-peer-id":"bd69003d43e617bf","added-peer-peer-urls":["https://192.168.61.3:2380"]}
	{"level":"info","ts":"2024-03-18T13:55:06.899487Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-18T13:55:06.903664Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.3:2380"}
	{"level":"info","ts":"2024-03-18T13:55:06.903952Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.3:2380"}
	{"level":"info","ts":"2024-03-18T13:55:06.904594Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"bd69003d43e617bf","initial-advertise-peer-urls":["https://192.168.61.3:2380"],"listen-peer-urls":["https://192.168.61.3:2380"],"advertise-client-urls":["https://192.168.61.3:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.3:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-18T13:55:06.904612Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-18T13:55:07.537296Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bd69003d43e617bf is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-18T13:55:07.53741Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bd69003d43e617bf became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-18T13:55:07.537445Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bd69003d43e617bf received MsgPreVoteResp from bd69003d43e617bf at term 1"}
	{"level":"info","ts":"2024-03-18T13:55:07.537478Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bd69003d43e617bf became candidate at term 2"}
	{"level":"info","ts":"2024-03-18T13:55:07.537509Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bd69003d43e617bf received MsgVoteResp from bd69003d43e617bf at term 2"}
	{"level":"info","ts":"2024-03-18T13:55:07.537536Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bd69003d43e617bf became leader at term 2"}
	{"level":"info","ts":"2024-03-18T13:55:07.537561Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: bd69003d43e617bf elected leader bd69003d43e617bf at term 2"}
	{"level":"info","ts":"2024-03-18T13:55:07.540414Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T13:55:07.54453Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"bd69003d43e617bf","local-member-attributes":"{Name:default-k8s-diff-port-569210 ClientURLs:[https://192.168.61.3:2379]}","request-path":"/0/members/bd69003d43e617bf/attributes","cluster-id":"bd78613cdcde8fe4","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-18T13:55:07.54461Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T13:55:07.54787Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-18T13:55:07.548243Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T13:55:07.551078Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"bd78613cdcde8fe4","local-member-id":"bd69003d43e617bf","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T13:55:07.551266Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T13:55:07.551314Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T13:55:07.558406Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.3:2379"}
	{"level":"info","ts":"2024-03-18T13:55:07.585232Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-18T13:55:07.585276Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 14:04:30 up 14 min,  0 users,  load average: 0.22, 0.35, 0.27
	Linux default-k8s-diff-port-569210 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [75d4752fdf387dcccc84aca83aec73463459220ac32fdb3510bd94ae21684775] <==
	W0318 14:00:10.565723       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:00:10.565867       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0318 14:00:10.565901       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 14:00:10.565770       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:00:10.566058       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 14:00:10.568081       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0318 14:01:09.452891       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0318 14:01:10.566814       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:01:10.566890       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0318 14:01:10.566902       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 14:01:10.570389       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:01:10.570515       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 14:01:10.570549       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0318 14:02:09.453036       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0318 14:03:09.453334       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0318 14:03:10.567100       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:03:10.567317       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0318 14:03:10.567350       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 14:03:10.571876       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:03:10.571969       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 14:03:10.572007       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0318 14:04:09.453582       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	
	==> kube-controller-manager [14f3c8b45c0b31e1a68ecd382756153908da62307b99de3ac00e94a9f0592120] <==
	I0318 13:58:54.984541       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 13:59:24.569614       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 13:59:24.993646       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 13:59:54.574902       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 13:59:55.001952       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:00:24.581677       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:00:25.010141       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:00:54.589967       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:00:55.018583       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:01:24.595824       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:01:25.029672       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0318 14:01:29.882903       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="299.204µs"
	I0318 14:01:42.884706       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="112.674µs"
	E0318 14:01:54.602478       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:01:55.038563       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:02:24.609083       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:02:25.048796       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:02:54.615080       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:02:55.058047       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:03:24.622128       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:03:25.068374       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:03:54.628161       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:03:55.079310       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:04:24.634667       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:04:25.090143       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [478da5c49960ae3d9ce8ceebfef95d983383433a511bacf4b880e14255fcf23c] <==
	I0318 13:55:26.905082       1 server_others.go:69] "Using iptables proxy"
	I0318 13:55:26.942887       1 node.go:141] Successfully retrieved node IP: 192.168.61.3
	I0318 13:55:27.115653       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 13:55:27.115704       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 13:55:27.142655       1 server_others.go:152] "Using iptables Proxier"
	I0318 13:55:27.144075       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 13:55:27.144326       1 server.go:846] "Version info" version="v1.28.4"
	I0318 13:55:27.144339       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:55:27.158732       1 config.go:315] "Starting node config controller"
	I0318 13:55:27.158766       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 13:55:27.162325       1 config.go:188] "Starting service config controller"
	I0318 13:55:27.162415       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 13:55:27.162438       1 config.go:97] "Starting endpoint slice config controller"
	I0318 13:55:27.162442       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 13:55:27.259250       1 shared_informer.go:318] Caches are synced for node config
	I0318 13:55:27.263490       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 13:55:27.263549       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [94d2d764cf683c43937ed6f1a2a4bc29ed977ee4e55873ded3c1a90fc325a68e] <==
	W0318 13:55:09.623938       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0318 13:55:09.624056       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0318 13:55:09.624295       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0318 13:55:09.626801       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0318 13:55:09.626004       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0318 13:55:09.627136       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0318 13:55:09.626058       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0318 13:55:09.627387       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0318 13:55:09.626141       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0318 13:55:09.627477       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0318 13:55:09.626597       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0318 13:55:09.627695       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0318 13:55:10.596785       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0318 13:55:10.596884       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0318 13:55:10.656032       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0318 13:55:10.656162       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0318 13:55:10.747503       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0318 13:55:10.747558       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0318 13:55:10.777289       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0318 13:55:10.777350       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0318 13:55:10.783163       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0318 13:55:10.783266       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0318 13:55:10.797573       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0318 13:55:10.797628       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 13:55:12.601416       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 18 14:02:12 default-k8s-diff-port-569210 kubelet[3758]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 14:02:12 default-k8s-diff-port-569210 kubelet[3758]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 14:02:12 default-k8s-diff-port-569210 kubelet[3758]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 14:02:12 default-k8s-diff-port-569210 kubelet[3758]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 14:02:22 default-k8s-diff-port-569210 kubelet[3758]: E0318 14:02:22.867993    3758 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ng9ww" podUID="4c8209dc-b6ba-427d-ba32-0da4993b0902"
	Mar 18 14:02:35 default-k8s-diff-port-569210 kubelet[3758]: E0318 14:02:35.864578    3758 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ng9ww" podUID="4c8209dc-b6ba-427d-ba32-0da4993b0902"
	Mar 18 14:02:48 default-k8s-diff-port-569210 kubelet[3758]: E0318 14:02:48.865306    3758 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ng9ww" podUID="4c8209dc-b6ba-427d-ba32-0da4993b0902"
	Mar 18 14:02:59 default-k8s-diff-port-569210 kubelet[3758]: E0318 14:02:59.863469    3758 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ng9ww" podUID="4c8209dc-b6ba-427d-ba32-0da4993b0902"
	Mar 18 14:03:12 default-k8s-diff-port-569210 kubelet[3758]: E0318 14:03:12.863716    3758 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ng9ww" podUID="4c8209dc-b6ba-427d-ba32-0da4993b0902"
	Mar 18 14:03:12 default-k8s-diff-port-569210 kubelet[3758]: E0318 14:03:12.917994    3758 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 14:03:12 default-k8s-diff-port-569210 kubelet[3758]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 14:03:12 default-k8s-diff-port-569210 kubelet[3758]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 14:03:12 default-k8s-diff-port-569210 kubelet[3758]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 14:03:12 default-k8s-diff-port-569210 kubelet[3758]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 14:03:24 default-k8s-diff-port-569210 kubelet[3758]: E0318 14:03:24.865474    3758 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ng9ww" podUID="4c8209dc-b6ba-427d-ba32-0da4993b0902"
	Mar 18 14:03:35 default-k8s-diff-port-569210 kubelet[3758]: E0318 14:03:35.864793    3758 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ng9ww" podUID="4c8209dc-b6ba-427d-ba32-0da4993b0902"
	Mar 18 14:03:48 default-k8s-diff-port-569210 kubelet[3758]: E0318 14:03:48.864254    3758 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ng9ww" podUID="4c8209dc-b6ba-427d-ba32-0da4993b0902"
	Mar 18 14:04:02 default-k8s-diff-port-569210 kubelet[3758]: E0318 14:04:02.865497    3758 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ng9ww" podUID="4c8209dc-b6ba-427d-ba32-0da4993b0902"
	Mar 18 14:04:12 default-k8s-diff-port-569210 kubelet[3758]: E0318 14:04:12.913390    3758 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 14:04:12 default-k8s-diff-port-569210 kubelet[3758]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 14:04:12 default-k8s-diff-port-569210 kubelet[3758]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 14:04:12 default-k8s-diff-port-569210 kubelet[3758]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 14:04:12 default-k8s-diff-port-569210 kubelet[3758]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 14:04:15 default-k8s-diff-port-569210 kubelet[3758]: E0318 14:04:15.863507    3758 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ng9ww" podUID="4c8209dc-b6ba-427d-ba32-0da4993b0902"
	Mar 18 14:04:27 default-k8s-diff-port-569210 kubelet[3758]: E0318 14:04:27.864264    3758 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ng9ww" podUID="4c8209dc-b6ba-427d-ba32-0da4993b0902"
	
	
	==> storage-provisioner [9dcf47324a868c557381ea72f8c3bc0dce56b1bab36def329bfcff91a5c25df6] <==
	I0318 13:55:28.320506       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0318 13:55:28.336971       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0318 13:55:28.337009       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0318 13:55:28.356231       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0318 13:55:28.356418       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-569210_2f961dda-9106-4ac5-ba06-b638d34747c6!
	I0318 13:55:28.358832       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1ceb429c-3e85-449d-9d24-79a90659fe08", APIVersion:"v1", ResourceVersion:"421", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-569210_2f961dda-9106-4ac5-ba06-b638d34747c6 became leader
	I0318 13:55:28.458973       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-569210_2f961dda-9106-4ac5-ba06-b638d34747c6!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-569210 -n default-k8s-diff-port-569210
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-569210 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-ng9ww
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-569210 describe pod metrics-server-57f55c9bc5-ng9ww
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-569210 describe pod metrics-server-57f55c9bc5-ng9ww: exit status 1 (65.900819ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-ng9ww" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-569210 describe pod metrics-server-57f55c9bc5-ng9ww: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.33s)
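Note on the NotFound error above: the post-mortem describe command is run without a namespace, so kubectl looks for metrics-server-57f55c9bc5-ng9ww in the default namespace rather than kube-system, where the non-running pod was actually listed. A minimal manual re-check along these lines (the context name is taken from this run; the k8s-app=metrics-server label is an assumption about the addon's manifest, not something shown in the logs) would be:

	# illustrative only: context copied from this run, label is an assumption
	kubectl --context default-k8s-diff-port-569210 -n kube-system get pods -l k8s-app=metrics-server
	kubectl --context default-k8s-diff-port-569210 -n kube-system describe pods -l k8s-app=metrics-server

Selecting by label also sidesteps the hard-coded pod name, which can go stale if the ReplicaSet replaces the pod between the list and the describe.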

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0318 13:56:24.905612 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/functional-377562/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-173036 -n embed-certs-173036
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-03-18 14:04:54.540374497 +0000 UTC m=+6571.927288760
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
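The wait above polls for up to 9m0s for any pod carrying the k8s-app=kubernetes-dashboard label in the kubernetes-dashboard namespace. A rough manual equivalent, assuming the embed-certs-173036 context from this run (a sketch only, not how the Go helper is implemented), is:

	kubectl --context embed-certs-173036 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard --watch

For reference, the 'addons enable dashboard -p embed-certs-173036' entry in the Audit table below shows no recorded end time.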
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-173036 -n embed-certs-173036
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-173036 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-173036 logs -n 25: (2.15107781s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-909137                              | old-k8s-version-909137       | jenkins | v1.32.0 | 18 Mar 24 13:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-599578                           | kubernetes-upgrade-599578    | jenkins | v1.32.0 | 18 Mar 24 13:39 UTC | 18 Mar 24 13:39 UTC |
	| start   | -p no-preload-537236                                   | no-preload-537236            | jenkins | v1.32.0 | 18 Mar 24 13:39 UTC | 18 Mar 24 13:41 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p cert-expiration-537883                              | cert-expiration-537883       | jenkins | v1.32.0 | 18 Mar 24 13:40 UTC | 18 Mar 24 13:41 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p pause-760389                                        | pause-760389                 | jenkins | v1.32.0 | 18 Mar 24 13:40 UTC | 18 Mar 24 13:40 UTC |
	| start   | -p embed-certs-173036                                  | embed-certs-173036           | jenkins | v1.32.0 | 18 Mar 24 13:40 UTC | 18 Mar 24 13:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-537883                              | cert-expiration-537883       | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	| delete  | -p                                                     | disable-driver-mounts-173866 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	|         | disable-driver-mounts-173866                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-569210 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:42 UTC |
	|         | default-k8s-diff-port-569210                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-173036            | embed-certs-173036           | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-173036                                  | embed-certs-173036           | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-537236             | no-preload-537236            | jenkins | v1.32.0 | 18 Mar 24 13:42 UTC | 18 Mar 24 13:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-537236                                   | no-preload-537236            | jenkins | v1.32.0 | 18 Mar 24 13:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-569210  | default-k8s-diff-port-569210 | jenkins | v1.32.0 | 18 Mar 24 13:43 UTC | 18 Mar 24 13:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-569210 | jenkins | v1.32.0 | 18 Mar 24 13:43 UTC |                     |
	|         | default-k8s-diff-port-569210                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-909137        | old-k8s-version-909137       | jenkins | v1.32.0 | 18 Mar 24 13:43 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-173036                 | embed-certs-173036           | jenkins | v1.32.0 | 18 Mar 24 13:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-173036                                  | embed-certs-173036           | jenkins | v1.32.0 | 18 Mar 24 13:44 UTC | 18 Mar 24 13:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-537236                  | no-preload-537236            | jenkins | v1.32.0 | 18 Mar 24 13:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-537236                                   | no-preload-537236            | jenkins | v1.32.0 | 18 Mar 24 13:44 UTC | 18 Mar 24 13:55 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-909137                              | old-k8s-version-909137       | jenkins | v1.32.0 | 18 Mar 24 13:45 UTC | 18 Mar 24 13:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-909137             | old-k8s-version-909137       | jenkins | v1.32.0 | 18 Mar 24 13:45 UTC | 18 Mar 24 13:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-909137                              | old-k8s-version-909137       | jenkins | v1.32.0 | 18 Mar 24 13:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-569210       | default-k8s-diff-port-569210 | jenkins | v1.32.0 | 18 Mar 24 13:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-569210 | jenkins | v1.32.0 | 18 Mar 24 13:45 UTC | 18 Mar 24 13:55 UTC |
	|         | default-k8s-diff-port-569210                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 13:45:41
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 13:45:41.667747 1157887 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:45:41.667937 1157887 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:45:41.667952 1157887 out.go:304] Setting ErrFile to fd 2...
	I0318 13:45:41.667958 1157887 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:45:41.668616 1157887 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
	I0318 13:45:41.669251 1157887 out.go:298] Setting JSON to false
	I0318 13:45:41.670283 1157887 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":19689,"bootTime":1710749853,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 13:45:41.670349 1157887 start.go:139] virtualization: kvm guest
	I0318 13:45:41.672702 1157887 out.go:177] * [default-k8s-diff-port-569210] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 13:45:41.674325 1157887 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 13:45:41.674336 1157887 notify.go:220] Checking for updates...
	I0318 13:45:41.675874 1157887 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:45:41.677543 1157887 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 13:45:41.679053 1157887 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18429-1106816/.minikube
	I0318 13:45:41.680344 1157887 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 13:45:41.681702 1157887 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:45:41.683304 1157887 config.go:182] Loaded profile config "default-k8s-diff-port-569210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:45:41.683743 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:45:41.683792 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:45:41.698719 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44147
	I0318 13:45:41.699154 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:45:41.699657 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:45:41.699676 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:45:41.699995 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:45:41.700168 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:45:41.700488 1157887 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:45:41.700763 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:45:41.700803 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:45:41.715824 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44459
	I0318 13:45:41.716270 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:45:41.716688 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:45:41.716708 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:45:41.717004 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:45:41.717185 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:45:41.747564 1157887 out.go:177] * Using the kvm2 driver based on existing profile
	I0318 13:45:41.748930 1157887 start.go:297] selected driver: kvm2
	I0318 13:45:41.748944 1157887 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-569210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-569210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.3 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:45:41.749059 1157887 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:45:41.749725 1157887 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:45:41.749819 1157887 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18429-1106816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 13:45:41.764225 1157887 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 13:45:41.764607 1157887 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:45:41.764679 1157887 cni.go:84] Creating CNI manager for ""
	I0318 13:45:41.764692 1157887 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:45:41.764727 1157887 start.go:340] cluster config:
	{Name:default-k8s-diff-port-569210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-569210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.3 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:45:41.764824 1157887 iso.go:125] acquiring lock: {Name:mke5f9989ad60de6f54f25c411af7da9f3932a4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:45:41.766561 1157887 out.go:177] * Starting "default-k8s-diff-port-569210" primary control-plane node in "default-k8s-diff-port-569210" cluster
	I0318 13:45:40.044635 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:45:41.767747 1157887 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 13:45:41.767779 1157887 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0318 13:45:41.767799 1157887 cache.go:56] Caching tarball of preloaded images
	I0318 13:45:41.767876 1157887 preload.go:173] Found /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 13:45:41.767887 1157887 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0318 13:45:41.767986 1157887 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/config.json ...
	I0318 13:45:41.768151 1157887 start.go:360] acquireMachinesLock for default-k8s-diff-port-569210: {Name:mk0b1a2e71faf079d0c16c4e1393bdff17be3dfd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:45:46.124607 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:45:49.196561 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:45:55.276657 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:45:58.348606 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:04.428632 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:07.500592 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:13.584558 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:16.652578 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:22.732573 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:25.804745 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:31.884579 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:34.956708 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:41.036614 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:44.108576 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:50.188610 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:53.260646 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:59.340724 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:02.412698 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:08.492603 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:11.564634 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:17.644618 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:20.716642 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:26.796585 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:29.868690 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:35.948613 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:39.020607 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:45.104563 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:48.172547 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:54.252608 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:57.324659 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:03.404600 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:06.476647 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:12.556609 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:15.628640 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:21.708597 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:24.780572 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:30.860662 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:33.932528 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:40.012616 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:43.084569 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:49.164622 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:52.236652 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:58.316619 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:49:01.321139 1157416 start.go:364] duration metric: took 4m21.279664055s to acquireMachinesLock for "no-preload-537236"
	I0318 13:49:01.321252 1157416 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:49:01.321260 1157416 fix.go:54] fixHost starting: 
	I0318 13:49:01.321627 1157416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:49:01.321658 1157416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:49:01.337337 1157416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39431
	I0318 13:49:01.337793 1157416 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:49:01.338235 1157416 main.go:141] libmachine: Using API Version  1
	I0318 13:49:01.338262 1157416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:49:01.338703 1157416 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:49:01.338892 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:49:01.339025 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetState
	I0318 13:49:01.340630 1157416 fix.go:112] recreateIfNeeded on no-preload-537236: state=Stopped err=<nil>
	I0318 13:49:01.340653 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	W0318 13:49:01.340785 1157416 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:49:01.342565 1157416 out.go:177] * Restarting existing kvm2 VM for "no-preload-537236" ...
	I0318 13:49:01.318340 1157263 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:49:01.318378 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetMachineName
	I0318 13:49:01.318795 1157263 buildroot.go:166] provisioning hostname "embed-certs-173036"
	I0318 13:49:01.318829 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetMachineName
	I0318 13:49:01.319041 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:49:01.321007 1157263 machine.go:97] duration metric: took 4m37.382603693s to provisionDockerMachine
	I0318 13:49:01.321051 1157263 fix.go:56] duration metric: took 4m37.403420427s for fixHost
	I0318 13:49:01.321064 1157263 start.go:83] releasing machines lock for "embed-certs-173036", held for 4m37.403446357s
	W0318 13:49:01.321088 1157263 start.go:713] error starting host: provision: host is not running
	W0318 13:49:01.321225 1157263 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0318 13:49:01.321242 1157263 start.go:728] Will try again in 5 seconds ...
	I0318 13:49:01.343844 1157416 main.go:141] libmachine: (no-preload-537236) Calling .Start
	I0318 13:49:01.344003 1157416 main.go:141] libmachine: (no-preload-537236) Ensuring networks are active...
	I0318 13:49:01.344698 1157416 main.go:141] libmachine: (no-preload-537236) Ensuring network default is active
	I0318 13:49:01.345062 1157416 main.go:141] libmachine: (no-preload-537236) Ensuring network mk-no-preload-537236 is active
	I0318 13:49:01.345378 1157416 main.go:141] libmachine: (no-preload-537236) Getting domain xml...
	I0318 13:49:01.346073 1157416 main.go:141] libmachine: (no-preload-537236) Creating domain...
	I0318 13:49:02.522163 1157416 main.go:141] libmachine: (no-preload-537236) Waiting to get IP...
	I0318 13:49:02.522935 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:02.523347 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:02.523420 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:02.523327 1158392 retry.go:31] will retry after 276.248352ms: waiting for machine to come up
	I0318 13:49:02.800962 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:02.801439 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:02.801472 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:02.801381 1158392 retry.go:31] will retry after 318.94167ms: waiting for machine to come up
	I0318 13:49:03.121895 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:03.122276 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:03.122298 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:03.122254 1158392 retry.go:31] will retry after 353.742872ms: waiting for machine to come up
	I0318 13:49:03.477885 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:03.478401 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:03.478439 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:03.478360 1158392 retry.go:31] will retry after 481.537084ms: waiting for machine to come up
	I0318 13:49:03.960991 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:03.961432 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:03.961505 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:03.961416 1158392 retry.go:31] will retry after 647.244695ms: waiting for machine to come up
	I0318 13:49:04.610150 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:04.610563 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:04.610604 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:04.610512 1158392 retry.go:31] will retry after 577.22264ms: waiting for machine to come up
	I0318 13:49:06.321404 1157263 start.go:360] acquireMachinesLock for embed-certs-173036: {Name:mk0b1a2e71faf079d0c16c4e1393bdff17be3dfd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:49:05.189300 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:05.189688 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:05.189722 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:05.189635 1158392 retry.go:31] will retry after 1.064347528s: waiting for machine to come up
	I0318 13:49:06.255734 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:06.256071 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:06.256103 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:06.256016 1158392 retry.go:31] will retry after 1.359025709s: waiting for machine to come up
	I0318 13:49:07.616847 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:07.617313 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:07.617338 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:07.617265 1158392 retry.go:31] will retry after 1.844112s: waiting for machine to come up
	I0318 13:49:09.464239 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:09.464761 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:09.464788 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:09.464703 1158392 retry.go:31] will retry after 1.984375986s: waiting for machine to come up
	I0318 13:49:11.450609 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:11.451100 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:11.451153 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:11.451037 1158392 retry.go:31] will retry after 1.944733714s: waiting for machine to come up
	I0318 13:49:13.397815 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:13.398238 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:13.398265 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:13.398190 1158392 retry.go:31] will retry after 2.44494826s: waiting for machine to come up
	I0318 13:49:15.845711 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:15.846169 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:15.846212 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:15.846128 1158392 retry.go:31] will retry after 2.760857339s: waiting for machine to come up
	I0318 13:49:18.609516 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:18.609917 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:18.609942 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:18.609872 1158392 retry.go:31] will retry after 3.501792324s: waiting for machine to come up
	I0318 13:49:23.501689 1157708 start.go:364] duration metric: took 4m10.403284517s to acquireMachinesLock for "old-k8s-version-909137"
	I0318 13:49:23.501769 1157708 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:49:23.501783 1157708 fix.go:54] fixHost starting: 
	I0318 13:49:23.502238 1157708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:49:23.502279 1157708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:49:23.520223 1157708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41799
	I0318 13:49:23.520696 1157708 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:49:23.521273 1157708 main.go:141] libmachine: Using API Version  1
	I0318 13:49:23.521304 1157708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:49:23.521693 1157708 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:49:23.521934 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:49:23.522089 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetState
	I0318 13:49:23.523696 1157708 fix.go:112] recreateIfNeeded on old-k8s-version-909137: state=Stopped err=<nil>
	I0318 13:49:23.523738 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	W0318 13:49:23.523894 1157708 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:49:23.526253 1157708 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-909137" ...
	I0318 13:49:22.113291 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.113733 1157416 main.go:141] libmachine: (no-preload-537236) Found IP for machine: 192.168.39.7
	I0318 13:49:22.113753 1157416 main.go:141] libmachine: (no-preload-537236) Reserving static IP address...
	I0318 13:49:22.113787 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has current primary IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.114159 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "no-preload-537236", mac: "52:54:00:21:a8:12", ip: "192.168.39.7"} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.114179 1157416 main.go:141] libmachine: (no-preload-537236) DBG | skip adding static IP to network mk-no-preload-537236 - found existing host DHCP lease matching {name: "no-preload-537236", mac: "52:54:00:21:a8:12", ip: "192.168.39.7"}
	I0318 13:49:22.114192 1157416 main.go:141] libmachine: (no-preload-537236) Reserved static IP address: 192.168.39.7
	I0318 13:49:22.114201 1157416 main.go:141] libmachine: (no-preload-537236) Waiting for SSH to be available...
	I0318 13:49:22.114208 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Getting to WaitForSSH function...
	I0318 13:49:22.116603 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.116944 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.116971 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.117082 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Using SSH client type: external
	I0318 13:49:22.117153 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Using SSH private key: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa (-rw-------)
	I0318 13:49:22.117192 1157416 main.go:141] libmachine: (no-preload-537236) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.7 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 13:49:22.117212 1157416 main.go:141] libmachine: (no-preload-537236) DBG | About to run SSH command:
	I0318 13:49:22.117236 1157416 main.go:141] libmachine: (no-preload-537236) DBG | exit 0
	I0318 13:49:22.240543 1157416 main.go:141] libmachine: (no-preload-537236) DBG | SSH cmd err, output: <nil>: 
	I0318 13:49:22.240913 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetConfigRaw
	I0318 13:49:22.241611 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetIP
	I0318 13:49:22.244016 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.244273 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.244302 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.244506 1157416 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236/config.json ...
	I0318 13:49:22.244729 1157416 machine.go:94] provisionDockerMachine start ...
	I0318 13:49:22.244750 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:49:22.244947 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:22.246869 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.247160 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.247198 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.247246 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:22.247401 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.247546 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.247722 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:22.247893 1157416 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:22.248160 1157416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0318 13:49:22.248174 1157416 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 13:49:22.353134 1157416 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 13:49:22.353164 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetMachineName
	I0318 13:49:22.353435 1157416 buildroot.go:166] provisioning hostname "no-preload-537236"
	I0318 13:49:22.353463 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetMachineName
	I0318 13:49:22.353636 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:22.356058 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.356463 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.356491 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.356645 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:22.356846 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.356965 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.357068 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:22.357201 1157416 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:22.357415 1157416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0318 13:49:22.357434 1157416 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-537236 && echo "no-preload-537236" | sudo tee /etc/hostname
	I0318 13:49:22.477651 1157416 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-537236
	
	I0318 13:49:22.477692 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:22.480537 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.480876 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.480905 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.481135 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:22.481342 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.481520 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.481676 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:22.481887 1157416 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:22.482066 1157416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0318 13:49:22.482082 1157416 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-537236' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-537236/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-537236' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 13:49:22.599489 1157416 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:49:22.599566 1157416 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18429-1106816/.minikube CaCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18429-1106816/.minikube}
	I0318 13:49:22.599596 1157416 buildroot.go:174] setting up certificates
	I0318 13:49:22.599609 1157416 provision.go:84] configureAuth start
	I0318 13:49:22.599624 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetMachineName
	I0318 13:49:22.599981 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetIP
	I0318 13:49:22.602425 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.602800 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.602831 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.602986 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:22.605036 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.605331 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.605356 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.605500 1157416 provision.go:143] copyHostCerts
	I0318 13:49:22.605589 1157416 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem, removing ...
	I0318 13:49:22.605600 1157416 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 13:49:22.605665 1157416 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem (1078 bytes)
	I0318 13:49:22.605786 1157416 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem, removing ...
	I0318 13:49:22.605795 1157416 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 13:49:22.605820 1157416 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem (1123 bytes)
	I0318 13:49:22.605895 1157416 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem, removing ...
	I0318 13:49:22.605904 1157416 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 13:49:22.605927 1157416 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem (1679 bytes)
	I0318 13:49:22.606003 1157416 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem org=jenkins.no-preload-537236 san=[127.0.0.1 192.168.39.7 localhost minikube no-preload-537236]
	I0318 13:49:22.810156 1157416 provision.go:177] copyRemoteCerts
	I0318 13:49:22.810249 1157416 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 13:49:22.810283 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:22.813018 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.813343 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.813376 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.813557 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:22.813743 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.813890 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:22.814080 1157416 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa Username:docker}
	I0318 13:49:22.898886 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 13:49:22.926296 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0318 13:49:22.953260 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 13:49:22.981248 1157416 provision.go:87] duration metric: took 381.624842ms to configureAuth
	I0318 13:49:22.981281 1157416 buildroot.go:189] setting minikube options for container-runtime
	I0318 13:49:22.981459 1157416 config.go:182] Loaded profile config "no-preload-537236": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 13:49:22.981573 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:22.984446 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.984848 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.984885 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.985061 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:22.985269 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.985405 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.985595 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:22.985728 1157416 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:22.985911 1157416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0318 13:49:22.985925 1157416 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 13:49:23.259439 1157416 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 13:49:23.259470 1157416 machine.go:97] duration metric: took 1.014725867s to provisionDockerMachine
	I0318 13:49:23.259483 1157416 start.go:293] postStartSetup for "no-preload-537236" (driver="kvm2")
	I0318 13:49:23.259518 1157416 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 13:49:23.259553 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:49:23.259937 1157416 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 13:49:23.259976 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:23.262875 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.263196 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:23.263228 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.263403 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:23.263684 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:23.263861 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:23.264029 1157416 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa Username:docker}
	I0318 13:49:23.348815 1157416 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 13:49:23.353550 1157416 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 13:49:23.353582 1157416 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/addons for local assets ...
	I0318 13:49:23.353659 1157416 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/files for local assets ...
	I0318 13:49:23.353759 1157416 filesync.go:149] local asset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> 11141362.pem in /etc/ssl/certs
	I0318 13:49:23.353885 1157416 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 13:49:23.364831 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:49:23.391345 1157416 start.go:296] duration metric: took 131.846395ms for postStartSetup
	I0318 13:49:23.391396 1157416 fix.go:56] duration metric: took 22.070135111s for fixHost
	I0318 13:49:23.391423 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:23.394229 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.394543 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:23.394583 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.394685 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:23.394937 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:23.395111 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:23.395266 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:23.395433 1157416 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:23.395619 1157416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0318 13:49:23.395631 1157416 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 13:49:23.501504 1157416 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710769763.449975975
	
	I0318 13:49:23.501532 1157416 fix.go:216] guest clock: 1710769763.449975975
	I0318 13:49:23.501542 1157416 fix.go:229] Guest: 2024-03-18 13:49:23.449975975 +0000 UTC Remote: 2024-03-18 13:49:23.39140181 +0000 UTC m=+283.498114537 (delta=58.574165ms)
	I0318 13:49:23.501564 1157416 fix.go:200] guest clock delta is within tolerance: 58.574165ms
	I0318 13:49:23.501584 1157416 start.go:83] releasing machines lock for "no-preload-537236", held for 22.180386627s
	I0318 13:49:23.501612 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:49:23.501900 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetIP
	I0318 13:49:23.504693 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.505130 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:23.505159 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.505331 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:49:23.505889 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:49:23.506092 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:49:23.506198 1157416 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 13:49:23.506252 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:23.506317 1157416 ssh_runner.go:195] Run: cat /version.json
	I0318 13:49:23.506351 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:23.509104 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.509414 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.509446 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:23.509465 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.509625 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:23.509819 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:23.509839 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.509853 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:23.510043 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:23.510103 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:23.510207 1157416 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa Username:docker}
	I0318 13:49:23.510261 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:23.510394 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:23.510541 1157416 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa Username:docker}
	I0318 13:49:23.616831 1157416 ssh_runner.go:195] Run: systemctl --version
	I0318 13:49:23.624184 1157416 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 13:49:23.779709 1157416 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 13:49:23.786535 1157416 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 13:49:23.786594 1157416 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 13:49:23.805716 1157416 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 13:49:23.805743 1157416 start.go:494] detecting cgroup driver to use...
	I0318 13:49:23.805850 1157416 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 13:49:23.825572 1157416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:49:23.842762 1157416 docker.go:217] disabling cri-docker service (if available) ...
	I0318 13:49:23.842817 1157416 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 13:49:23.859385 1157416 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 13:49:23.876416 1157416 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 13:49:24.005995 1157416 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 13:49:24.193107 1157416 docker.go:233] disabling docker service ...
	I0318 13:49:24.193173 1157416 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 13:49:24.212825 1157416 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 13:49:24.230448 1157416 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 13:49:24.385445 1157416 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 13:49:24.548640 1157416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 13:49:24.564678 1157416 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:49:24.592528 1157416 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 13:49:24.592601 1157416 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:49:24.604303 1157416 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 13:49:24.604394 1157416 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:49:24.616123 1157416 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:49:24.627956 1157416 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:49:24.639194 1157416 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 13:49:24.650789 1157416 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 13:49:24.661390 1157416 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 13:49:24.661443 1157416 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 13:49:24.677180 1157416 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 13:49:24.687973 1157416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:49:24.827386 1157416 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 13:49:24.978805 1157416 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 13:49:24.978898 1157416 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 13:49:24.985647 1157416 start.go:562] Will wait 60s for crictl version
	I0318 13:49:24.985735 1157416 ssh_runner.go:195] Run: which crictl
	I0318 13:49:24.990325 1157416 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 13:49:25.038948 1157416 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 13:49:25.039020 1157416 ssh_runner.go:195] Run: crio --version
	I0318 13:49:25.068855 1157416 ssh_runner.go:195] Run: crio --version
	I0318 13:49:25.107104 1157416 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0318 13:49:23.527811 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .Start
	I0318 13:49:23.528000 1157708 main.go:141] libmachine: (old-k8s-version-909137) Ensuring networks are active...
	I0318 13:49:23.528714 1157708 main.go:141] libmachine: (old-k8s-version-909137) Ensuring network default is active
	I0318 13:49:23.529036 1157708 main.go:141] libmachine: (old-k8s-version-909137) Ensuring network mk-old-k8s-version-909137 is active
	I0318 13:49:23.529491 1157708 main.go:141] libmachine: (old-k8s-version-909137) Getting domain xml...
	I0318 13:49:23.530324 1157708 main.go:141] libmachine: (old-k8s-version-909137) Creating domain...
	I0318 13:49:24.765648 1157708 main.go:141] libmachine: (old-k8s-version-909137) Waiting to get IP...
	I0318 13:49:24.766664 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:24.767122 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:24.767182 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:24.767081 1158507 retry.go:31] will retry after 250.785143ms: waiting for machine to come up
	I0318 13:49:25.019755 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:25.020238 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:25.020273 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:25.020185 1158507 retry.go:31] will retry after 346.894257ms: waiting for machine to come up
	I0318 13:49:25.368815 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:25.369335 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:25.369372 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:25.369268 1158507 retry.go:31] will retry after 367.316359ms: waiting for machine to come up
	I0318 13:49:25.737835 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:25.738404 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:25.738438 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:25.738337 1158507 retry.go:31] will retry after 479.291041ms: waiting for machine to come up
	I0318 13:49:26.219103 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:26.219568 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:26.219599 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:26.219523 1158507 retry.go:31] will retry after 552.309382ms: waiting for machine to come up
	I0318 13:49:26.773363 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:26.773905 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:26.773935 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:26.773857 1158507 retry.go:31] will retry after 703.087388ms: waiting for machine to come up
	I0318 13:49:27.478730 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:27.479330 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:27.479363 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:27.479270 1158507 retry.go:31] will retry after 1.136606935s: waiting for machine to come up
	I0318 13:49:25.108504 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetIP
	I0318 13:49:25.111416 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:25.111795 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:25.111827 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:25.112035 1157416 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0318 13:49:25.116688 1157416 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:49:25.131526 1157416 kubeadm.go:877] updating cluster {Name:no-preload-537236 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.0-rc.2 ClusterName:no-preload-537236 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 13:49:25.131663 1157416 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0318 13:49:25.131698 1157416 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:49:25.176340 1157416 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0318 13:49:25.176378 1157416 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 13:49:25.176474 1157416 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:49:25.176487 1157416 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 13:49:25.176524 1157416 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 13:49:25.176537 1157416 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 13:49:25.176592 1157416 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0318 13:49:25.176619 1157416 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 13:49:25.176773 1157416 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0318 13:49:25.176789 1157416 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 13:49:25.178479 1157416 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 13:49:25.178485 1157416 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 13:49:25.178486 1157416 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 13:49:25.178488 1157416 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 13:49:25.178480 1157416 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0318 13:49:25.178479 1157416 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:49:25.178540 1157416 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0318 13:49:25.178911 1157416 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 13:49:25.334172 1157416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 13:49:25.334873 1157416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0318 13:49:25.338330 1157416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 13:49:25.338825 1157416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0318 13:49:25.340192 1157416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 13:49:25.350053 1157416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0318 13:49:25.356621 1157416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 13:49:25.472528 1157416 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0318 13:49:25.472571 1157416 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 13:49:25.472627 1157416 ssh_runner.go:195] Run: which crictl
	I0318 13:49:25.630923 1157416 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0318 13:49:25.630996 1157416 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 13:49:25.631001 1157416 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0318 13:49:25.631042 1157416 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 13:49:25.630933 1157416 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0318 13:49:25.631089 1157416 ssh_runner.go:195] Run: which crictl
	I0318 13:49:25.631102 1157416 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0318 13:49:25.631134 1157416 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0318 13:49:25.631107 1157416 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 13:49:25.631169 1157416 ssh_runner.go:195] Run: which crictl
	I0318 13:49:25.631183 1157416 ssh_runner.go:195] Run: which crictl
	I0318 13:49:25.631052 1157416 ssh_runner.go:195] Run: which crictl
	I0318 13:49:25.631199 1157416 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0318 13:49:25.631220 1157416 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 13:49:25.631233 1157416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 13:49:25.631264 1157416 ssh_runner.go:195] Run: which crictl
	I0318 13:49:25.642598 1157416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 13:49:25.708001 1157416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 13:49:25.708026 1157416 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0318 13:49:25.708068 1157416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 13:49:25.708003 1157416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0318 13:49:25.708129 1157416 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 13:49:25.708162 1157416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0318 13:49:25.708225 1157416 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0318 13:49:25.708286 1157416 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 13:49:25.790492 1157416 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0318 13:49:25.790623 1157416 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 13:49:25.804436 1157416 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0318 13:49:25.804465 1157416 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 13:49:25.804503 1157416 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0318 13:49:25.804532 1157416 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 13:49:25.804583 1157416 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I0318 13:49:25.804657 1157416 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0318 13:49:25.804684 1157416 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0318 13:49:25.804720 1157416 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0318 13:49:25.804768 1157416 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0318 13:49:25.804801 1157416 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0318 13:49:25.807681 1157416 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0318 13:49:26.162719 1157416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:49:27.887846 1157416 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.083277557s)
	I0318 13:49:27.887882 1157416 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0: (2.083274384s)
	I0318 13:49:27.887894 1157416 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0318 13:49:27.887916 1157416 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0318 13:49:27.887927 1157416 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 13:49:27.887944 1157416 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (2.083121634s)
	I0318 13:49:27.887971 1157416 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0318 13:49:27.887971 1157416 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.083181595s)
	I0318 13:49:27.887990 1157416 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0318 13:49:27.888003 1157416 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.725256044s)
	I0318 13:49:27.888008 1157416 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 13:49:27.888040 1157416 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0318 13:49:27.888080 1157416 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:49:27.888114 1157416 ssh_runner.go:195] Run: which crictl
	I0318 13:49:27.893415 1157416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:49:28.617273 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:28.617711 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:28.617740 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:28.617665 1158507 retry.go:31] will retry after 947.818334ms: waiting for machine to come up
	I0318 13:49:29.566814 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:29.567157 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:29.567177 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:29.567121 1158507 retry.go:31] will retry after 1.328243934s: waiting for machine to come up
	I0318 13:49:30.897514 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:30.898041 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:30.898068 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:30.897988 1158507 retry.go:31] will retry after 2.213855703s: waiting for machine to come up
	I0318 13:49:30.272393 1157416 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.384351202s)
	I0318 13:49:30.272442 1157416 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0318 13:49:30.272459 1157416 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.379011748s)
	I0318 13:49:30.272477 1157416 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 13:49:30.272508 1157416 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0318 13:49:30.272589 1157416 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 13:49:30.272623 1157416 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0318 13:49:32.857821 1157416 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.585192694s)
	I0318 13:49:32.857907 1157416 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.585263486s)
	I0318 13:49:32.857990 1157416 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0318 13:49:32.857918 1157416 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0318 13:49:32.858038 1157416 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0318 13:49:32.858097 1157416 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0318 13:49:33.113781 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:33.114303 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:33.114332 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:33.114245 1158507 retry.go:31] will retry after 2.075415123s: waiting for machine to come up
	I0318 13:49:35.191096 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:35.191631 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:35.191665 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:35.191582 1158507 retry.go:31] will retry after 3.520577528s: waiting for machine to come up
	I0318 13:49:36.677356 1157416 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.8192286s)
	I0318 13:49:36.677398 1157416 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0318 13:49:36.677423 1157416 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0318 13:49:36.677464 1157416 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0318 13:49:38.844843 1157416 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.167353366s)
	I0318 13:49:38.844895 1157416 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0318 13:49:38.844933 1157416 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0318 13:49:38.845020 1157416 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0318 13:49:38.713777 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:38.714129 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:38.714242 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:38.714143 1158507 retry.go:31] will retry after 3.46520277s: waiting for machine to come up
	I0318 13:49:42.181399 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.181856 1157708 main.go:141] libmachine: (old-k8s-version-909137) Found IP for machine: 192.168.72.135
	I0318 13:49:42.181888 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has current primary IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.181897 1157708 main.go:141] libmachine: (old-k8s-version-909137) Reserving static IP address...
	I0318 13:49:42.182344 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "old-k8s-version-909137", mac: "52:54:00:58:c0:cb", ip: "192.168.72.135"} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.182387 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | skip adding static IP to network mk-old-k8s-version-909137 - found existing host DHCP lease matching {name: "old-k8s-version-909137", mac: "52:54:00:58:c0:cb", ip: "192.168.72.135"}
	I0318 13:49:42.182424 1157708 main.go:141] libmachine: (old-k8s-version-909137) Reserved static IP address: 192.168.72.135
	I0318 13:49:42.182453 1157708 main.go:141] libmachine: (old-k8s-version-909137) Waiting for SSH to be available...
	I0318 13:49:42.182470 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | Getting to WaitForSSH function...
	I0318 13:49:42.184589 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.184958 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.184999 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.185061 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | Using SSH client type: external
	I0318 13:49:42.185120 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | Using SSH private key: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/id_rsa (-rw-------)
	I0318 13:49:42.185162 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.135 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 13:49:42.185189 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | About to run SSH command:
	I0318 13:49:42.185204 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | exit 0
	I0318 13:49:42.312570 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | SSH cmd err, output: <nil>: 
	I0318 13:49:42.313005 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetConfigRaw
	I0318 13:49:42.313693 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetIP
	I0318 13:49:42.316497 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.316931 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.316965 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.317239 1157708 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/config.json ...
	I0318 13:49:42.317442 1157708 machine.go:94] provisionDockerMachine start ...
	I0318 13:49:42.317462 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:49:42.317688 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:42.320076 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.320444 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.320485 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.320655 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:42.320818 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:42.320980 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:42.321093 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:42.321257 1157708 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:42.321510 1157708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.135 22 <nil> <nil>}
	I0318 13:49:42.321528 1157708 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 13:49:42.433138 1157708 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 13:49:42.433186 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetMachineName
	I0318 13:49:42.433524 1157708 buildroot.go:166] provisioning hostname "old-k8s-version-909137"
	I0318 13:49:42.433558 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetMachineName
	I0318 13:49:42.433808 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:42.436869 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.437230 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.437264 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.437506 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:42.437739 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:42.437915 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:42.438092 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:42.438285 1157708 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:42.438513 1157708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.135 22 <nil> <nil>}
	I0318 13:49:42.438534 1157708 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-909137 && echo "old-k8s-version-909137" | sudo tee /etc/hostname
	I0318 13:49:42.560410 1157708 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-909137
	
	I0318 13:49:42.560439 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:42.563304 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.563637 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.563673 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.563837 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:42.564053 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:42.564236 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:42.564377 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:42.564581 1157708 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:42.564802 1157708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.135 22 <nil> <nil>}
	I0318 13:49:42.564820 1157708 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-909137' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-909137/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-909137' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 13:49:42.687138 1157708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:49:42.687173 1157708 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18429-1106816/.minikube CaCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18429-1106816/.minikube}
	I0318 13:49:42.687199 1157708 buildroot.go:174] setting up certificates
	I0318 13:49:42.687211 1157708 provision.go:84] configureAuth start
	I0318 13:49:42.687223 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetMachineName
	I0318 13:49:42.687600 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetIP
	I0318 13:49:42.690738 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.691148 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.691179 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.691316 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:42.693730 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.694070 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.694092 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.694255 1157708 provision.go:143] copyHostCerts
	I0318 13:49:42.694336 1157708 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem, removing ...
	I0318 13:49:42.694350 1157708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 13:49:42.694422 1157708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem (1078 bytes)
	I0318 13:49:42.694597 1157708 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem, removing ...
	I0318 13:49:42.694614 1157708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 13:49:42.694652 1157708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem (1123 bytes)
	I0318 13:49:42.694747 1157708 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem, removing ...
	I0318 13:49:42.694756 1157708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 13:49:42.694775 1157708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem (1679 bytes)
	I0318 13:49:42.694823 1157708 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-909137 san=[127.0.0.1 192.168.72.135 localhost minikube old-k8s-version-909137]
	I0318 13:49:42.920182 1157708 provision.go:177] copyRemoteCerts
	I0318 13:49:42.920255 1157708 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 13:49:42.920295 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:42.923074 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.923374 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.923408 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.923533 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:42.923755 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:42.923957 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:42.924095 1157708 sshutil.go:53] new ssh client: &{IP:192.168.72.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/id_rsa Username:docker}
	I0318 13:49:43.649771 1157887 start.go:364] duration metric: took 4m1.881584436s to acquireMachinesLock for "default-k8s-diff-port-569210"
	I0318 13:49:43.649850 1157887 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:49:43.649868 1157887 fix.go:54] fixHost starting: 
	I0318 13:49:43.650335 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:49:43.650378 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:49:43.668606 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36261
	I0318 13:49:43.669107 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:49:43.669721 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:49:43.669755 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:49:43.670092 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:49:43.670269 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:49:43.670427 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetState
	I0318 13:49:43.671973 1157887 fix.go:112] recreateIfNeeded on default-k8s-diff-port-569210: state=Stopped err=<nil>
	I0318 13:49:43.672021 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	W0318 13:49:43.672150 1157887 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:49:43.673832 1157887 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-569210" ...
	I0318 13:49:40.621208 1157416 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (1.776156882s)
	I0318 13:49:40.621252 1157416 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0318 13:49:40.621281 1157416 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0318 13:49:40.621322 1157416 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0318 13:49:41.582256 1157416 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0318 13:49:41.582316 1157416 cache_images.go:123] Successfully loaded all cached images
	I0318 13:49:41.582324 1157416 cache_images.go:92] duration metric: took 16.405930257s to LoadCachedImages
	I0318 13:49:41.582341 1157416 kubeadm.go:928] updating node { 192.168.39.7 8443 v1.29.0-rc.2 crio true true} ...
	I0318 13:49:41.582550 1157416 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-537236 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-537236 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 13:49:41.582663 1157416 ssh_runner.go:195] Run: crio config
	I0318 13:49:41.635043 1157416 cni.go:84] Creating CNI manager for ""
	I0318 13:49:41.635074 1157416 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:49:41.635093 1157416 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 13:49:41.635128 1157416 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.7 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-537236 NodeName:no-preload-537236 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 13:49:41.635322 1157416 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.7
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-537236"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.7
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.7"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 13:49:41.635446 1157416 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0318 13:49:41.647072 1157416 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 13:49:41.647148 1157416 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 13:49:41.657448 1157416 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0318 13:49:41.675819 1157416 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0318 13:49:41.693989 1157416 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0318 13:49:41.714954 1157416 ssh_runner.go:195] Run: grep 192.168.39.7	control-plane.minikube.internal$ /etc/hosts
	I0318 13:49:41.719161 1157416 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.7	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
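The hosts-file update above is a filter-and-append: any existing control-plane.minikube.internal line is stripped and the current IP is written back, so the edit is idempotent. A sketch of the same edit in Go, assuming it runs as root directly on the guest (minikube does it with the shell one-liner shown above):

// hosts_entry.go — idempotently map control-plane.minikube.internal to the
// node IP from the log, mirroring the grep/echo one-liner above.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts"
	const entry = "192.168.39.7\tcontrol-plane.minikube.internal"

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue // drop the stale mapping, if any
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		log.Fatal(err)
	}
}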
	I0318 13:49:41.732228 1157416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:49:41.871286 1157416 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:49:41.892827 1157416 certs.go:68] Setting up /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236 for IP: 192.168.39.7
	I0318 13:49:41.892850 1157416 certs.go:194] generating shared ca certs ...
	I0318 13:49:41.892868 1157416 certs.go:226] acquiring lock for ca certs: {Name:mka24dbce78b8549c3bcfe7842863cce2dcda207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:49:41.893054 1157416 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key
	I0318 13:49:41.893110 1157416 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key
	I0318 13:49:41.893125 1157416 certs.go:256] generating profile certs ...
	I0318 13:49:41.893246 1157416 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236/client.key
	I0318 13:49:41.893317 1157416 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236/apiserver.key.844e83a6
	I0318 13:49:41.893366 1157416 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236/proxy-client.key
	I0318 13:49:41.893482 1157416 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem (1338 bytes)
	W0318 13:49:41.893518 1157416 certs.go:480] ignoring /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136_empty.pem, impossibly tiny 0 bytes
	I0318 13:49:41.893528 1157416 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 13:49:41.893552 1157416 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem (1078 bytes)
	I0318 13:49:41.893573 1157416 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem (1123 bytes)
	I0318 13:49:41.893594 1157416 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem (1679 bytes)
	I0318 13:49:41.893628 1157416 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:49:41.894503 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 13:49:41.942278 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 13:49:41.978436 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 13:49:42.007161 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 13:49:42.036410 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0318 13:49:42.073179 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 13:49:42.098201 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 13:49:42.131599 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 13:49:42.159159 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem --> /usr/share/ca-certificates/1114136.pem (1338 bytes)
	I0318 13:49:42.186290 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /usr/share/ca-certificates/11141362.pem (1708 bytes)
	I0318 13:49:42.214362 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 13:49:42.241240 1157416 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 13:49:42.260511 1157416 ssh_runner.go:195] Run: openssl version
	I0318 13:49:42.267047 1157416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1114136.pem && ln -fs /usr/share/ca-certificates/1114136.pem /etc/ssl/certs/1114136.pem"
	I0318 13:49:42.278582 1157416 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1114136.pem
	I0318 13:49:42.283566 1157416 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:26 /usr/share/ca-certificates/1114136.pem
	I0318 13:49:42.283609 1157416 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1114136.pem
	I0318 13:49:42.289658 1157416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1114136.pem /etc/ssl/certs/51391683.0"
	I0318 13:49:42.300954 1157416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11141362.pem && ln -fs /usr/share/ca-certificates/11141362.pem /etc/ssl/certs/11141362.pem"
	I0318 13:49:42.312828 1157416 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11141362.pem
	I0318 13:49:42.319182 1157416 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:26 /usr/share/ca-certificates/11141362.pem
	I0318 13:49:42.319251 1157416 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11141362.pem
	I0318 13:49:42.325767 1157416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11141362.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 13:49:42.337544 1157416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 13:49:42.349053 1157416 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:49:42.354197 1157416 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:49:42.354249 1157416 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:49:42.361200 1157416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
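The openssl/ln pairs above install each CA under /etc/ssl/certs/<subject-hash>.0, where the hash comes from "openssl x509 -hash -noout". A sketch of that step follows; it shells out to openssl for the hash, since the subject-hash canonicalisation is OpenSSL-specific, and reuses the paths from the log.

// ca_symlink.go — install a CA certificate as /etc/ssl/certs/<hash>.0,
// as in the openssl/ln steps above. Assumes it runs as root on the node.
package main

import (
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := "/etc/ssl/certs/" + hash + ".0"
	if _, err := os.Lstat(link); err == nil {
		return // already installed
	}
	if err := os.Symlink(cert, link); err != nil {
		log.Fatal(err)
	}
}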
	I0318 13:49:42.374825 1157416 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:49:42.380098 1157416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 13:49:42.387161 1157416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 13:49:42.393702 1157416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 13:49:42.400193 1157416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 13:49:42.406243 1157416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 13:49:42.412423 1157416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
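Each "-checkend 86400" run above asks whether a certificate expires within the next 24 hours. The same check expressed with Go's crypto/x509, as a sketch against one of the certificate paths from the log:

// cert_checkend.go — a sketch of "openssl x509 -checkend 86400": parse a PEM
// certificate and report whether it expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}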
	I0318 13:49:42.418599 1157416 kubeadm.go:391] StartCluster: {Name:no-preload-537236 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.0-rc.2 ClusterName:no-preload-537236 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:49:42.418747 1157416 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 13:49:42.418785 1157416 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:49:42.468980 1157416 cri.go:89] found id: ""
	I0318 13:49:42.469088 1157416 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 13:49:42.481101 1157416 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 13:49:42.481130 1157416 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 13:49:42.481137 1157416 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 13:49:42.481190 1157416 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 13:49:42.493014 1157416 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:49:42.494041 1157416 kubeconfig.go:125] found "no-preload-537236" server: "https://192.168.39.7:8443"
	I0318 13:49:42.496519 1157416 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 13:49:42.507415 1157416 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.7
	I0318 13:49:42.507448 1157416 kubeadm.go:1154] stopping kube-system containers ...
	I0318 13:49:42.507460 1157416 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 13:49:42.507513 1157416 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:49:42.554791 1157416 cri.go:89] found id: ""
	I0318 13:49:42.554859 1157416 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 13:49:42.574054 1157416 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:49:42.584928 1157416 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:49:42.584955 1157416 kubeadm.go:156] found existing configuration files:
	
	I0318 13:49:42.585009 1157416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:49:42.594987 1157416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:49:42.595045 1157416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:49:42.605058 1157416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:49:42.614968 1157416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:49:42.615042 1157416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:49:42.625169 1157416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:49:42.634838 1157416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:49:42.634905 1157416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:49:42.644785 1157416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:49:42.654196 1157416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:49:42.654254 1157416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:49:42.663757 1157416 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:49:42.673956 1157416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:42.792913 1157416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:43.799012 1157416 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.006050828s)
	I0318 13:49:43.799075 1157416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:44.061808 1157416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:44.189349 1157416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
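Because existing configuration files were found, the restart path regenerates certs, kubeconfigs, the kubelet bootstrap, the control-plane static pods and local etcd through individual "kubeadm init phase" invocations instead of a full init. A sketch of that sequence, using the versioned kubeadm binary and config path from the log (minikube runs each command as "sudo env PATH=... kubeadm ..." over SSH; error handling is simplified here):

// kubeadm_phases.go — run the phased restart shown above against the
// generated kubeadm config.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const kubeadm = "/var/lib/minikube/binaries/v1.29.0-rc.2/kubeadm"
	const config = "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{kubeadm, "init", "phase"}, p...)
		args = append(args, "--config", config)
		cmd := exec.Command("sudo", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("kubeadm init phase %v failed: %v", p, err)
		}
	}
}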
	I0318 13:49:44.329800 1157416 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:49:44.329897 1157416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:44.829990 1157416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:43.007024 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 13:49:43.033952 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0318 13:49:43.060218 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 13:49:43.086087 1157708 provision.go:87] duration metric: took 398.861833ms to configureAuth
	I0318 13:49:43.086116 1157708 buildroot.go:189] setting minikube options for container-runtime
	I0318 13:49:43.086326 1157708 config.go:182] Loaded profile config "old-k8s-version-909137": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0318 13:49:43.086442 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:43.089200 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.089534 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:43.089562 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.089758 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:43.089965 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:43.090134 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:43.090286 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:43.090501 1157708 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:43.090718 1157708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.135 22 <nil> <nil>}
	I0318 13:49:43.090744 1157708 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 13:49:43.401681 1157708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 13:49:43.401715 1157708 machine.go:97] duration metric: took 1.084258164s to provisionDockerMachine
	I0318 13:49:43.401728 1157708 start.go:293] postStartSetup for "old-k8s-version-909137" (driver="kvm2")
	I0318 13:49:43.401739 1157708 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 13:49:43.401759 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:49:43.402073 1157708 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 13:49:43.402116 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:43.404775 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.405164 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:43.405192 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.405335 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:43.405525 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:43.405740 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:43.405884 1157708 sshutil.go:53] new ssh client: &{IP:192.168.72.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/id_rsa Username:docker}
	I0318 13:49:43.493000 1157708 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 13:49:43.497705 1157708 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 13:49:43.497740 1157708 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/addons for local assets ...
	I0318 13:49:43.497818 1157708 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/files for local assets ...
	I0318 13:49:43.497931 1157708 filesync.go:149] local asset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> 11141362.pem in /etc/ssl/certs
	I0318 13:49:43.498058 1157708 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 13:49:43.509185 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:49:43.535401 1157708 start.go:296] duration metric: took 133.657179ms for postStartSetup
	I0318 13:49:43.535454 1157708 fix.go:56] duration metric: took 20.033670705s for fixHost
	I0318 13:49:43.535482 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:43.538464 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.538964 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:43.538998 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.539178 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:43.539386 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:43.539528 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:43.539702 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:43.539899 1157708 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:43.540120 1157708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.135 22 <nil> <nil>}
	I0318 13:49:43.540133 1157708 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 13:49:43.649578 1157708 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710769783.596310102
	
	I0318 13:49:43.649610 1157708 fix.go:216] guest clock: 1710769783.596310102
	I0318 13:49:43.649621 1157708 fix.go:229] Guest: 2024-03-18 13:49:43.596310102 +0000 UTC Remote: 2024-03-18 13:49:43.535459129 +0000 UTC m=+270.592972067 (delta=60.850973ms)
	I0318 13:49:43.649656 1157708 fix.go:200] guest clock delta is within tolerance: 60.850973ms
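The guest-clock check above runs "date +%s.%N" on the VM and compares the result with the host clock; here the ~61ms delta is within tolerance. A sketch of that comparison, with the timestamp hard-coded from the log and a 2-second tolerance chosen only for illustration:

// clock_skew.go — parse the guest's "date +%s.%N" output and compare it
// with the host clock. In minikube the string is captured over SSH; here it
// is hard-coded from the log above, so the computed delta is illustrative.
package main

import (
	"fmt"
	"log"
	"strconv"
	"strings"
	"time"
)

func main() {
	guestOut := "1710769783.596310102"
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	if len(parts) != 2 {
		log.Fatalf("unexpected clock output: %q", guestOut)
	}
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		log.Fatal(err)
	}
	nsec, err := strconv.ParseInt(parts[1], 10, 64)
	if err != nil {
		log.Fatal(err)
	}
	delta := time.Since(time.Unix(sec, nsec))
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta: %v\n", delta)
	if delta > 2*time.Second {
		fmt.Println("delta exceeds tolerance; the guest clock would be resynced")
	}
}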
	I0318 13:49:43.649663 1157708 start.go:83] releasing machines lock for "old-k8s-version-909137", held for 20.147918331s
	I0318 13:49:43.649689 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:49:43.650002 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetIP
	I0318 13:49:43.652712 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.653114 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:43.653148 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.653278 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:49:43.653873 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:49:43.654112 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:49:43.654198 1157708 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 13:49:43.654264 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:43.654333 1157708 ssh_runner.go:195] Run: cat /version.json
	I0318 13:49:43.654369 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:43.657281 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.657390 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.657741 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:43.657811 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:43.657830 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.657855 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:43.657918 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.658016 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:43.658065 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:43.658199 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:43.658245 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:43.658326 1157708 sshutil.go:53] new ssh client: &{IP:192.168.72.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/id_rsa Username:docker}
	I0318 13:49:43.658411 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:43.658574 1157708 sshutil.go:53] new ssh client: &{IP:192.168.72.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/id_rsa Username:docker}
	I0318 13:49:43.737787 1157708 ssh_runner.go:195] Run: systemctl --version
	I0318 13:49:43.769157 1157708 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 13:49:43.920376 1157708 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 13:49:43.928165 1157708 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 13:49:43.928253 1157708 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 13:49:43.946102 1157708 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
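With CRI-O as the runtime, minikube sidelines any default bridge/podman CNI configs so that its own bridge config is the one that takes effect; the find/mv above renames them with a .mk_disabled suffix. A sketch of the same step, assuming it runs as root directly on the node:

// disable_cni.go — rename default bridge/podman CNI configs to *.mk_disabled,
// mirroring the find/mv step above.
package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, err := filepath.Glob(pattern)
		if err != nil {
			log.Fatal(err)
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				log.Fatal(err)
			}
			fmt.Println("disabled", m)
		}
	}
}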
	I0318 13:49:43.946133 1157708 start.go:494] detecting cgroup driver to use...
	I0318 13:49:43.946210 1157708 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 13:49:43.963482 1157708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:49:43.978540 1157708 docker.go:217] disabling cri-docker service (if available) ...
	I0318 13:49:43.978613 1157708 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 13:49:43.999525 1157708 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 13:49:44.021242 1157708 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 13:49:44.198165 1157708 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 13:49:44.363408 1157708 docker.go:233] disabling docker service ...
	I0318 13:49:44.363474 1157708 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 13:49:44.383527 1157708 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 13:49:44.398888 1157708 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 13:49:44.547711 1157708 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 13:49:44.662762 1157708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 13:49:44.678786 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:49:44.702931 1157708 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0318 13:49:44.703004 1157708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:49:44.721453 1157708 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 13:49:44.721519 1157708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:49:44.739487 1157708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:49:44.757379 1157708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
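The sed edits above point CRI-O at the registry.k8s.io/pause:3.2 pause image, switch it to the cgroupfs cgroup manager and keep conmon in the pod cgroup. A sketch of the same line-oriented rewrite of the drop-in config in Go (a TOML parser would be more robust; this mirrors the sed approach from the log):

// crio_conf.go — set pause_image and cgroup_manager in CRI-O's drop-in config
// and re-add conmon_cgroup = "pod" right after cgroup_manager.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	var out []string
	for _, line := range strings.Split(string(data), "\n") {
		trimmed := strings.TrimSpace(line)
		switch {
		case strings.HasPrefix(trimmed, "pause_image"):
			line = `pause_image = "registry.k8s.io/pause:3.2"`
		case strings.HasPrefix(trimmed, "cgroup_manager"):
			line = "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""
		case strings.HasPrefix(trimmed, "conmon_cgroup"):
			continue // dropped; re-added right after cgroup_manager
		}
		out = append(out, line)
	}
	if err := os.WriteFile(path, []byte(strings.Join(out, "\n")), 0644); err != nil {
		log.Fatal(err)
	}
}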
	I0318 13:49:44.777508 1157708 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 13:49:44.798788 1157708 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 13:49:44.814280 1157708 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 13:49:44.814383 1157708 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 13:49:44.836507 1157708 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 13:49:44.852614 1157708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:49:44.994352 1157708 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 13:49:45.184815 1157708 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 13:49:45.184907 1157708 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 13:49:45.190649 1157708 start.go:562] Will wait 60s for crictl version
	I0318 13:49:45.190724 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:45.195265 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 13:49:45.242737 1157708 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 13:49:45.242850 1157708 ssh_runner.go:195] Run: crio --version
	I0318 13:49:45.288154 1157708 ssh_runner.go:195] Run: crio --version
	I0318 13:49:45.331441 1157708 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0318 13:49:43.675531 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .Start
	I0318 13:49:43.675763 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Ensuring networks are active...
	I0318 13:49:43.676642 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Ensuring network default is active
	I0318 13:49:43.677014 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Ensuring network mk-default-k8s-diff-port-569210 is active
	I0318 13:49:43.677510 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Getting domain xml...
	I0318 13:49:43.678319 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Creating domain...
	I0318 13:49:45.002977 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting to get IP...
	I0318 13:49:45.003870 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:45.004406 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:45.004499 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:45.004392 1158648 retry.go:31] will retry after 294.950888ms: waiting for machine to come up
	I0318 13:49:45.301264 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:45.301835 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:45.301863 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:45.301747 1158648 retry.go:31] will retry after 291.810051ms: waiting for machine to come up
	I0318 13:49:45.595571 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:45.596720 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:45.596832 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:45.596786 1158648 retry.go:31] will retry after 390.232445ms: waiting for machine to come up
	I0318 13:49:45.988661 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:45.989506 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:45.989534 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:45.989393 1158648 retry.go:31] will retry after 487.148784ms: waiting for machine to come up
	I0318 13:49:46.477982 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:46.478667 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:46.478701 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:46.478600 1158648 retry.go:31] will retry after 474.795485ms: waiting for machine to come up
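While the restored VM boots, the kvm2 driver polls for its IP address and retries with a short randomized backoff until a DHCP lease for the domain's MAC appears, as the interleaved retry.go lines above show. A generic sketch of that loop; lookupIP is a hypothetical stand-in for the libvirt DHCP-lease lookup the driver performs:

// wait_for_ip.go — poll for the VM's IP with randomized backoff until it
// appears or a deadline passes.
package main

import (
	"errors"
	"fmt"
	"log"
	"math/rand"
	"time"
)

// lookupIP is hypothetical: in minikube this queries the libvirt network's
// DHCP leases for the domain's MAC address.
func lookupIP() (string, error) {
	return "", errors.New("no lease yet")
}

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		ip, err := lookupIP()
		if err == nil {
			fmt.Println("got IP:", ip)
			return
		}
		wait := 200*time.Millisecond + time.Duration(rand.Intn(300))*time.Millisecond
		log.Printf("will retry after %v: waiting for machine to come up", wait)
		time.Sleep(wait)
	}
	log.Fatal("timed out waiting for an IP address")
}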
	I0318 13:49:45.332975 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetIP
	I0318 13:49:45.336274 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:45.336701 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:45.336753 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:45.336985 1157708 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0318 13:49:45.343147 1157708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:49:45.361840 1157708 kubeadm.go:877] updating cluster {Name:old-k8s-version-909137 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-909137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.135 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 13:49:45.361982 1157708 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0318 13:49:45.362040 1157708 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:49:45.419490 1157708 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 13:49:45.419587 1157708 ssh_runner.go:195] Run: which lz4
	I0318 13:49:45.424689 1157708 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 13:49:45.431110 1157708 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 13:49:45.431155 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0318 13:49:47.510385 1157708 crio.go:444] duration metric: took 2.085724633s to copy over tarball
	I0318 13:49:47.510483 1157708 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
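The preload path copies the lz4-compressed image tarball to the node and then unpacks it into /var with extended attributes preserved. A sketch of the extraction step, mirroring the tar invocation above and assuming lz4 is installed on the node:

// preload_extract.go — unpack the preloaded image tarball into /var, keeping
// security.capability xattrs, as in the tar command above.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}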
	I0318 13:49:45.330925 1157416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:45.364854 1157416 api_server.go:72] duration metric: took 1.035057096s to wait for apiserver process to appear ...
	I0318 13:49:45.364883 1157416 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:49:45.364927 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:49:45.365577 1157416 api_server.go:269] stopped: https://192.168.39.7:8443/healthz: Get "https://192.168.39.7:8443/healthz": dial tcp 192.168.39.7:8443: connect: connection refused
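Once the kubelet is started, minikube polls the apiserver's /healthz endpoint; connection refused (as here) and, further below, 403 and 500 responses are all treated as "not ready yet" and retried. A sketch of such a readiness poll; TLS verification is skipped only for brevity, whereas minikube itself trusts the cluster CA:

// wait_healthz.go — poll https://<apiserver>/healthz every 500ms and accept
// only HTTP 200, giving up after a deadline.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.39.7:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			log.Printf("healthz returned %d; retrying", resp.StatusCode)
		} else {
			log.Printf("healthz not reachable yet: %v", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("timed out waiting for apiserver /healthz")
}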
	I0318 13:49:45.865126 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:49:49.135799 1157416 api_server.go:279] https://192.168.39.7:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 13:49:49.135840 1157416 api_server.go:103] status: https://192.168.39.7:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 13:49:49.135862 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:49:49.154112 1157416 api_server.go:279] https://192.168.39.7:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 13:49:49.154142 1157416 api_server.go:103] status: https://192.168.39.7:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 13:49:49.365566 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:49:49.375812 1157416 api_server.go:279] https://192.168.39.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:49:49.375862 1157416 api_server.go:103] status: https://192.168.39.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:49:49.865027 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:49:49.873132 1157416 api_server.go:279] https://192.168.39.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:49:49.873176 1157416 api_server.go:103] status: https://192.168.39.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:49:50.365178 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:49:50.371461 1157416 api_server.go:279] https://192.168.39.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:49:50.371506 1157416 api_server.go:103] status: https://192.168.39.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:49:50.865038 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:49:50.870329 1157416 api_server.go:279] https://192.168.39.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:49:50.870383 1157416 api_server.go:103] status: https://192.168.39.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:49:51.365030 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:49:51.370284 1157416 api_server.go:279] https://192.168.39.7:8443/healthz returned 200:
	ok
	I0318 13:49:51.379599 1157416 api_server.go:141] control plane version: v1.29.0-rc.2
	I0318 13:49:51.379633 1157416 api_server.go:131] duration metric: took 6.014741397s to wait for apiserver health ...
	I0318 13:49:51.379645 1157416 cni.go:84] Creating CNI manager for ""
	I0318 13:49:51.379654 1157416 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:49:51.582399 1157416 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
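The block above is the restart path polling the apiserver's /healthz endpoint about twice per second; the 500s persist while post-start hooks such as rbac/bootstrap-roles are still pending, and the wait ends once a 200 comes back (about 6s in this run). A minimal sketch of that wait loop, using only the Go standard library; the plain http.Client, interval and timeout below are illustrative assumptions, not minikube's actual implementation:

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
	// A 500 ending in "healthz check failed" is treated as "not ready yet".
	func waitForHealthz(client *http.Client, url string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver reports healthy
				}
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log above
		}
		return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
	}

	func main() {
		// A real caller would configure TLS with the cluster CA; a plain client is shown for brevity.
		client := &http.Client{Timeout: 2 * time.Second}
		if err := waitForHealthz(client, "https://192.168.39.7:8443/healthz", 60*time.Second); err != nil {
			fmt.Println(err)
		}
	}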
	I0318 13:49:46.955128 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:46.955620 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:46.955649 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:46.955579 1158648 retry.go:31] will retry after 817.278037ms: waiting for machine to come up
	I0318 13:49:47.774954 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:47.775449 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:47.775480 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:47.775391 1158648 retry.go:31] will retry after 1.032655883s: waiting for machine to come up
	I0318 13:49:48.810156 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:48.810699 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:48.810730 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:48.810644 1158648 retry.go:31] will retry after 1.1441145s: waiting for machine to come up
	I0318 13:49:49.956702 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:49.957179 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:49.957214 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:49.957105 1158648 retry.go:31] will retry after 1.428592019s: waiting for machine to come up
	I0318 13:49:51.387025 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:51.387627 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:51.387660 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:51.387555 1158648 retry.go:31] will retry after 2.266795202s: waiting for machine to come up
	I0318 13:49:50.947045 1157708 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.436514023s)
	I0318 13:49:50.947084 1157708 crio.go:451] duration metric: took 3.436661543s to extract the tarball
	I0318 13:49:50.947095 1157708 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 13:49:51.007406 1157708 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:49:51.048060 1157708 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 13:49:51.048091 1157708 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 13:49:51.048181 1157708 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:49:51.048228 1157708 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 13:49:51.048287 1157708 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0318 13:49:51.048346 1157708 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0318 13:49:51.048398 1157708 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 13:49:51.048432 1157708 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0318 13:49:51.048232 1157708 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 13:49:51.048183 1157708 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 13:49:51.049960 1157708 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0318 13:49:51.050268 1157708 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:49:51.050288 1157708 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0318 13:49:51.050355 1157708 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 13:49:51.050594 1157708 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 13:49:51.050627 1157708 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0318 13:49:51.050584 1157708 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 13:49:51.051230 1157708 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 13:49:51.219906 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0318 13:49:51.220734 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 13:49:51.235283 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0318 13:49:51.236445 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0318 13:49:51.246700 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0318 13:49:51.251299 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0318 13:49:51.311054 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0318 13:49:51.311292 1157708 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0318 13:49:51.311336 1157708 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0318 13:49:51.311389 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:51.343594 1157708 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0318 13:49:51.343649 1157708 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 13:49:51.343739 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:51.391608 1157708 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0318 13:49:51.391657 1157708 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 13:49:51.391706 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:51.448987 1157708 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0318 13:49:51.449029 1157708 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0318 13:49:51.449058 1157708 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 13:49:51.449061 1157708 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0318 13:49:51.449088 1157708 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0318 13:49:51.449103 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:51.449035 1157708 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0318 13:49:51.449135 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0318 13:49:51.449178 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:51.449207 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 13:49:51.449245 1157708 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0318 13:49:51.449267 1157708 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 13:49:51.449317 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:51.449210 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:51.449223 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0318 13:49:51.469614 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0318 13:49:51.469613 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0318 13:49:51.562455 1157708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0318 13:49:51.562506 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0318 13:49:51.564170 1157708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0318 13:49:51.564269 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0318 13:49:51.578471 1157708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0318 13:49:51.615689 1157708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0318 13:49:51.615708 1157708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0318 13:49:51.657287 1157708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0318 13:49:51.657361 1157708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0318 13:49:51.956746 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:49:52.106933 1157708 cache_images.go:92] duration metric: took 1.058823514s to LoadCachedImages
	W0318 13:49:52.107046 1157708 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0318 13:49:52.107064 1157708 kubeadm.go:928] updating node { 192.168.72.135 8443 v1.20.0 crio true true} ...
	I0318 13:49:52.107259 1157708 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-909137 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.135
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-909137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 13:49:52.107348 1157708 ssh_runner.go:195] Run: crio config
	I0318 13:49:52.163493 1157708 cni.go:84] Creating CNI manager for ""
	I0318 13:49:52.163526 1157708 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:49:52.163546 1157708 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 13:49:52.163572 1157708 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.135 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-909137 NodeName:old-k8s-version-909137 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.135"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.135 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0318 13:49:52.163740 1157708 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.135
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-909137"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.135
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.135"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 13:49:52.163818 1157708 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0318 13:49:52.175668 1157708 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 13:49:52.175740 1157708 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 13:49:52.186745 1157708 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0318 13:49:52.209877 1157708 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 13:49:52.232921 1157708 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0318 13:49:52.256571 1157708 ssh_runner.go:195] Run: grep 192.168.72.135	control-plane.minikube.internal$ /etc/hosts
	I0318 13:49:52.262776 1157708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.135	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:49:52.278435 1157708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:49:52.422705 1157708 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:49:52.443710 1157708 certs.go:68] Setting up /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137 for IP: 192.168.72.135
	I0318 13:49:52.443740 1157708 certs.go:194] generating shared ca certs ...
	I0318 13:49:52.443760 1157708 certs.go:226] acquiring lock for ca certs: {Name:mka24dbce78b8549c3bcfe7842863cce2dcda207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:49:52.443951 1157708 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key
	I0318 13:49:52.444009 1157708 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key
	I0318 13:49:52.444023 1157708 certs.go:256] generating profile certs ...
	I0318 13:49:52.444155 1157708 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/client.key
	I0318 13:49:52.444239 1157708 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/apiserver.key.e9806bd6
	I0318 13:49:52.444303 1157708 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/proxy-client.key
	I0318 13:49:52.444492 1157708 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem (1338 bytes)
	W0318 13:49:52.444532 1157708 certs.go:480] ignoring /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136_empty.pem, impossibly tiny 0 bytes
	I0318 13:49:52.444548 1157708 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 13:49:52.444585 1157708 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem (1078 bytes)
	I0318 13:49:52.444633 1157708 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem (1123 bytes)
	I0318 13:49:52.444672 1157708 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem (1679 bytes)
	I0318 13:49:52.444729 1157708 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:49:52.445363 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 13:49:52.506720 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 13:49:52.550057 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 13:49:52.586845 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 13:49:52.627933 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0318 13:49:52.681479 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 13:49:52.722052 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 13:49:52.755021 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 13:49:52.782181 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 13:49:52.808269 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem --> /usr/share/ca-certificates/1114136.pem (1338 bytes)
	I0318 13:49:52.835041 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /usr/share/ca-certificates/11141362.pem (1708 bytes)
	I0318 13:49:52.863776 1157708 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 13:49:52.883579 1157708 ssh_runner.go:195] Run: openssl version
	I0318 13:49:52.889846 1157708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 13:49:52.902288 1157708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:49:52.908241 1157708 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:49:52.908302 1157708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:49:52.915392 1157708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 13:49:52.928374 1157708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1114136.pem && ln -fs /usr/share/ca-certificates/1114136.pem /etc/ssl/certs/1114136.pem"
	I0318 13:49:52.941444 1157708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1114136.pem
	I0318 13:49:52.946463 1157708 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:26 /usr/share/ca-certificates/1114136.pem
	I0318 13:49:52.946514 1157708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1114136.pem
	I0318 13:49:52.953447 1157708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1114136.pem /etc/ssl/certs/51391683.0"
	I0318 13:49:52.966231 1157708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11141362.pem && ln -fs /usr/share/ca-certificates/11141362.pem /etc/ssl/certs/11141362.pem"
	I0318 13:49:52.977986 1157708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11141362.pem
	I0318 13:49:52.982748 1157708 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:26 /usr/share/ca-certificates/11141362.pem
	I0318 13:49:52.982809 1157708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11141362.pem
	I0318 13:49:52.988715 1157708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11141362.pem /etc/ssl/certs/3ec20f2e.0"
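The certificate steps above copy each CA bundle into /usr/share/ca-certificates on the node and then create the <subject-hash>.0 symlinks that OpenSSL expects in /etc/ssl/certs, using `openssl x509 -hash -noout` to compute the hash (b5213941, 51391683, 3ec20f2e in this run). A rough sketch of that sequence, assuming it runs on the node with passwordless sudo and shells out to the same commands the log shows; the helper name and paths are illustrative:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// installCACert links pemPath into /etc/ssl/certs under both its own name and
	// the OpenSSL subject-hash name (<hash>.0), mirroring the commands in the log.
	func installCACert(pemPath, linkName string) error {
		if out, err := exec.Command("sudo", "ln", "-fs", pemPath, "/etc/ssl/certs/"+linkName).CombinedOutput(); err != nil {
			return fmt.Errorf("link %s: %v: %s", linkName, err, out)
		}
		// `openssl x509 -hash -noout -in <pem>` prints the subject hash, e.g. b5213941
		hashOut, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hash %s: %v", pemPath, err)
		}
		hash := strings.TrimSpace(string(hashOut))
		if out, err := exec.Command("sudo", "ln", "-fs", "/etc/ssl/certs/"+linkName, "/etc/ssl/certs/"+hash+".0").CombinedOutput(); err != nil {
			return fmt.Errorf("link %s.0: %v: %s", hash, err, out)
		}
		return nil
	}

	func main() {
		if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "minikubeCA.pem"); err != nil {
			fmt.Println(err)
		}
	}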
	I0318 13:49:51.626774 1157416 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 13:49:51.642685 1157416 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 13:49:51.669902 1157416 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 13:49:51.759474 1157416 system_pods.go:59] 8 kube-system pods found
	I0318 13:49:51.759519 1157416 system_pods.go:61] "coredns-76f75df574-kxzfm" [d0aad76d-f135-4d4a-a2f5-117707b4b2f4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 13:49:51.759530 1157416 system_pods.go:61] "etcd-no-preload-537236" [d02ad01c-1b16-4b97-be18-237b1cbfe3aa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 13:49:51.759539 1157416 system_pods.go:61] "kube-apiserver-no-preload-537236" [00b05050-229b-47f4-9af2-12be1711200a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 13:49:51.759548 1157416 system_pods.go:61] "kube-controller-manager-no-preload-537236" [3e7b86df-4111-4bd9-8925-a22cf12e10ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 13:49:51.759552 1157416 system_pods.go:61] "kube-proxy-5dspp" [adee19a0-eeb6-438f-a55d-30f1e1d87ef6] Running
	I0318 13:49:51.759557 1157416 system_pods.go:61] "kube-scheduler-no-preload-537236" [17628d51-80f5-4985-8ddb-151cab8f8c5d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 13:49:51.759562 1157416 system_pods.go:61] "metrics-server-57f55c9bc5-hhh5m" [282de489-beee-47a9-bd29-5da43cf70146] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:49:51.759565 1157416 system_pods.go:61] "storage-provisioner" [97d3de68-0863-4bba-9cb1-2ce98d791935] Running
	I0318 13:49:51.759578 1157416 system_pods.go:74] duration metric: took 89.654007ms to wait for pod list to return data ...
	I0318 13:49:51.759591 1157416 node_conditions.go:102] verifying NodePressure condition ...
	I0318 13:49:51.764164 1157416 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:49:51.764191 1157416 node_conditions.go:123] node cpu capacity is 2
	I0318 13:49:51.764204 1157416 node_conditions.go:105] duration metric: took 4.607295ms to run NodePressure ...
	I0318 13:49:51.764227 1157416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:52.645812 1157416 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 13:49:52.653573 1157416 kubeadm.go:733] kubelet initialised
	I0318 13:49:52.653602 1157416 kubeadm.go:734] duration metric: took 7.75557ms waiting for restarted kubelet to initialise ...
	I0318 13:49:52.653614 1157416 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:49:52.662179 1157416 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-kxzfm" in "kube-system" namespace to be "Ready" ...
	I0318 13:49:54.678656 1157416 pod_ready.go:102] pod "coredns-76f75df574-kxzfm" in "kube-system" namespace has status "Ready":"False"
	I0318 13:49:53.656476 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:53.656913 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:53.656943 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:53.656870 1158648 retry.go:31] will retry after 2.341702781s: waiting for machine to come up
	I0318 13:49:56.001662 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:56.002163 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:56.002188 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:56.002106 1158648 retry.go:31] will retry after 2.885262489s: waiting for machine to come up
	I0318 13:49:53.000141 1157708 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:49:53.005021 1157708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 13:49:53.011156 1157708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 13:49:53.018329 1157708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 13:49:53.025687 1157708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 13:49:53.032199 1157708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 13:49:53.039048 1157708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 13:49:53.045789 1157708 kubeadm.go:391] StartCluster: {Name:old-k8s-version-909137 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-909137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.135 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:49:53.045882 1157708 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 13:49:53.045931 1157708 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:49:53.085682 1157708 cri.go:89] found id: ""
	I0318 13:49:53.085788 1157708 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 13:49:53.098063 1157708 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 13:49:53.098091 1157708 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 13:49:53.098098 1157708 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 13:49:53.098153 1157708 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 13:49:53.109692 1157708 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:49:53.110853 1157708 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-909137" does not appear in /home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 13:49:53.111862 1157708 kubeconfig.go:62] /home/jenkins/minikube-integration/18429-1106816/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-909137" cluster setting kubeconfig missing "old-k8s-version-909137" context setting]
	I0318 13:49:53.113334 1157708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/kubeconfig: {Name:mk9c139f2702214315ee08dd7c5d02f739047458 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:49:53.115135 1157708 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 13:49:53.125910 1157708 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.135
	I0318 13:49:53.125949 1157708 kubeadm.go:1154] stopping kube-system containers ...
	I0318 13:49:53.125965 1157708 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 13:49:53.126029 1157708 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:49:53.172181 1157708 cri.go:89] found id: ""
	I0318 13:49:53.172268 1157708 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 13:49:53.189585 1157708 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:49:53.200744 1157708 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:49:53.200768 1157708 kubeadm.go:156] found existing configuration files:
	
	I0318 13:49:53.200811 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:49:53.211176 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:49:53.211250 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:49:53.221744 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:49:53.231342 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:49:53.231404 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:49:53.242162 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:49:53.252408 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:49:53.252480 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:49:53.262690 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:49:53.272829 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:49:53.272903 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:49:53.283287 1157708 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:49:53.294124 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:53.437482 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:54.297415 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:54.588919 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:54.758204 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:54.863030 1157708 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:49:54.863140 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:55.363708 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:55.863301 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:56.364064 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:56.863896 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:57.363240 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:57.863621 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:57.212652 1157416 pod_ready.go:102] pod "coredns-76f75df574-kxzfm" in "kube-system" namespace has status "Ready":"False"
	I0318 13:49:57.669562 1157416 pod_ready.go:92] pod "coredns-76f75df574-kxzfm" in "kube-system" namespace has status "Ready":"True"
	I0318 13:49:57.669584 1157416 pod_ready.go:81] duration metric: took 5.007366512s for pod "coredns-76f75df574-kxzfm" in "kube-system" namespace to be "Ready" ...
	I0318 13:49:57.669597 1157416 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:49:58.176528 1157416 pod_ready.go:92] pod "etcd-no-preload-537236" in "kube-system" namespace has status "Ready":"True"
	I0318 13:49:58.176557 1157416 pod_ready.go:81] duration metric: took 506.95201ms for pod "etcd-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:49:58.176570 1157416 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:49:58.888400 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:58.888706 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:58.888742 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:58.888681 1158648 retry.go:31] will retry after 4.094701536s: waiting for machine to come up
	I0318 13:49:58.363294 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:58.864051 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:59.363586 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:59.863802 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:00.363862 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:00.864277 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:01.363381 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:01.864307 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:02.363278 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:02.863315 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
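The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` runs above are the restart path waiting, at roughly 500ms intervals, for the kube-apiserver process to appear after the `kubeadm init phase` steps have written the static-pod manifests. A hedged sketch of that wait, shelling out to pgrep the same way; the interval and timeout are illustrative:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServerProcess polls pgrep until a kube-apiserver process matching
	// the minikube manifests shows up, or the timeout expires.
	func waitForAPIServerProcess(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// pgrep exits 0 only when at least one process matches the pattern.
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
	}

	func main() {
		if err := waitForAPIServerProcess(4 * time.Minute); err != nil {
			fmt.Println(err)
		}
	}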
	I0318 13:50:04.309987 1157263 start.go:364] duration metric: took 57.988518292s to acquireMachinesLock for "embed-certs-173036"
	I0318 13:50:04.310046 1157263 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:50:04.310062 1157263 fix.go:54] fixHost starting: 
	I0318 13:50:04.310469 1157263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:50:04.310506 1157263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:50:04.330585 1157263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41957
	I0318 13:50:04.331049 1157263 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:50:04.331648 1157263 main.go:141] libmachine: Using API Version  1
	I0318 13:50:04.331684 1157263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:50:04.332066 1157263 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:50:04.332316 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:50:04.332513 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetState
	I0318 13:50:04.334091 1157263 fix.go:112] recreateIfNeeded on embed-certs-173036: state=Stopped err=<nil>
	I0318 13:50:04.334117 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	W0318 13:50:04.334299 1157263 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:50:04.336146 1157263 out.go:177] * Restarting existing kvm2 VM for "embed-certs-173036" ...
	I0318 13:50:00.184168 1157416 pod_ready.go:102] pod "kube-apiserver-no-preload-537236" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:01.183846 1157416 pod_ready.go:92] pod "kube-apiserver-no-preload-537236" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:01.183872 1157416 pod_ready.go:81] duration metric: took 3.007292631s for pod "kube-apiserver-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:01.183884 1157416 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:03.206725 1157416 pod_ready.go:102] pod "kube-controller-manager-no-preload-537236" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:04.691357 1157416 pod_ready.go:92] pod "kube-controller-manager-no-preload-537236" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:04.691391 1157416 pod_ready.go:81] duration metric: took 3.507497259s for pod "kube-controller-manager-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:04.691410 1157416 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5dspp" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:04.696593 1157416 pod_ready.go:92] pod "kube-proxy-5dspp" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:04.696618 1157416 pod_ready.go:81] duration metric: took 5.198628ms for pod "kube-proxy-5dspp" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:04.696627 1157416 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:04.700977 1157416 pod_ready.go:92] pod "kube-scheduler-no-preload-537236" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:04.700995 1157416 pod_ready.go:81] duration metric: took 4.36095ms for pod "kube-scheduler-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:04.701006 1157416 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace to be "Ready" ...
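The pod_ready lines above wait up to 4m0s for each system-critical pod (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler, then metrics-server) to report Ready before the restart is considered settled. Outside the test harness the same check can be expressed with `kubectl wait --for=condition=Ready`; a small sketch shelling out to it, assuming the kubeconfig context carries the profile name (the pod names are taken from the log, the helper itself is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitPodReady blocks until the named pod reports the Ready condition,
	// delegating the polling to `kubectl wait`.
	func waitPodReady(kubeContext, namespace, pod string, timeout time.Duration) error {
		cmd := exec.Command("kubectl", "--context", kubeContext, "-n", namespace,
			"wait", "--for=condition=Ready", "pod/"+pod, "--timeout="+timeout.String())
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("pod %s not ready: %v: %s", pod, err, out)
		}
		return nil
	}

	func main() {
		pods := []string{"etcd-no-preload-537236", "kube-apiserver-no-preload-537236", "kube-proxy-5dspp"}
		for _, pod := range pods {
			if err := waitPodReady("no-preload-537236", "kube-system", pod, 4*time.Minute); err != nil {
				fmt.Println(err)
			}
		}
	}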
	I0318 13:50:02.985340 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:02.985804 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has current primary IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:02.985818 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Found IP for machine: 192.168.61.3
	I0318 13:50:02.985828 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Reserving static IP address...
	I0318 13:50:02.986233 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-569210", mac: "52:54:00:4d:48:26", ip: "192.168.61.3"} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:02.986292 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | skip adding static IP to network mk-default-k8s-diff-port-569210 - found existing host DHCP lease matching {name: "default-k8s-diff-port-569210", mac: "52:54:00:4d:48:26", ip: "192.168.61.3"}
	I0318 13:50:02.986307 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Reserved static IP address: 192.168.61.3
	I0318 13:50:02.986321 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for SSH to be available...
	I0318 13:50:02.986337 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | Getting to WaitForSSH function...
	I0318 13:50:02.988609 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:02.988962 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:02.988995 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:02.989209 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | Using SSH client type: external
	I0318 13:50:02.989235 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | Using SSH private key: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa (-rw-------)
	I0318 13:50:02.989272 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 13:50:02.989293 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | About to run SSH command:
	I0318 13:50:02.989306 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | exit 0
	I0318 13:50:03.112557 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | SSH cmd err, output: <nil>: 
	I0318 13:50:03.112907 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetConfigRaw
	I0318 13:50:03.113605 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetIP
	I0318 13:50:03.116140 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.116569 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:03.116599 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.116858 1157887 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/config.json ...
	I0318 13:50:03.117065 1157887 machine.go:94] provisionDockerMachine start ...
	I0318 13:50:03.117091 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:50:03.117296 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:03.119506 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.119861 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:03.119891 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.120015 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:03.120212 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.120429 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.120608 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:03.120798 1157887 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:03.120995 1157887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0318 13:50:03.121010 1157887 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 13:50:03.221645 1157887 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 13:50:03.221693 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetMachineName
	I0318 13:50:03.221990 1157887 buildroot.go:166] provisioning hostname "default-k8s-diff-port-569210"
	I0318 13:50:03.222027 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetMachineName
	I0318 13:50:03.222257 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:03.225134 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.225543 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:03.225568 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.225714 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:03.226022 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.226225 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.226400 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:03.226595 1157887 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:03.226870 1157887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0318 13:50:03.226893 1157887 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-569210 && echo "default-k8s-diff-port-569210" | sudo tee /etc/hostname
	I0318 13:50:03.350362 1157887 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-569210
	
	I0318 13:50:03.350398 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:03.353307 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.353700 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:03.353737 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.353911 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:03.354111 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.354283 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.354413 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:03.354600 1157887 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:03.354805 1157887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0318 13:50:03.354824 1157887 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-569210' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-569210/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-569210' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 13:50:03.471084 1157887 main.go:141] libmachine: SSH cmd err, output: <nil>: 
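	(The SSH script above idempotently sets the VM hostname and rewrites the 127.0.1.1 entry in /etc/hosts. The following is a rough Go sketch of the same /etc/hosts edit applied to the file contents in memory; the package and function names are illustrative only, not minikube's provisioner code.)

	package hostsedit

	import (
		"fmt"
		"regexp"
		"strings"
	)

	// ensureHostname mirrors the shell above: if no line already maps the
	// hostname, either rewrite an existing 127.0.1.1 entry or append one.
	func ensureHostname(hosts, hostname string) string {
		if regexp.MustCompile(`(?m)\s`+regexp.QuoteMeta(hostname)+`$`).MatchString(hosts) {
			return hosts // hostname already present
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loopback.MatchString(hosts) {
			return loopback.ReplaceAllString(hosts, "127.0.1.1 "+hostname)
		}
		return strings.TrimRight(hosts, "\n") + fmt.Sprintf("\n127.0.1.1 %s\n", hostname)
	}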
	I0318 13:50:03.471120 1157887 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18429-1106816/.minikube CaCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18429-1106816/.minikube}
	I0318 13:50:03.471159 1157887 buildroot.go:174] setting up certificates
	I0318 13:50:03.471229 1157887 provision.go:84] configureAuth start
	I0318 13:50:03.471247 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetMachineName
	I0318 13:50:03.471576 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetIP
	I0318 13:50:03.474528 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.474918 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:03.474957 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.475210 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:03.477624 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.477910 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:03.477936 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.478118 1157887 provision.go:143] copyHostCerts
	I0318 13:50:03.478196 1157887 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem, removing ...
	I0318 13:50:03.478213 1157887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 13:50:03.478281 1157887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem (1078 bytes)
	I0318 13:50:03.478424 1157887 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem, removing ...
	I0318 13:50:03.478437 1157887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 13:50:03.478466 1157887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem (1123 bytes)
	I0318 13:50:03.478537 1157887 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem, removing ...
	I0318 13:50:03.478548 1157887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 13:50:03.478571 1157887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem (1679 bytes)
	I0318 13:50:03.478640 1157887 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-569210 san=[127.0.0.1 192.168.61.3 default-k8s-diff-port-569210 localhost minikube]
	I0318 13:50:03.600956 1157887 provision.go:177] copyRemoteCerts
	I0318 13:50:03.601028 1157887 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 13:50:03.601058 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:03.603986 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.604437 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:03.604468 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.604659 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:03.604922 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.605086 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:03.605260 1157887 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa Username:docker}
	I0318 13:50:03.688256 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0318 13:50:03.716748 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 13:50:03.744848 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 13:50:03.771601 1157887 provision.go:87] duration metric: took 300.358039ms to configureAuth
	I0318 13:50:03.771631 1157887 buildroot.go:189] setting minikube options for container-runtime
	I0318 13:50:03.771893 1157887 config.go:182] Loaded profile config "default-k8s-diff-port-569210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:50:03.771992 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:03.774410 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.774725 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:03.774760 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.774926 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:03.775099 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.775292 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.775456 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:03.775642 1157887 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:03.775872 1157887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0318 13:50:03.775901 1157887 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 13:50:04.068202 1157887 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 13:50:04.068242 1157887 machine.go:97] duration metric: took 951.160051ms to provisionDockerMachine
	I0318 13:50:04.068259 1157887 start.go:293] postStartSetup for "default-k8s-diff-port-569210" (driver="kvm2")
	I0318 13:50:04.068277 1157887 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 13:50:04.068303 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:50:04.068677 1157887 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 13:50:04.068712 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:04.071619 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.071974 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:04.072002 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.072148 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:04.072354 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:04.072519 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:04.072639 1157887 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa Username:docker}
	I0318 13:50:04.157469 1157887 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 13:50:04.162629 1157887 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 13:50:04.162655 1157887 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/addons for local assets ...
	I0318 13:50:04.162719 1157887 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/files for local assets ...
	I0318 13:50:04.162810 1157887 filesync.go:149] local asset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> 11141362.pem in /etc/ssl/certs
	I0318 13:50:04.162911 1157887 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 13:50:04.173898 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:50:04.204771 1157887 start.go:296] duration metric: took 136.495479ms for postStartSetup
	I0318 13:50:04.204814 1157887 fix.go:56] duration metric: took 20.554947186s for fixHost
	I0318 13:50:04.204839 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:04.207619 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.207923 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:04.207951 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.208088 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:04.208296 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:04.208509 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:04.208657 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:04.208801 1157887 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:04.208975 1157887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0318 13:50:04.208988 1157887 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 13:50:04.309828 1157887 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710769804.283357411
	
	I0318 13:50:04.309861 1157887 fix.go:216] guest clock: 1710769804.283357411
	I0318 13:50:04.309871 1157887 fix.go:229] Guest: 2024-03-18 13:50:04.283357411 +0000 UTC Remote: 2024-03-18 13:50:04.204818975 +0000 UTC m=+262.583280441 (delta=78.538436ms)
	I0318 13:50:04.309898 1157887 fix.go:200] guest clock delta is within tolerance: 78.538436ms
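	(fix.go above reads the guest clock over SSH with date, compares it against the host time recorded at the moment of the call, and only resynchronizes if the delta exceeds a tolerance. A small illustrative check follows; the 5s tolerance is an assumption, not minikube's actual threshold.)

	package clockcheck

	import (
		"fmt"
		"time"
	)

	// clockDeltaOK reports whether the guest clock is within tolerance of the
	// host clock, as in the "guest clock delta is within tolerance" log line.
	func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func exampleDelta() {
		guest := time.Unix(1710769804, 283357411)          // guest clock value from the log
		host := guest.Add(-78538436 * time.Nanosecond)     // reproduces the 78.538436ms delta above
		delta, ok := clockDeltaOK(guest, host, 5*time.Second) // 5s tolerance is an assumption
		fmt.Println(delta, ok)
	}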
	I0318 13:50:04.309904 1157887 start.go:83] releasing machines lock for "default-k8s-diff-port-569210", held for 20.660081187s
	I0318 13:50:04.309933 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:50:04.310247 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetIP
	I0318 13:50:04.313302 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.313747 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:04.313777 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.313956 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:50:04.314591 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:50:04.314792 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:50:04.314878 1157887 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 13:50:04.314934 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:04.315014 1157887 ssh_runner.go:195] Run: cat /version.json
	I0318 13:50:04.315059 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:04.318021 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.318056 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.318438 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:04.318474 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.318500 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:04.318518 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.318661 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:04.318763 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:04.318879 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:04.318962 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:04.319052 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:04.319110 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:04.319229 1157887 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa Username:docker}
	I0318 13:50:04.319286 1157887 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa Username:docker}
	I0318 13:50:04.426710 1157887 ssh_runner.go:195] Run: systemctl --version
	I0318 13:50:04.433482 1157887 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 13:50:04.590331 1157887 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 13:50:04.598896 1157887 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 13:50:04.598974 1157887 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 13:50:04.617060 1157887 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 13:50:04.617095 1157887 start.go:494] detecting cgroup driver to use...
	I0318 13:50:04.617190 1157887 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 13:50:04.633902 1157887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:50:04.648705 1157887 docker.go:217] disabling cri-docker service (if available) ...
	I0318 13:50:04.648759 1157887 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 13:50:04.665516 1157887 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 13:50:04.681326 1157887 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 13:50:04.798310 1157887 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 13:50:04.972066 1157887 docker.go:233] disabling docker service ...
	I0318 13:50:04.972133 1157887 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 13:50:04.995498 1157887 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 13:50:05.014901 1157887 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 13:50:05.158158 1157887 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 13:50:05.309791 1157887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 13:50:05.324965 1157887 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:50:05.346489 1157887 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 13:50:05.346595 1157887 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:50:05.358753 1157887 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 13:50:05.358823 1157887 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:50:05.374416 1157887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:50:05.394228 1157887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:50:05.406975 1157887 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 13:50:05.420201 1157887 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 13:50:05.432405 1157887 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 13:50:05.432479 1157887 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 13:50:05.449386 1157887 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 13:50:05.461081 1157887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:50:05.607102 1157887 ssh_runner.go:195] Run: sudo systemctl restart crio
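	(The sed commands above rewrite pause_image and cgroup_manager in /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted. Below is a hedged Go sketch of the equivalent line-level rewrite; the package name, function name, and error handling are illustrative, not minikube's crio.go.)

	package crioconf

	import (
		"os"
		"regexp"
	)

	// rewriteCrioConf performs the equivalent of the sed edits above:
	// force the pause image and switch the cgroup manager (e.g. to cgroupfs).
	func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "`+pauseImage+`"`))
		out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(out, []byte(`cgroup_manager = "`+cgroupManager+`"`))
		return os.WriteFile(path, out, 0o644)
	}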
	I0318 13:50:05.776152 1157887 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 13:50:05.776267 1157887 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 13:50:05.782168 1157887 start.go:562] Will wait 60s for crictl version
	I0318 13:50:05.782247 1157887 ssh_runner.go:195] Run: which crictl
	I0318 13:50:05.787932 1157887 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 13:50:05.831304 1157887 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 13:50:05.831399 1157887 ssh_runner.go:195] Run: crio --version
	I0318 13:50:05.865410 1157887 ssh_runner.go:195] Run: crio --version
	I0318 13:50:05.908406 1157887 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 13:50:05.909651 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetIP
	I0318 13:50:05.912855 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:05.913213 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:05.913256 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:05.913470 1157887 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0318 13:50:05.918362 1157887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:50:05.933755 1157887 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-569210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-569210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.3 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 13:50:05.933926 1157887 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 13:50:05.934002 1157887 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:50:05.978920 1157887 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0318 13:50:05.978998 1157887 ssh_runner.go:195] Run: which lz4
	I0318 13:50:05.983751 1157887 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0318 13:50:05.988862 1157887 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 13:50:05.988895 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0318 13:50:03.363591 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:03.864049 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:04.363310 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:04.863306 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:05.363706 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:05.863618 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:06.364183 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:06.863776 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:07.363832 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:07.863261 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:04.337631 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .Start
	I0318 13:50:04.337838 1157263 main.go:141] libmachine: (embed-certs-173036) Ensuring networks are active...
	I0318 13:50:04.338615 1157263 main.go:141] libmachine: (embed-certs-173036) Ensuring network default is active
	I0318 13:50:04.338978 1157263 main.go:141] libmachine: (embed-certs-173036) Ensuring network mk-embed-certs-173036 is active
	I0318 13:50:04.339444 1157263 main.go:141] libmachine: (embed-certs-173036) Getting domain xml...
	I0318 13:50:04.340295 1157263 main.go:141] libmachine: (embed-certs-173036) Creating domain...
	I0318 13:50:05.616437 1157263 main.go:141] libmachine: (embed-certs-173036) Waiting to get IP...
	I0318 13:50:05.617646 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:05.618096 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:05.618168 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:05.618075 1158806 retry.go:31] will retry after 234.69885ms: waiting for machine to come up
	I0318 13:50:05.854749 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:05.855365 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:05.855401 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:05.855310 1158806 retry.go:31] will retry after 324.015594ms: waiting for machine to come up
	I0318 13:50:06.181178 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:06.182089 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:06.182123 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:06.182038 1158806 retry.go:31] will retry after 456.172304ms: waiting for machine to come up
	I0318 13:50:06.639827 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:06.640288 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:06.640349 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:06.640244 1158806 retry.go:31] will retry after 561.082549ms: waiting for machine to come up
	I0318 13:50:07.203208 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:07.203798 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:07.203825 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:07.203696 1158806 retry.go:31] will retry after 633.905437ms: waiting for machine to come up
	I0318 13:50:07.839205 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:07.839760 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:07.839792 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:07.839698 1158806 retry.go:31] will retry after 629.254629ms: waiting for machine to come up
	I0318 13:50:08.470625 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:08.471073 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:08.471105 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:08.471021 1158806 retry.go:31] will retry after 771.526268ms: waiting for machine to come up
	I0318 13:50:06.709604 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:09.208197 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:08.056220 1157887 crio.go:444] duration metric: took 2.072501191s to copy over tarball
	I0318 13:50:08.056361 1157887 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 13:50:10.763501 1157887 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.707101479s)
	I0318 13:50:10.763560 1157887 crio.go:451] duration metric: took 2.707303654s to extract the tarball
	I0318 13:50:10.763570 1157887 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 13:50:10.808643 1157887 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:50:10.860178 1157887 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 13:50:10.860218 1157887 cache_images.go:84] Images are preloaded, skipping loading
	I0318 13:50:10.860229 1157887 kubeadm.go:928] updating node { 192.168.61.3 8444 v1.28.4 crio true true} ...
	I0318 13:50:10.860381 1157887 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-569210 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-569210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 13:50:10.860455 1157887 ssh_runner.go:195] Run: crio config
	I0318 13:50:10.918077 1157887 cni.go:84] Creating CNI manager for ""
	I0318 13:50:10.918109 1157887 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:50:10.918124 1157887 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 13:50:10.918154 1157887 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.3 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-569210 NodeName:default-k8s-diff-port-569210 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 13:50:10.918362 1157887 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.3
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-569210"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
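	(The kubeadm config above is rendered from the cluster parameters, such as advertise address 192.168.61.3, bind port 8444, and the pod/service CIDRs, and is later copied to /var/tmp/minikube/kubeadm.yaml.new. A minimal templating sketch follows; the template covers only a few fields and is not the full config minikube generates.)

	package kubeadmcfg

	import (
		"os"
		"text/template"
	)

	// A trimmed-down version of the config above; only a few fields are
	// templated for illustration.
	const clusterTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.NodeIP}}
	  bindPort: {{.APIServerPort}}
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
	kubernetesVersion: {{.KubernetesVersion}}
	networking:
	  podSubnet: "{{.PodCIDR}}"
	  serviceSubnet: {{.ServiceCIDR}}
	`

	type params struct {
		NodeIP            string
		APIServerPort     int
		KubernetesVersion string
		PodCIDR           string
		ServiceCIDR       string
	}

	func render() error {
		p := params{"192.168.61.3", 8444, "v1.28.4", "10.244.0.0/16", "10.96.0.0/12"}
		t := template.Must(template.New("kubeadm").Parse(clusterTmpl))
		return t.Execute(os.Stdout, p)
	}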
	
	I0318 13:50:10.918457 1157887 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 13:50:10.930573 1157887 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 13:50:10.930639 1157887 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 13:50:10.941181 1157887 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I0318 13:50:10.960048 1157887 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 13:50:10.980367 1157887 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0318 13:50:11.001607 1157887 ssh_runner.go:195] Run: grep 192.168.61.3	control-plane.minikube.internal$ /etc/hosts
	I0318 13:50:11.006363 1157887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:50:11.020871 1157887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:50:11.164152 1157887 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:50:11.185025 1157887 certs.go:68] Setting up /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210 for IP: 192.168.61.3
	I0318 13:50:11.185060 1157887 certs.go:194] generating shared ca certs ...
	I0318 13:50:11.185096 1157887 certs.go:226] acquiring lock for ca certs: {Name:mka24dbce78b8549c3bcfe7842863cce2dcda207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:50:11.185277 1157887 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key
	I0318 13:50:11.185342 1157887 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key
	I0318 13:50:11.185356 1157887 certs.go:256] generating profile certs ...
	I0318 13:50:11.185464 1157887 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/client.key
	I0318 13:50:11.185541 1157887 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/apiserver.key.e15332a5
	I0318 13:50:11.185590 1157887 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/proxy-client.key
	I0318 13:50:11.185757 1157887 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem (1338 bytes)
	W0318 13:50:11.185799 1157887 certs.go:480] ignoring /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136_empty.pem, impossibly tiny 0 bytes
	I0318 13:50:11.185812 1157887 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 13:50:11.185841 1157887 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem (1078 bytes)
	I0318 13:50:11.185899 1157887 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem (1123 bytes)
	I0318 13:50:11.185945 1157887 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem (1679 bytes)
	I0318 13:50:11.185999 1157887 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:50:11.186853 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 13:50:11.221967 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 13:50:11.250180 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 13:50:11.287449 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 13:50:11.323521 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0318 13:50:11.360286 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 13:50:11.396947 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 13:50:11.426116 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 13:50:11.455183 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /usr/share/ca-certificates/11141362.pem (1708 bytes)
	I0318 13:50:11.483479 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 13:50:11.512975 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem --> /usr/share/ca-certificates/1114136.pem (1338 bytes)
	I0318 13:50:11.548393 1157887 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 13:50:11.569155 1157887 ssh_runner.go:195] Run: openssl version
	I0318 13:50:11.576084 1157887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1114136.pem && ln -fs /usr/share/ca-certificates/1114136.pem /etc/ssl/certs/1114136.pem"
	I0318 13:50:11.589110 1157887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1114136.pem
	I0318 13:50:11.594640 1157887 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:26 /usr/share/ca-certificates/1114136.pem
	I0318 13:50:11.594736 1157887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1114136.pem
	I0318 13:50:11.601473 1157887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1114136.pem /etc/ssl/certs/51391683.0"
	I0318 13:50:11.615874 1157887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11141362.pem && ln -fs /usr/share/ca-certificates/11141362.pem /etc/ssl/certs/11141362.pem"
	I0318 13:50:11.630380 1157887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11141362.pem
	I0318 13:50:11.635808 1157887 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:26 /usr/share/ca-certificates/11141362.pem
	I0318 13:50:11.635886 1157887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11141362.pem
	I0318 13:50:11.644465 1157887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11141362.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 13:50:11.661509 1157887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 13:50:08.364243 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:08.863539 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:09.364037 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:09.863621 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:10.363425 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:10.863422 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:11.363353 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:11.863485 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:12.363548 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:12.864070 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:09.243731 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:09.244146 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:09.244180 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:09.244104 1158806 retry.go:31] will retry after 1.160252016s: waiting for machine to come up
	I0318 13:50:10.405805 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:10.406270 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:10.406310 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:10.406201 1158806 retry.go:31] will retry after 1.625913099s: waiting for machine to come up
	I0318 13:50:12.033202 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:12.033674 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:12.033712 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:12.033589 1158806 retry.go:31] will retry after 1.835793865s: waiting for machine to come up
	I0318 13:50:11.211241 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:13.710211 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:11.675340 1157887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:50:11.938009 1157887 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:50:11.938089 1157887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:50:11.944766 1157887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 13:50:11.957959 1157887 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:50:11.963524 1157887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 13:50:11.971678 1157887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 13:50:11.978601 1157887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 13:50:11.985403 1157887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 13:50:11.992159 1157887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 13:50:11.998620 1157887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 13:50:12.005209 1157887 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-569210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-569210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.3 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:50:12.005300 1157887 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 13:50:12.005350 1157887 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:50:12.074518 1157887 cri.go:89] found id: ""
	I0318 13:50:12.074603 1157887 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 13:50:12.099031 1157887 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 13:50:12.099062 1157887 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 13:50:12.099070 1157887 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 13:50:12.099147 1157887 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 13:50:12.111133 1157887 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:50:12.112779 1157887 kubeconfig.go:125] found "default-k8s-diff-port-569210" server: "https://192.168.61.3:8444"
	I0318 13:50:12.116521 1157887 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 13:50:12.134902 1157887 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.3
	I0318 13:50:12.134964 1157887 kubeadm.go:1154] stopping kube-system containers ...
	I0318 13:50:12.135005 1157887 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 13:50:12.135086 1157887 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:50:12.190100 1157887 cri.go:89] found id: ""
	I0318 13:50:12.190182 1157887 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 13:50:12.211556 1157887 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:50:12.223095 1157887 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:50:12.223120 1157887 kubeadm.go:156] found existing configuration files:
	
	I0318 13:50:12.223173 1157887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0318 13:50:12.235709 1157887 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:50:12.235780 1157887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:50:12.248896 1157887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0318 13:50:12.260212 1157887 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:50:12.260285 1157887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:50:12.271424 1157887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0318 13:50:12.283083 1157887 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:50:12.283143 1157887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:50:12.294877 1157887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0318 13:50:12.305629 1157887 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:50:12.305692 1157887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:50:12.317395 1157887 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:50:12.328943 1157887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:12.471901 1157887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:13.400723 1157887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:13.601149 1157887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:13.677768 1157887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:13.796413 1157887 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:50:13.796558 1157887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:14.297639 1157887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:14.797236 1157887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:14.885767 1157887 api_server.go:72] duration metric: took 1.089353166s to wait for apiserver process to appear ...
	I0318 13:50:14.885801 1157887 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:50:14.885827 1157887 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0318 13:50:14.886464 1157887 api_server.go:269] stopped: https://192.168.61.3:8444/healthz: Get "https://192.168.61.3:8444/healthz": dial tcp 192.168.61.3:8444: connect: connection refused
	I0318 13:50:15.386913 1157887 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0318 13:50:13.364111 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:13.863871 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:14.363958 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:14.863570 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:15.364185 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:15.863974 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:16.364010 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:16.863484 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:17.363832 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:17.864149 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:13.871003 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:13.871443 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:13.871475 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:13.871398 1158806 retry.go:31] will retry after 2.53403994s: waiting for machine to come up
	I0318 13:50:16.407271 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:16.407728 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:16.407775 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:16.407708 1158806 retry.go:31] will retry after 2.371916928s: waiting for machine to come up
	I0318 13:50:18.781468 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:18.781866 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:18.781898 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:18.781809 1158806 retry.go:31] will retry after 3.250042198s: waiting for machine to come up
	I0318 13:50:17.204788 1157887 api_server.go:279] https://192.168.61.3:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 13:50:17.204828 1157887 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 13:50:17.204848 1157887 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0318 13:50:17.235957 1157887 api_server.go:279] https://192.168.61.3:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 13:50:17.236000 1157887 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 13:50:17.386349 1157887 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0318 13:50:17.393185 1157887 api_server.go:279] https://192.168.61.3:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:50:17.393220 1157887 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:50:17.886583 1157887 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0318 13:50:17.892087 1157887 api_server.go:279] https://192.168.61.3:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:50:17.892122 1157887 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:50:18.386820 1157887 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0318 13:50:18.406609 1157887 api_server.go:279] https://192.168.61.3:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:50:18.406658 1157887 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:50:18.886458 1157887 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0318 13:50:18.896097 1157887 api_server.go:279] https://192.168.61.3:8444/healthz returned 200:
	ok
	I0318 13:50:18.905565 1157887 api_server.go:141] control plane version: v1.28.4
	I0318 13:50:18.905603 1157887 api_server.go:131] duration metric: took 4.019792975s to wait for apiserver health ...
	I0318 13:50:18.905615 1157887 cni.go:84] Creating CNI manager for ""
	I0318 13:50:18.905624 1157887 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:50:18.907258 1157887 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 13:50:15.711910 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:18.209648 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:18.909133 1157887 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 13:50:18.944457 1157887 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 13:50:18.973831 1157887 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 13:50:18.984400 1157887 system_pods.go:59] 8 kube-system pods found
	I0318 13:50:18.984436 1157887 system_pods.go:61] "coredns-5dd5756b68-hwsz5" [0a91f20c-3d3b-415c-b709-7898c606d830] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 13:50:18.984447 1157887 system_pods.go:61] "etcd-default-k8s-diff-port-569210" [64925324-9666-49ab-b849-ad9b7ce54891] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 13:50:18.984456 1157887 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-569210" [8409a63f-fbac-4bf9-b54b-5ac267a58206] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 13:50:18.984465 1157887 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-569210" [a2d7b983-c4aa-4c32-9391-babe90b0f102] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 13:50:18.984470 1157887 system_pods.go:61] "kube-proxy-v59ks" [39a4e73c-319d-4093-8781-ca7a1a48e005] Running
	I0318 13:50:18.984477 1157887 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-569210" [f24baa89-e33d-42ca-8f83-17c76a4cedcb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 13:50:18.984488 1157887 system_pods.go:61] "metrics-server-57f55c9bc5-2sb4m" [f3e533a7-9666-4b79-b9a9-26222422f242] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:50:18.984496 1157887 system_pods.go:61] "storage-provisioner" [864d0bb2-cbca-41ae-b9ec-89aced62dd08] Running
	I0318 13:50:18.984505 1157887 system_pods.go:74] duration metric: took 10.646849ms to wait for pod list to return data ...
	I0318 13:50:18.984519 1157887 node_conditions.go:102] verifying NodePressure condition ...
	I0318 13:50:18.989173 1157887 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:50:18.989201 1157887 node_conditions.go:123] node cpu capacity is 2
	I0318 13:50:18.989213 1157887 node_conditions.go:105] duration metric: took 4.685756ms to run NodePressure ...
	I0318 13:50:18.989231 1157887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:19.229166 1157887 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 13:50:19.237757 1157887 kubeadm.go:733] kubelet initialised
	I0318 13:50:19.237787 1157887 kubeadm.go:734] duration metric: took 8.591388ms waiting for restarted kubelet to initialise ...
	I0318 13:50:19.237797 1157887 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:50:19.243530 1157887 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-hwsz5" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:19.253925 1157887 pod_ready.go:97] node "default-k8s-diff-port-569210" hosting pod "coredns-5dd5756b68-hwsz5" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-569210" has status "Ready":"False"
	I0318 13:50:19.253957 1157887 pod_ready.go:81] duration metric: took 10.403116ms for pod "coredns-5dd5756b68-hwsz5" in "kube-system" namespace to be "Ready" ...
	E0318 13:50:19.253969 1157887 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-569210" hosting pod "coredns-5dd5756b68-hwsz5" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-569210" has status "Ready":"False"
	I0318 13:50:19.253978 1157887 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:19.265167 1157887 pod_ready.go:97] node "default-k8s-diff-port-569210" hosting pod "etcd-default-k8s-diff-port-569210" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-569210" has status "Ready":"False"
	I0318 13:50:19.265189 1157887 pod_ready.go:81] duration metric: took 11.202545ms for pod "etcd-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	E0318 13:50:19.265200 1157887 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-569210" hosting pod "etcd-default-k8s-diff-port-569210" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-569210" has status "Ready":"False"
	I0318 13:50:19.265206 1157887 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:19.273558 1157887 pod_ready.go:97] node "default-k8s-diff-port-569210" hosting pod "kube-apiserver-default-k8s-diff-port-569210" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-569210" has status "Ready":"False"
	I0318 13:50:19.273589 1157887 pod_ready.go:81] duration metric: took 8.37478ms for pod "kube-apiserver-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	E0318 13:50:19.273603 1157887 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-569210" hosting pod "kube-apiserver-default-k8s-diff-port-569210" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-569210" has status "Ready":"False"
	I0318 13:50:19.273615 1157887 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:21.280970 1157887 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:18.363366 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:18.863782 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:19.363987 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:19.863437 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:20.364050 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:20.863961 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:21.364126 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:21.863264 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:22.363519 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:22.863814 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:22.033540 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:22.034056 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:22.034084 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:22.034001 1158806 retry.go:31] will retry after 5.297432528s: waiting for machine to come up
	I0318 13:50:20.708189 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:22.708573 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:24.708632 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:23.281625 1157887 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:25.780754 1157887 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:23.364019 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:23.864134 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:24.363510 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:24.863263 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:25.364027 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:25.863203 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:26.364219 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:26.863262 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:27.363889 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:27.864113 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:27.335390 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.335875 1157263 main.go:141] libmachine: (embed-certs-173036) Found IP for machine: 192.168.50.191
	I0318 13:50:27.335908 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has current primary IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.335918 1157263 main.go:141] libmachine: (embed-certs-173036) Reserving static IP address...
	I0318 13:50:27.336311 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "embed-certs-173036", mac: "52:54:00:e1:4f:b1", ip: "192.168.50.191"} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:27.336360 1157263 main.go:141] libmachine: (embed-certs-173036) Reserved static IP address: 192.168.50.191
	I0318 13:50:27.336380 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | skip adding static IP to network mk-embed-certs-173036 - found existing host DHCP lease matching {name: "embed-certs-173036", mac: "52:54:00:e1:4f:b1", ip: "192.168.50.191"}
	I0318 13:50:27.336394 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | Getting to WaitForSSH function...
	I0318 13:50:27.336406 1157263 main.go:141] libmachine: (embed-certs-173036) Waiting for SSH to be available...
	I0318 13:50:27.338627 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.338948 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:27.338983 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.339087 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | Using SSH client type: external
	I0318 13:50:27.339177 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | Using SSH private key: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa (-rw-------)
	I0318 13:50:27.339212 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.191 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 13:50:27.339227 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | About to run SSH command:
	I0318 13:50:27.339244 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | exit 0
	I0318 13:50:27.468468 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | SSH cmd err, output: <nil>: 
	I0318 13:50:27.468936 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetConfigRaw
	I0318 13:50:27.469699 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetIP
	I0318 13:50:27.472098 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.472422 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:27.472446 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.472714 1157263 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036/config.json ...
	I0318 13:50:27.472955 1157263 machine.go:94] provisionDockerMachine start ...
	I0318 13:50:27.472982 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:50:27.473196 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:27.475516 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.475808 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:27.475831 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.476041 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:27.476252 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:27.476414 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:27.476537 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:27.476719 1157263 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:27.476899 1157263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.191 22 <nil> <nil>}
	I0318 13:50:27.476909 1157263 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 13:50:27.589501 1157263 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 13:50:27.589532 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetMachineName
	I0318 13:50:27.589828 1157263 buildroot.go:166] provisioning hostname "embed-certs-173036"
	I0318 13:50:27.589862 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetMachineName
	I0318 13:50:27.590068 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:27.592650 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.593005 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:27.593035 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.593186 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:27.593375 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:27.593546 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:27.593713 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:27.593883 1157263 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:27.594058 1157263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.191 22 <nil> <nil>}
	I0318 13:50:27.594073 1157263 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-173036 && echo "embed-certs-173036" | sudo tee /etc/hostname
	I0318 13:50:27.730406 1157263 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-173036
	
	I0318 13:50:27.730437 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:27.733420 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.733857 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:27.733890 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.734058 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:27.734271 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:27.734475 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:27.734609 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:27.734764 1157263 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:27.734943 1157263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.191 22 <nil> <nil>}
	I0318 13:50:27.734960 1157263 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-173036' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-173036/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-173036' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 13:50:27.860625 1157263 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:50:27.860679 1157263 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18429-1106816/.minikube CaCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18429-1106816/.minikube}
	I0318 13:50:27.860777 1157263 buildroot.go:174] setting up certificates
	I0318 13:50:27.860790 1157263 provision.go:84] configureAuth start
	I0318 13:50:27.860810 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetMachineName
	I0318 13:50:27.861112 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetIP
	I0318 13:50:27.864215 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.864667 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:27.864703 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.864956 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:27.867381 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.867690 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:27.867730 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.867893 1157263 provision.go:143] copyHostCerts
	I0318 13:50:27.867963 1157263 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem, removing ...
	I0318 13:50:27.867977 1157263 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 13:50:27.868048 1157263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem (1078 bytes)
	I0318 13:50:27.868183 1157263 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem, removing ...
	I0318 13:50:27.868198 1157263 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 13:50:27.868231 1157263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem (1123 bytes)
	I0318 13:50:27.868307 1157263 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem, removing ...
	I0318 13:50:27.868318 1157263 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 13:50:27.868372 1157263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem (1679 bytes)
	I0318 13:50:27.868451 1157263 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem org=jenkins.embed-certs-173036 san=[127.0.0.1 192.168.50.191 embed-certs-173036 localhost minikube]
	I0318 13:50:28.001671 1157263 provision.go:177] copyRemoteCerts
	I0318 13:50:28.001742 1157263 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 13:50:28.001773 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:28.004389 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.004746 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:28.004777 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.005021 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:28.005214 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:28.005393 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:28.005558 1157263 sshutil.go:53] new ssh client: &{IP:192.168.50.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa Username:docker}
	I0318 13:50:28.095871 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0318 13:50:28.127356 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 13:50:28.157301 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 13:50:28.186185 1157263 provision.go:87] duration metric: took 325.374328ms to configureAuth
	I0318 13:50:28.186217 1157263 buildroot.go:189] setting minikube options for container-runtime
	I0318 13:50:28.186424 1157263 config.go:182] Loaded profile config "embed-certs-173036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:50:28.186529 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:28.189135 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.189532 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:28.189564 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.189719 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:28.189933 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:28.190127 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:28.190335 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:28.190492 1157263 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:28.190654 1157263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.191 22 <nil> <nil>}
	I0318 13:50:28.190668 1157263 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 13:50:28.473836 1157263 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 13:50:28.473875 1157263 machine.go:97] duration metric: took 1.000902962s to provisionDockerMachine
	I0318 13:50:28.473887 1157263 start.go:293] postStartSetup for "embed-certs-173036" (driver="kvm2")
	I0318 13:50:28.473898 1157263 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 13:50:28.473914 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:50:28.474270 1157263 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 13:50:28.474307 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:28.477165 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.477571 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:28.477619 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.477756 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:28.477966 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:28.478135 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:28.478296 1157263 sshutil.go:53] new ssh client: &{IP:192.168.50.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa Username:docker}
	I0318 13:50:28.568988 1157263 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 13:50:28.573759 1157263 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 13:50:28.573782 1157263 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/addons for local assets ...
	I0318 13:50:28.573839 1157263 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/files for local assets ...
	I0318 13:50:28.573909 1157263 filesync.go:149] local asset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> 11141362.pem in /etc/ssl/certs
	I0318 13:50:28.573989 1157263 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 13:50:28.584049 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:50:28.610999 1157263 start.go:296] duration metric: took 137.09711ms for postStartSetup
	I0318 13:50:28.611043 1157263 fix.go:56] duration metric: took 24.300980779s for fixHost
	I0318 13:50:28.611066 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:28.614123 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.614582 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:28.614628 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.614795 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:28.614999 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:28.615124 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:28.615255 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:28.615427 1157263 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:28.615617 1157263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.191 22 <nil> <nil>}
	I0318 13:50:28.615631 1157263 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 13:50:28.729856 1157263 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710769828.678644307
	
	I0318 13:50:28.729894 1157263 fix.go:216] guest clock: 1710769828.678644307
	I0318 13:50:28.729913 1157263 fix.go:229] Guest: 2024-03-18 13:50:28.678644307 +0000 UTC Remote: 2024-03-18 13:50:28.611048079 +0000 UTC m=+364.845703282 (delta=67.596228ms)
	I0318 13:50:28.729932 1157263 fix.go:200] guest clock delta is within tolerance: 67.596228ms
	I0318 13:50:28.729937 1157263 start.go:83] releasing machines lock for "embed-certs-173036", held for 24.419922158s
	I0318 13:50:28.729958 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:50:28.730241 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetIP
	I0318 13:50:28.732831 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.733196 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:28.733249 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.733406 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:50:28.733875 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:50:28.734066 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:50:28.734172 1157263 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 13:50:28.734248 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:28.734330 1157263 ssh_runner.go:195] Run: cat /version.json
	I0318 13:50:28.734376 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:28.737014 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.737200 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.737444 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:28.737470 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.737611 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:28.737694 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:28.737721 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.737918 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:28.737926 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:28.738117 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:28.738195 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:28.738292 1157263 sshutil.go:53] new ssh client: &{IP:192.168.50.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa Username:docker}
	I0318 13:50:28.738357 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:28.738466 1157263 sshutil.go:53] new ssh client: &{IP:192.168.50.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa Username:docker}
	I0318 13:50:26.708824 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:29.209974 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:28.818695 1157263 ssh_runner.go:195] Run: systemctl --version
	I0318 13:50:28.844173 1157263 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 13:50:28.995017 1157263 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 13:50:29.002150 1157263 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 13:50:29.002251 1157263 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 13:50:29.021165 1157263 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 13:50:29.021200 1157263 start.go:494] detecting cgroup driver to use...
	I0318 13:50:29.021286 1157263 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 13:50:29.039060 1157263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:50:29.053451 1157263 docker.go:217] disabling cri-docker service (if available) ...
	I0318 13:50:29.053521 1157263 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 13:50:29.069721 1157263 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 13:50:29.085285 1157263 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 13:50:29.201273 1157263 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 13:50:29.356314 1157263 docker.go:233] disabling docker service ...
	I0318 13:50:29.356406 1157263 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 13:50:29.374159 1157263 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 13:50:29.390280 1157263 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 13:50:29.542126 1157263 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 13:50:29.692068 1157263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 13:50:29.707760 1157263 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:50:29.735684 1157263 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 13:50:29.735753 1157263 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:50:29.751291 1157263 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 13:50:29.751365 1157263 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:50:29.763159 1157263 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:50:29.774837 1157263 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:50:29.787142 1157263 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 13:50:29.799773 1157263 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 13:50:29.810620 1157263 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 13:50:29.810691 1157263 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 13:50:29.826816 1157263 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 13:50:29.842059 1157263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:50:29.985531 1157263 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 13:50:30.147122 1157263 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 13:50:30.147191 1157263 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 13:50:30.152406 1157263 start.go:562] Will wait 60s for crictl version
	I0318 13:50:30.152468 1157263 ssh_runner.go:195] Run: which crictl
	I0318 13:50:30.157019 1157263 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 13:50:30.199810 1157263 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 13:50:30.199889 1157263 ssh_runner.go:195] Run: crio --version
	I0318 13:50:30.232028 1157263 ssh_runner.go:195] Run: crio --version
	I0318 13:50:30.270484 1157263 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 13:50:27.781584 1157887 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:29.795969 1157887 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:31.282868 1157887 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:31.282899 1157887 pod_ready.go:81] duration metric: took 12.009270978s for pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:31.282910 1157887 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-v59ks" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:31.290886 1157887 pod_ready.go:92] pod "kube-proxy-v59ks" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:31.290917 1157887 pod_ready.go:81] duration metric: took 7.99936ms for pod "kube-proxy-v59ks" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:31.290931 1157887 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:31.300197 1157887 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:31.300235 1157887 pod_ready.go:81] duration metric: took 9.294232ms for pod "kube-scheduler-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:31.300254 1157887 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:28.364069 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:28.863405 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:29.363996 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:29.863574 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:30.363749 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:30.863564 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:31.363250 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:31.863320 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:32.363894 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:32.864166 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:30.271939 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetIP
	I0318 13:50:30.275084 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:30.275682 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:30.275728 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:30.276045 1157263 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0318 13:50:30.282421 1157263 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:50:30.299013 1157263 kubeadm.go:877] updating cluster {Name:embed-certs-173036 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-173036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.191 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 13:50:30.299280 1157263 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 13:50:30.299364 1157263 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:50:30.349617 1157263 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0318 13:50:30.349720 1157263 ssh_runner.go:195] Run: which lz4
	I0318 13:50:30.354659 1157263 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0318 13:50:30.359861 1157263 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 13:50:30.359903 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0318 13:50:32.362707 1157263 crio.go:444] duration metric: took 2.008087158s to copy over tarball
	I0318 13:50:32.362796 1157263 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 13:50:31.210766 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:33.709661 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:33.308081 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:35.309291 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:33.363425 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:33.864021 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:34.363963 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:34.864011 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:35.364122 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:35.863559 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:36.364154 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:36.863814 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:37.364232 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:37.863934 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:35.265803 1157263 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.902966349s)
	I0318 13:50:35.265827 1157263 crio.go:451] duration metric: took 2.903086385s to extract the tarball
	I0318 13:50:35.265835 1157263 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 13:50:35.313875 1157263 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:50:35.378361 1157263 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 13:50:35.378392 1157263 cache_images.go:84] Images are preloaded, skipping loading
	I0318 13:50:35.378408 1157263 kubeadm.go:928] updating node { 192.168.50.191 8443 v1.28.4 crio true true} ...
	I0318 13:50:35.378551 1157263 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-173036 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.191
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-173036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 13:50:35.378648 1157263 ssh_runner.go:195] Run: crio config
	I0318 13:50:35.443472 1157263 cni.go:84] Creating CNI manager for ""
	I0318 13:50:35.443501 1157263 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:50:35.443520 1157263 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 13:50:35.443551 1157263 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.191 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-173036 NodeName:embed-certs-173036 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.191"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.191 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 13:50:35.443730 1157263 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.191
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-173036"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.191
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.191"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 13:50:35.443809 1157263 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 13:50:35.455284 1157263 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 13:50:35.455352 1157263 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 13:50:35.465886 1157263 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0318 13:50:35.487345 1157263 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 13:50:35.507361 1157263 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0318 13:50:35.528055 1157263 ssh_runner.go:195] Run: grep 192.168.50.191	control-plane.minikube.internal$ /etc/hosts
	I0318 13:50:35.533287 1157263 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.191	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:50:35.548295 1157263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:50:35.684165 1157263 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:50:35.703884 1157263 certs.go:68] Setting up /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036 for IP: 192.168.50.191
	I0318 13:50:35.703910 1157263 certs.go:194] generating shared ca certs ...
	I0318 13:50:35.703927 1157263 certs.go:226] acquiring lock for ca certs: {Name:mka24dbce78b8549c3bcfe7842863cce2dcda207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:50:35.704117 1157263 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key
	I0318 13:50:35.704186 1157263 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key
	I0318 13:50:35.704200 1157263 certs.go:256] generating profile certs ...
	I0318 13:50:35.704292 1157263 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036/client.key
	I0318 13:50:35.704406 1157263 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036/apiserver.key.527b6b30
	I0318 13:50:35.704472 1157263 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036/proxy-client.key
	I0318 13:50:35.704637 1157263 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem (1338 bytes)
	W0318 13:50:35.704680 1157263 certs.go:480] ignoring /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136_empty.pem, impossibly tiny 0 bytes
	I0318 13:50:35.704694 1157263 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 13:50:35.704729 1157263 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem (1078 bytes)
	I0318 13:50:35.704763 1157263 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem (1123 bytes)
	I0318 13:50:35.704796 1157263 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem (1679 bytes)
	I0318 13:50:35.704857 1157263 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:50:35.705836 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 13:50:35.768912 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 13:50:35.830564 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 13:50:35.877813 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 13:50:35.916756 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0318 13:50:35.948397 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 13:50:35.980450 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 13:50:36.009626 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 13:50:36.040155 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 13:50:36.068885 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem --> /usr/share/ca-certificates/1114136.pem (1338 bytes)
	I0318 13:50:36.098638 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /usr/share/ca-certificates/11141362.pem (1708 bytes)
	I0318 13:50:36.128423 1157263 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 13:50:36.149584 1157263 ssh_runner.go:195] Run: openssl version
	I0318 13:50:36.156347 1157263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 13:50:36.169729 1157263 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:50:36.175367 1157263 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:50:36.175438 1157263 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:50:36.181995 1157263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 13:50:36.193987 1157263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1114136.pem && ln -fs /usr/share/ca-certificates/1114136.pem /etc/ssl/certs/1114136.pem"
	I0318 13:50:36.206444 1157263 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1114136.pem
	I0318 13:50:36.212355 1157263 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:26 /usr/share/ca-certificates/1114136.pem
	I0318 13:50:36.212442 1157263 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1114136.pem
	I0318 13:50:36.219042 1157263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1114136.pem /etc/ssl/certs/51391683.0"
	I0318 13:50:36.231882 1157263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11141362.pem && ln -fs /usr/share/ca-certificates/11141362.pem /etc/ssl/certs/11141362.pem"
	I0318 13:50:36.244590 1157263 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11141362.pem
	I0318 13:50:36.250443 1157263 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:26 /usr/share/ca-certificates/11141362.pem
	I0318 13:50:36.250511 1157263 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11141362.pem
	I0318 13:50:36.257713 1157263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11141362.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 13:50:36.271026 1157263 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:50:36.276902 1157263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 13:50:36.285465 1157263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 13:50:36.294274 1157263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 13:50:36.302415 1157263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 13:50:36.310867 1157263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 13:50:36.318931 1157263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 13:50:36.327627 1157263 kubeadm.go:391] StartCluster: {Name:embed-certs-173036 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-173036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.191 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:50:36.327781 1157263 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 13:50:36.327843 1157263 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:50:36.376644 1157263 cri.go:89] found id: ""
	I0318 13:50:36.376741 1157263 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 13:50:36.389506 1157263 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 13:50:36.389528 1157263 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 13:50:36.389533 1157263 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 13:50:36.389640 1157263 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 13:50:36.401386 1157263 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:50:36.402631 1157263 kubeconfig.go:125] found "embed-certs-173036" server: "https://192.168.50.191:8443"
	I0318 13:50:36.404833 1157263 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 13:50:36.416975 1157263 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.191
	I0318 13:50:36.417026 1157263 kubeadm.go:1154] stopping kube-system containers ...
	I0318 13:50:36.417041 1157263 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 13:50:36.417106 1157263 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:50:36.458072 1157263 cri.go:89] found id: ""
	I0318 13:50:36.458162 1157263 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 13:50:36.476557 1157263 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:50:36.487765 1157263 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:50:36.487791 1157263 kubeadm.go:156] found existing configuration files:
	
	I0318 13:50:36.487857 1157263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:50:36.498903 1157263 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:50:36.498982 1157263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:50:36.510205 1157263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:50:36.520423 1157263 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:50:36.520476 1157263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:50:36.531864 1157263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:50:36.542058 1157263 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:50:36.542131 1157263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:50:36.552807 1157263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:50:36.562840 1157263 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:50:36.562915 1157263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:50:36.573581 1157263 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:50:36.583760 1157263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:36.719884 1157263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:37.681007 1157263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:37.914386 1157263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:37.993967 1157263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:38.101144 1157263 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:50:38.101261 1157263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:38.602138 1157263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:35.711725 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:38.207993 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:37.807508 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:39.809153 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:38.363994 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:38.863278 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:39.363665 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:39.863948 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:40.364081 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:40.864124 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:41.363964 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:41.863593 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:42.363750 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:42.864002 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:39.102040 1157263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:39.212769 1157263 api_server.go:72] duration metric: took 1.111626123s to wait for apiserver process to appear ...
	I0318 13:50:39.212807 1157263 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:50:39.212840 1157263 api_server.go:253] Checking apiserver healthz at https://192.168.50.191:8443/healthz ...
	I0318 13:50:39.213446 1157263 api_server.go:269] stopped: https://192.168.50.191:8443/healthz: Get "https://192.168.50.191:8443/healthz": dial tcp 192.168.50.191:8443: connect: connection refused
	I0318 13:50:39.713482 1157263 api_server.go:253] Checking apiserver healthz at https://192.168.50.191:8443/healthz ...
	I0318 13:50:42.646306 1157263 api_server.go:279] https://192.168.50.191:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 13:50:42.646352 1157263 api_server.go:103] status: https://192.168.50.191:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 13:50:42.646370 1157263 api_server.go:253] Checking apiserver healthz at https://192.168.50.191:8443/healthz ...
	I0318 13:50:42.691920 1157263 api_server.go:279] https://192.168.50.191:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 13:50:42.691953 1157263 api_server.go:103] status: https://192.168.50.191:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 13:50:42.713082 1157263 api_server.go:253] Checking apiserver healthz at https://192.168.50.191:8443/healthz ...
	I0318 13:50:42.770065 1157263 api_server.go:279] https://192.168.50.191:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:50:42.770101 1157263 api_server.go:103] status: https://192.168.50.191:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:50:43.213524 1157263 api_server.go:253] Checking apiserver healthz at https://192.168.50.191:8443/healthz ...
	I0318 13:50:43.224669 1157263 api_server.go:279] https://192.168.50.191:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:50:43.224710 1157263 api_server.go:103] status: https://192.168.50.191:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:50:43.712987 1157263 api_server.go:253] Checking apiserver healthz at https://192.168.50.191:8443/healthz ...
	I0318 13:50:43.718490 1157263 api_server.go:279] https://192.168.50.191:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:50:43.718533 1157263 api_server.go:103] status: https://192.168.50.191:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:50:44.213026 1157263 api_server.go:253] Checking apiserver healthz at https://192.168.50.191:8443/healthz ...
	I0318 13:50:44.217876 1157263 api_server.go:279] https://192.168.50.191:8443/healthz returned 200:
	ok
	I0318 13:50:44.225562 1157263 api_server.go:141] control plane version: v1.28.4
	I0318 13:50:44.225588 1157263 api_server.go:131] duration metric: took 5.012774227s to wait for apiserver health ...
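	The sequence above is the normal startup shape for a restarted apiserver: the TCP connect is refused at first, anonymous requests to /healthz then get a 403, 500s follow while the post-start hooks (rbac/bootstrap-roles, bootstrap-controller, and so on) finish, and finally a 200. The same endpoint can be probed by hand; the context name below assumes the usual minikube convention of naming the kubectl context after the profile:

	    $ kubectl --context embed-certs-173036 get --raw='/healthz?verbose'
	    $ curl -ks https://192.168.50.191:8443/healthz        # unauthenticated, so expect the same 403 body as above

	The ?verbose form lists every check individually even when the result is ok; on failure, as in this log, the endpoint returns the per-check [+]/[-] breakdown by default.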
	I0318 13:50:44.225610 1157263 cni.go:84] Creating CNI manager for ""
	I0318 13:50:44.225618 1157263 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:50:44.227565 1157263 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 13:50:40.210029 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:42.210435 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:44.710674 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:41.811414 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:43.818645 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:46.308757 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
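	The parallel clusters in this run keep polling metrics-server pods (...-hhh5m, ...-2sb4m and, later, ...-5cv2z) that never reach Ready; the roughly 2-2.5 s cadence is the pod_ready retry loop. The log never shows why the container stays unready. The usual next steps are not part of this log, and the APIService name below is the one metrics-server conventionally registers, assumed here rather than read from the run:

	    $ kubectl -n kube-system describe pod metrics-server-57f55c9bc5-2sb4m
	    $ kubectl -n kube-system logs deploy/metrics-server
	    $ kubectl get apiservice v1beta1.metrics.k8s.io -o wide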
	I0318 13:50:43.364189 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:43.863868 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:44.363454 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:44.863940 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:45.363913 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:45.863288 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:46.363884 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:46.863361 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:47.363383 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:47.864064 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:44.229055 1157263 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 13:50:44.260389 1157263 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
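	The conflist minikube just copied is identified only by its size (457 bytes); its contents are not captured in the log. For orientation, a minimal bridge + host-local configuration of the general shape minikube generates looks roughly like the following; the subnet and exact plugin list are illustrative, not taken from this run:

	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }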
	I0318 13:50:44.310001 1157263 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 13:50:44.327281 1157263 system_pods.go:59] 8 kube-system pods found
	I0318 13:50:44.327330 1157263 system_pods.go:61] "coredns-5dd5756b68-zsfvm" [1404c3fe-6538-4aaf-80f5-599275240731] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 13:50:44.327342 1157263 system_pods.go:61] "etcd-embed-certs-173036" [254a577c-bd3b-4645-9c92-1479b0c6d0c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 13:50:44.327354 1157263 system_pods.go:61] "kube-apiserver-embed-certs-173036" [5a738280-05ba-413e-a288-4c4d07ddbd7d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 13:50:44.327362 1157263 system_pods.go:61] "kube-controller-manager-embed-certs-173036" [f48cfb7f-1efe-4941-b328-2358c7a5cced] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 13:50:44.327369 1157263 system_pods.go:61] "kube-proxy-xqf68" [969de4e5-fc60-4d46-b336-49f22a9b6c38] Running
	I0318 13:50:44.327376 1157263 system_pods.go:61] "kube-scheduler-embed-certs-173036" [e0579c16-de3e-4915-9ed2-f69b53f6f884] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 13:50:44.327385 1157263 system_pods.go:61] "metrics-server-57f55c9bc5-5cv2z" [85649bfb-f91f-4bfe-9356-d540ac3d6a68] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:50:44.327392 1157263 system_pods.go:61] "storage-provisioner" [0c1ec131-0f6c-4e01-aaec-5011f1a4fe75] Running
	I0318 13:50:44.327410 1157263 system_pods.go:74] duration metric: took 17.376754ms to wait for pod list to return data ...
	I0318 13:50:44.327423 1157263 node_conditions.go:102] verifying NodePressure condition ...
	I0318 13:50:44.332965 1157263 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:50:44.332997 1157263 node_conditions.go:123] node cpu capacity is 2
	I0318 13:50:44.333008 1157263 node_conditions.go:105] duration metric: took 5.580934ms to run NodePressure ...
	I0318 13:50:44.333027 1157263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:44.573923 1157263 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 13:50:44.578504 1157263 kubeadm.go:733] kubelet initialised
	I0318 13:50:44.578526 1157263 kubeadm.go:734] duration metric: took 4.577181ms waiting for restarted kubelet to initialise ...
	I0318 13:50:44.578534 1157263 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:50:44.584361 1157263 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-zsfvm" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:44.591714 1157263 pod_ready.go:97] node "embed-certs-173036" hosting pod "coredns-5dd5756b68-zsfvm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-173036" has status "Ready":"False"
	I0318 13:50:44.591739 1157263 pod_ready.go:81] duration metric: took 7.35191ms for pod "coredns-5dd5756b68-zsfvm" in "kube-system" namespace to be "Ready" ...
	E0318 13:50:44.591746 1157263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-173036" hosting pod "coredns-5dd5756b68-zsfvm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-173036" has status "Ready":"False"
	I0318 13:50:44.591753 1157263 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:44.597618 1157263 pod_ready.go:97] node "embed-certs-173036" hosting pod "etcd-embed-certs-173036" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-173036" has status "Ready":"False"
	I0318 13:50:44.597641 1157263 pod_ready.go:81] duration metric: took 5.880276ms for pod "etcd-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	E0318 13:50:44.597649 1157263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-173036" hosting pod "etcd-embed-certs-173036" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-173036" has status "Ready":"False"
	I0318 13:50:44.597655 1157263 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:44.604124 1157263 pod_ready.go:97] node "embed-certs-173036" hosting pod "kube-apiserver-embed-certs-173036" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-173036" has status "Ready":"False"
	I0318 13:50:44.604148 1157263 pod_ready.go:81] duration metric: took 6.484251ms for pod "kube-apiserver-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	E0318 13:50:44.604157 1157263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-173036" hosting pod "kube-apiserver-embed-certs-173036" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-173036" has status "Ready":"False"
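	Each of these waits is skipped rather than failed: pod_ready declines to block on individual pods while the node hosting them still reports Ready=False, and the per-pod checks resume once the kubelet marks the node Ready again. The node condition can be read directly; the context name is assumed to match the profile, as above:

	    $ kubectl --context embed-certs-173036 get node embed-certs-173036 \
	        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'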
	I0318 13:50:44.604164 1157263 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:46.611326 1157263 pod_ready.go:102] pod "kube-controller-manager-embed-certs-173036" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:47.209538 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:49.708718 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:48.309157 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:50.808340 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:48.363218 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:48.864086 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:49.363457 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:49.863292 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:50.363308 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:50.863428 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:51.363583 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:51.863562 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:52.363995 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:52.863463 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:49.111834 1157263 pod_ready.go:102] pod "kube-controller-manager-embed-certs-173036" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:50.114329 1157263 pod_ready.go:92] pod "kube-controller-manager-embed-certs-173036" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:50.114356 1157263 pod_ready.go:81] duration metric: took 5.510175425s for pod "kube-controller-manager-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:50.114369 1157263 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xqf68" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:50.133169 1157263 pod_ready.go:92] pod "kube-proxy-xqf68" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:50.133196 1157263 pod_ready.go:81] duration metric: took 18.819059ms for pod "kube-proxy-xqf68" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:50.133208 1157263 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:52.144639 1157263 pod_ready.go:102] pod "kube-scheduler-embed-certs-173036" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:51.709823 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:54.207738 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:53.311033 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:55.311439 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:53.363919 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:53.863936 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:54.363671 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:54.863567 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:50:54.863709 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:50:54.911905 1157708 cri.go:89] found id: ""
	I0318 13:50:54.911942 1157708 logs.go:276] 0 containers: []
	W0318 13:50:54.911954 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:50:54.911962 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:50:54.912031 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:50:54.962141 1157708 cri.go:89] found id: ""
	I0318 13:50:54.962170 1157708 logs.go:276] 0 containers: []
	W0318 13:50:54.962182 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:50:54.962188 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:50:54.962269 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:50:55.001597 1157708 cri.go:89] found id: ""
	I0318 13:50:55.001639 1157708 logs.go:276] 0 containers: []
	W0318 13:50:55.001652 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:50:55.001660 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:50:55.001725 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:50:55.042660 1157708 cri.go:89] found id: ""
	I0318 13:50:55.042695 1157708 logs.go:276] 0 containers: []
	W0318 13:50:55.042708 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:50:55.042716 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:50:55.042775 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:50:55.082095 1157708 cri.go:89] found id: ""
	I0318 13:50:55.082128 1157708 logs.go:276] 0 containers: []
	W0318 13:50:55.082139 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:50:55.082146 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:50:55.082211 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:50:55.120938 1157708 cri.go:89] found id: ""
	I0318 13:50:55.120969 1157708 logs.go:276] 0 containers: []
	W0318 13:50:55.121000 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:50:55.121008 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:50:55.121081 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:50:55.159247 1157708 cri.go:89] found id: ""
	I0318 13:50:55.159280 1157708 logs.go:276] 0 containers: []
	W0318 13:50:55.159292 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:50:55.159300 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:50:55.159366 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:50:55.200130 1157708 cri.go:89] found id: ""
	I0318 13:50:55.200161 1157708 logs.go:276] 0 containers: []
	W0318 13:50:55.200170 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
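	With its apiserver still down, the cluster using the v1.20.0 binaries (process 1157708) stops asking the API and queries CRI-O directly for every control-plane component; each query returns no containers at all, which is why its retry loop keeps going. The per-component command is the one logged above and can be repeated by hand inside the guest:

	    $ sudo crictl ps -a --quiet --name=kube-apiserver
	    $ sudo crictl ps -a        # unfiltered view of every container CRI-O knows about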
	I0318 13:50:55.200180 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:50:55.200193 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:50:55.254113 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:50:55.254154 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:50:55.268984 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:50:55.269027 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:50:55.402079 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:50:55.402106 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:50:55.402123 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:50:55.468627 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:50:55.468674 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
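	Because nothing is listening on localhost:8443 yet, the "describe nodes" step above fails with connection refused, so the only diagnostics this pass can collect are host-side: the kubelet and CRI-O journals, dmesg, and the raw container list. The same bundle can be gathered manually from inside the guest with the commands already shown in the log:

	    $ sudo journalctl -u kubelet -n 400
	    $ sudo journalctl -u crio -n 400
	    $ sudo crictl ps -a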
	I0318 13:50:54.143220 1157263 pod_ready.go:92] pod "kube-scheduler-embed-certs-173036" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:54.143247 1157263 pod_ready.go:81] duration metric: took 4.010031997s for pod "kube-scheduler-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:54.143258 1157263 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:56.151615 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:58.650293 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:56.208339 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:58.209144 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:57.810894 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:00.308972 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:58.016860 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:58.031684 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:50:58.031747 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:50:58.073389 1157708 cri.go:89] found id: ""
	I0318 13:50:58.073415 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.073427 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:50:58.073434 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:50:58.073497 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:50:58.114439 1157708 cri.go:89] found id: ""
	I0318 13:50:58.114471 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.114483 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:50:58.114490 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:50:58.114553 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:50:58.165440 1157708 cri.go:89] found id: ""
	I0318 13:50:58.165466 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.165476 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:50:58.165484 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:50:58.165569 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:50:58.207083 1157708 cri.go:89] found id: ""
	I0318 13:50:58.207117 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.207129 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:50:58.207137 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:50:58.207227 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:50:58.252945 1157708 cri.go:89] found id: ""
	I0318 13:50:58.252973 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.252985 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:50:58.252993 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:50:58.253055 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:50:58.292437 1157708 cri.go:89] found id: ""
	I0318 13:50:58.292464 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.292474 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:50:58.292480 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:50:58.292530 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:50:58.335359 1157708 cri.go:89] found id: ""
	I0318 13:50:58.335403 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.335415 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:50:58.335423 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:50:58.335511 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:50:58.381434 1157708 cri.go:89] found id: ""
	I0318 13:50:58.381473 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.381484 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:50:58.381494 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:50:58.381511 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:50:58.432270 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:50:58.432319 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:50:58.447658 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:50:58.447686 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:50:58.523163 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:50:58.523186 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:50:58.523207 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:50:58.599544 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:50:58.599586 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:01.141653 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:01.156996 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:01.157070 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:01.192720 1157708 cri.go:89] found id: ""
	I0318 13:51:01.192762 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.192775 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:01.192785 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:01.192866 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:01.232678 1157708 cri.go:89] found id: ""
	I0318 13:51:01.232705 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.232716 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:01.232723 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:01.232795 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:01.270637 1157708 cri.go:89] found id: ""
	I0318 13:51:01.270666 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.270676 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:01.270684 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:01.270746 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:01.308891 1157708 cri.go:89] found id: ""
	I0318 13:51:01.308921 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.308931 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:01.308939 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:01.309003 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:01.349301 1157708 cri.go:89] found id: ""
	I0318 13:51:01.349334 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.349346 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:01.349354 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:01.349420 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:01.394010 1157708 cri.go:89] found id: ""
	I0318 13:51:01.394039 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.394047 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:01.394053 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:01.394103 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:01.432778 1157708 cri.go:89] found id: ""
	I0318 13:51:01.432804 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.432815 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:01.432823 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:01.432886 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:01.471974 1157708 cri.go:89] found id: ""
	I0318 13:51:01.472002 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.472011 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:01.472022 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:01.472040 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:01.524855 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:01.524893 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:01.540939 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:01.540967 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:01.618318 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:01.618350 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:01.618367 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:01.695717 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:01.695755 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:00.650906 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:02.651512 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:00.211620 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:02.708336 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:02.312320 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:04.808301 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:04.241781 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:04.256276 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:04.256373 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:04.297129 1157708 cri.go:89] found id: ""
	I0318 13:51:04.297158 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.297170 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:04.297179 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:04.297247 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:04.341743 1157708 cri.go:89] found id: ""
	I0318 13:51:04.341774 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.341786 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:04.341793 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:04.341858 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:04.384400 1157708 cri.go:89] found id: ""
	I0318 13:51:04.384434 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.384445 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:04.384453 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:04.384510 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:04.425459 1157708 cri.go:89] found id: ""
	I0318 13:51:04.425487 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.425500 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:04.425510 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:04.425563 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:04.463091 1157708 cri.go:89] found id: ""
	I0318 13:51:04.463125 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.463137 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:04.463145 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:04.463210 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:04.503023 1157708 cri.go:89] found id: ""
	I0318 13:51:04.503057 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.503069 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:04.503077 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:04.503141 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:04.542083 1157708 cri.go:89] found id: ""
	I0318 13:51:04.542116 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.542127 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:04.542136 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:04.542207 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:04.583097 1157708 cri.go:89] found id: ""
	I0318 13:51:04.583128 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.583137 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:04.583146 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:04.583161 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:04.650476 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:04.650518 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:04.706073 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:04.706111 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:04.723595 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:04.723628 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:04.800278 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:04.800301 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:04.800316 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:07.388144 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:07.403636 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:07.403711 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:07.443337 1157708 cri.go:89] found id: ""
	I0318 13:51:07.443365 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.443379 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:07.443386 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:07.443442 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:07.482417 1157708 cri.go:89] found id: ""
	I0318 13:51:07.482453 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.482462 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:07.482469 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:07.482521 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:07.518445 1157708 cri.go:89] found id: ""
	I0318 13:51:07.518474 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.518485 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:07.518493 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:07.518563 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:07.555628 1157708 cri.go:89] found id: ""
	I0318 13:51:07.555661 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.555673 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:07.555681 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:07.555760 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:07.593805 1157708 cri.go:89] found id: ""
	I0318 13:51:07.593842 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.593856 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:07.593873 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:07.593936 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:07.638206 1157708 cri.go:89] found id: ""
	I0318 13:51:07.638234 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.638242 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:07.638249 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:07.638313 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:07.679526 1157708 cri.go:89] found id: ""
	I0318 13:51:07.679561 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.679573 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:07.679581 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:07.679635 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:07.724468 1157708 cri.go:89] found id: ""
	I0318 13:51:07.724494 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.724504 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:07.724516 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:07.724533 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:07.766491 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:07.766522 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:07.823782 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:07.823833 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:07.839316 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:07.839342 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:07.924790 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:07.924821 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:07.924841 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:05.151629 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:07.651485 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:05.210455 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:07.709381 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:07.310000 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:09.808337 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:10.513618 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:10.528711 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:10.528790 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:10.571217 1157708 cri.go:89] found id: ""
	I0318 13:51:10.571254 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.571267 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:10.571275 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:10.571335 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:10.608096 1157708 cri.go:89] found id: ""
	I0318 13:51:10.608129 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.608140 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:10.608149 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:10.608217 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:10.649245 1157708 cri.go:89] found id: ""
	I0318 13:51:10.649274 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.649283 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:10.649290 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:10.649365 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:10.693462 1157708 cri.go:89] found id: ""
	I0318 13:51:10.693495 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.693506 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:10.693515 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:10.693589 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:10.740434 1157708 cri.go:89] found id: ""
	I0318 13:51:10.740464 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.740474 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:10.740480 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:10.740543 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:10.781062 1157708 cri.go:89] found id: ""
	I0318 13:51:10.781099 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.781108 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:10.781114 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:10.781167 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:10.828480 1157708 cri.go:89] found id: ""
	I0318 13:51:10.828513 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.828524 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:10.828532 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:10.828605 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:10.868508 1157708 cri.go:89] found id: ""
	I0318 13:51:10.868535 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.868543 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:10.868553 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:10.868565 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:10.923925 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:10.923961 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:10.939254 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:10.939283 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:11.031307 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:11.031334 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:11.031351 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:11.121563 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:11.121618 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
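
	The cycle above is minikube retrying its control-plane check on this node: each pass lists CRI containers for every expected component, finds none, and falls back to collecting kubelet, dmesg, describe-nodes, CRI-O and container-status output. A minimal sketch of the same check run by hand, assuming shell access to the node through minikube ssh (the profile name below is a placeholder, not taken from this run; the commands themselves are the ones the log shows):

	  # Look for the apiserver and etcd containers; empty output matches the
	  # "No container was found matching ..." warnings above.
	  minikube ssh -p <profile> "sudo crictl ps -a --name=kube-apiserver"
	  minikube ssh -p <profile> "sudo crictl ps -a --name=etcd"
	  # The same kubectl call the log runs; with no apiserver listening it fails
	  # with "The connection to the server localhost:8443 was refused".
	  minikube ssh -p <profile> "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
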
	I0318 13:51:10.151278 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:12.650083 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:10.209877 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:12.709070 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:12.308084 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:14.309651 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:16.312985 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
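
	The pod_ready.go lines interleaved here come from the other profiles in the same parallel run (process ids 1157263, 1157416 and 1157887), each polling a metrics-server pod whose Ready condition stays False. A hedged sketch of the equivalent manual check, using a pod name taken from the log (the --context value is a placeholder):

	  # Print the Ready condition; it stays "False" while the pod is unready.
	  kubectl --context <profile> -n kube-system get pod metrics-server-57f55c9bc5-2sb4m \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	  # Or block until it becomes ready, mirroring the test's wait loop:
	  kubectl --context <profile> -n kube-system wait --for=condition=Ready \
	    pod/metrics-server-57f55c9bc5-2sb4m --timeout=60s
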
	I0318 13:51:13.681147 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:13.696705 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:13.696812 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:13.740904 1157708 cri.go:89] found id: ""
	I0318 13:51:13.740937 1157708 logs.go:276] 0 containers: []
	W0318 13:51:13.740949 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:13.740957 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:13.741038 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:13.779625 1157708 cri.go:89] found id: ""
	I0318 13:51:13.779659 1157708 logs.go:276] 0 containers: []
	W0318 13:51:13.779672 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:13.779681 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:13.779762 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:13.822183 1157708 cri.go:89] found id: ""
	I0318 13:51:13.822218 1157708 logs.go:276] 0 containers: []
	W0318 13:51:13.822231 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:13.822239 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:13.822302 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:13.873686 1157708 cri.go:89] found id: ""
	I0318 13:51:13.873728 1157708 logs.go:276] 0 containers: []
	W0318 13:51:13.873741 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:13.873749 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:13.873821 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:13.919772 1157708 cri.go:89] found id: ""
	I0318 13:51:13.919802 1157708 logs.go:276] 0 containers: []
	W0318 13:51:13.919811 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:13.919817 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:13.919874 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:13.958809 1157708 cri.go:89] found id: ""
	I0318 13:51:13.958837 1157708 logs.go:276] 0 containers: []
	W0318 13:51:13.958846 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:13.958852 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:13.958928 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:14.000537 1157708 cri.go:89] found id: ""
	I0318 13:51:14.000568 1157708 logs.go:276] 0 containers: []
	W0318 13:51:14.000580 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:14.000588 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:14.000638 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:14.041234 1157708 cri.go:89] found id: ""
	I0318 13:51:14.041265 1157708 logs.go:276] 0 containers: []
	W0318 13:51:14.041275 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:14.041285 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:14.041299 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:14.085435 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:14.085462 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:14.144336 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:14.144374 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:14.159972 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:14.160000 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:14.242027 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:14.242048 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:14.242061 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:16.821805 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:16.840202 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:16.840272 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:16.898088 1157708 cri.go:89] found id: ""
	I0318 13:51:16.898120 1157708 logs.go:276] 0 containers: []
	W0318 13:51:16.898129 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:16.898135 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:16.898203 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:16.953180 1157708 cri.go:89] found id: ""
	I0318 13:51:16.953209 1157708 logs.go:276] 0 containers: []
	W0318 13:51:16.953221 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:16.953229 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:16.953288 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:17.006995 1157708 cri.go:89] found id: ""
	I0318 13:51:17.007048 1157708 logs.go:276] 0 containers: []
	W0318 13:51:17.007062 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:17.007070 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:17.007136 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:17.049756 1157708 cri.go:89] found id: ""
	I0318 13:51:17.049798 1157708 logs.go:276] 0 containers: []
	W0318 13:51:17.049809 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:17.049817 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:17.049885 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:17.092026 1157708 cri.go:89] found id: ""
	I0318 13:51:17.092055 1157708 logs.go:276] 0 containers: []
	W0318 13:51:17.092066 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:17.092074 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:17.092144 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:17.137722 1157708 cri.go:89] found id: ""
	I0318 13:51:17.137756 1157708 logs.go:276] 0 containers: []
	W0318 13:51:17.137769 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:17.137778 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:17.137875 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:17.180778 1157708 cri.go:89] found id: ""
	I0318 13:51:17.180808 1157708 logs.go:276] 0 containers: []
	W0318 13:51:17.180816 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:17.180822 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:17.180885 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:17.227629 1157708 cri.go:89] found id: ""
	I0318 13:51:17.227664 1157708 logs.go:276] 0 containers: []
	W0318 13:51:17.227675 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:17.227688 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:17.227706 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:17.272559 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:17.272588 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:17.333953 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:17.333994 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:17.349765 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:17.349793 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:17.434436 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:17.434465 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:17.434483 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
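
	Each failed pass ends with the same log sweep. The commands below are the ones shown in the cycle above, runnable directly on the node (assuming a shell on it, for example via minikube ssh):

	  sudo journalctl -u kubelet -n 400                                        # recent kubelet log
	  sudo journalctl -u crio -n 400                                           # recent CRI-O log
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400  # kernel warnings and errors
	  sudo crictl ps -a                                                        # all containers, any state
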
	I0318 13:51:14.650201 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:17.151069 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:15.208570 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:17.210168 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:19.707753 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:18.808252 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:21.309389 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:20.014314 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:20.031106 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:20.031172 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:20.067727 1157708 cri.go:89] found id: ""
	I0318 13:51:20.067753 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.067765 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:20.067773 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:20.067844 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:20.108455 1157708 cri.go:89] found id: ""
	I0318 13:51:20.108482 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.108491 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:20.108497 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:20.108563 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:20.152257 1157708 cri.go:89] found id: ""
	I0318 13:51:20.152285 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.152310 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:20.152317 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:20.152394 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:20.191480 1157708 cri.go:89] found id: ""
	I0318 13:51:20.191509 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.191520 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:20.191529 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:20.191599 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:20.235677 1157708 cri.go:89] found id: ""
	I0318 13:51:20.235705 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.235716 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:20.235723 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:20.235796 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:20.274794 1157708 cri.go:89] found id: ""
	I0318 13:51:20.274822 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.274833 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:20.274842 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:20.274907 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:20.321987 1157708 cri.go:89] found id: ""
	I0318 13:51:20.322019 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.322031 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:20.322040 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:20.322097 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:20.361292 1157708 cri.go:89] found id: ""
	I0318 13:51:20.361319 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.361328 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:20.361338 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:20.361360 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:20.434481 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:20.434509 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:20.434527 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:20.518203 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:20.518244 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:20.560241 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:20.560271 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:20.615489 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:20.615526 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:19.151244 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:21.151320 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:23.651849 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:21.708423 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:24.207976 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:23.310491 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:25.808443 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:23.132509 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:23.146447 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:23.146559 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:23.189576 1157708 cri.go:89] found id: ""
	I0318 13:51:23.189613 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.189625 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:23.189634 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:23.189688 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:23.229700 1157708 cri.go:89] found id: ""
	I0318 13:51:23.229731 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.229740 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:23.229747 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:23.229812 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:23.272713 1157708 cri.go:89] found id: ""
	I0318 13:51:23.272747 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.272759 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:23.272768 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:23.272834 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:23.313988 1157708 cri.go:89] found id: ""
	I0318 13:51:23.314014 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.314022 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:23.314028 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:23.314087 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:23.360195 1157708 cri.go:89] found id: ""
	I0318 13:51:23.360230 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.360243 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:23.360251 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:23.360321 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:23.400657 1157708 cri.go:89] found id: ""
	I0318 13:51:23.400685 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.400694 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:23.400707 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:23.400760 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:23.442841 1157708 cri.go:89] found id: ""
	I0318 13:51:23.442873 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.442893 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:23.442900 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:23.442970 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:23.483467 1157708 cri.go:89] found id: ""
	I0318 13:51:23.483504 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.483516 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:23.483528 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:23.483545 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:23.538581 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:23.538616 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:23.555392 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:23.555421 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:23.634919 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:23.634945 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:23.634970 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:23.718098 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:23.718144 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:26.270369 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:26.287165 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:26.287232 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:26.331773 1157708 cri.go:89] found id: ""
	I0318 13:51:26.331807 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.331832 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:26.331850 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:26.331923 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:26.372067 1157708 cri.go:89] found id: ""
	I0318 13:51:26.372095 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.372102 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:26.372109 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:26.372182 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:26.411883 1157708 cri.go:89] found id: ""
	I0318 13:51:26.411910 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.411919 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:26.411924 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:26.411980 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:26.449087 1157708 cri.go:89] found id: ""
	I0318 13:51:26.449122 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.449131 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:26.449137 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:26.449188 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:26.492126 1157708 cri.go:89] found id: ""
	I0318 13:51:26.492162 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.492174 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:26.492182 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:26.492251 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:26.529621 1157708 cri.go:89] found id: ""
	I0318 13:51:26.529656 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.529668 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:26.529677 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:26.529764 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:26.568853 1157708 cri.go:89] found id: ""
	I0318 13:51:26.568888 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.568899 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:26.568907 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:26.568979 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:26.607882 1157708 cri.go:89] found id: ""
	I0318 13:51:26.607917 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.607929 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:26.607942 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:26.607959 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:26.648736 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:26.648768 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:26.704641 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:26.704684 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:26.720681 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:26.720715 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:26.799577 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:26.799608 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:26.799627 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:26.152083 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:28.651445 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:26.208160 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:28.708468 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:28.309859 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:30.806690 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:29.389391 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:29.404122 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:29.404195 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:29.446761 1157708 cri.go:89] found id: ""
	I0318 13:51:29.446787 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.446796 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:29.446803 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:29.446857 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:29.483974 1157708 cri.go:89] found id: ""
	I0318 13:51:29.484007 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.484020 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:29.484028 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:29.484099 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:29.521894 1157708 cri.go:89] found id: ""
	I0318 13:51:29.521922 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.521931 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:29.521937 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:29.521993 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:29.562918 1157708 cri.go:89] found id: ""
	I0318 13:51:29.562948 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.562957 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:29.562963 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:29.563017 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:29.600372 1157708 cri.go:89] found id: ""
	I0318 13:51:29.600412 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.600424 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:29.600432 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:29.600500 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:29.638902 1157708 cri.go:89] found id: ""
	I0318 13:51:29.638933 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.638945 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:29.638953 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:29.639019 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:29.679041 1157708 cri.go:89] found id: ""
	I0318 13:51:29.679071 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.679079 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:29.679085 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:29.679142 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:29.719168 1157708 cri.go:89] found id: ""
	I0318 13:51:29.719201 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.719213 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:29.719224 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:29.719244 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:29.764050 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:29.764077 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:29.822136 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:29.822174 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:29.839485 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:29.839515 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:29.914984 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:29.915006 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:29.915023 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:32.497388 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:32.512151 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:32.512215 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:32.549566 1157708 cri.go:89] found id: ""
	I0318 13:51:32.549602 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.549614 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:32.549623 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:32.549693 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:32.588516 1157708 cri.go:89] found id: ""
	I0318 13:51:32.588546 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.588555 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:32.588562 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:32.588615 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:32.628425 1157708 cri.go:89] found id: ""
	I0318 13:51:32.628453 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.628462 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:32.628470 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:32.628546 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:32.670851 1157708 cri.go:89] found id: ""
	I0318 13:51:32.670874 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.670888 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:32.670895 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:32.670944 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:32.709614 1157708 cri.go:89] found id: ""
	I0318 13:51:32.709642 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.709656 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:32.709666 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:32.709738 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:32.749774 1157708 cri.go:89] found id: ""
	I0318 13:51:32.749808 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.749819 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:32.749828 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:32.749896 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:32.789502 1157708 cri.go:89] found id: ""
	I0318 13:51:32.789525 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.789534 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:32.789540 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:32.789589 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:32.834926 1157708 cri.go:89] found id: ""
	I0318 13:51:32.834948 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.834956 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:32.834965 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:32.834980 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:32.887365 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:32.887404 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:32.903584 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:32.903610 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:32.978924 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:32.978958 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:32.978988 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:31.151276 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:33.651395 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:30.709136 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:32.709549 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:32.808076 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:35.308827 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:33.055386 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:33.055424 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:35.603881 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:35.618083 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:35.618167 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:35.659760 1157708 cri.go:89] found id: ""
	I0318 13:51:35.659802 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.659814 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:35.659820 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:35.659881 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:35.703521 1157708 cri.go:89] found id: ""
	I0318 13:51:35.703570 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.703582 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:35.703589 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:35.703651 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:35.744411 1157708 cri.go:89] found id: ""
	I0318 13:51:35.744444 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.744455 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:35.744463 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:35.744548 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:35.783704 1157708 cri.go:89] found id: ""
	I0318 13:51:35.783735 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.783746 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:35.783754 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:35.783819 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:35.824000 1157708 cri.go:89] found id: ""
	I0318 13:51:35.824031 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.824042 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:35.824049 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:35.824117 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:35.860260 1157708 cri.go:89] found id: ""
	I0318 13:51:35.860289 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.860299 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:35.860308 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:35.860388 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:35.895154 1157708 cri.go:89] found id: ""
	I0318 13:51:35.895189 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.895201 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:35.895209 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:35.895276 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:35.936916 1157708 cri.go:89] found id: ""
	I0318 13:51:35.936942 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.936951 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:35.936961 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:35.936977 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:35.951715 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:35.951745 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:36.027431 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:36.027457 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:36.027474 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:36.113339 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:36.113386 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:36.160132 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:36.160170 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:36.151331 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:38.650891 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:35.208500 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:37.209692 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:39.709776 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:37.807423 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:39.809226 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:38.711710 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:38.726104 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:38.726162 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:38.763251 1157708 cri.go:89] found id: ""
	I0318 13:51:38.763281 1157708 logs.go:276] 0 containers: []
	W0318 13:51:38.763291 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:38.763300 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:38.763364 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:38.802521 1157708 cri.go:89] found id: ""
	I0318 13:51:38.802548 1157708 logs.go:276] 0 containers: []
	W0318 13:51:38.802556 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:38.802562 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:38.802616 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:38.843778 1157708 cri.go:89] found id: ""
	I0318 13:51:38.843817 1157708 logs.go:276] 0 containers: []
	W0318 13:51:38.843831 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:38.843839 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:38.843909 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:38.884966 1157708 cri.go:89] found id: ""
	I0318 13:51:38.885003 1157708 logs.go:276] 0 containers: []
	W0318 13:51:38.885015 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:38.885024 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:38.885090 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:38.925653 1157708 cri.go:89] found id: ""
	I0318 13:51:38.925681 1157708 logs.go:276] 0 containers: []
	W0318 13:51:38.925690 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:38.925696 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:38.925757 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:38.964126 1157708 cri.go:89] found id: ""
	I0318 13:51:38.964156 1157708 logs.go:276] 0 containers: []
	W0318 13:51:38.964169 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:38.964177 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:38.964228 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:39.004864 1157708 cri.go:89] found id: ""
	I0318 13:51:39.004898 1157708 logs.go:276] 0 containers: []
	W0318 13:51:39.004910 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:39.004919 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:39.004991 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:39.041555 1157708 cri.go:89] found id: ""
	I0318 13:51:39.041588 1157708 logs.go:276] 0 containers: []
	W0318 13:51:39.041600 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:39.041611 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:39.041626 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:39.092984 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:39.093019 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:39.110492 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:39.110526 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:39.186785 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:39.186848 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:39.186872 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:39.272847 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:39.272891 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:41.829404 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:41.843407 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:41.843479 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:41.883129 1157708 cri.go:89] found id: ""
	I0318 13:51:41.883164 1157708 logs.go:276] 0 containers: []
	W0318 13:51:41.883175 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:41.883184 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:41.883246 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:41.924083 1157708 cri.go:89] found id: ""
	I0318 13:51:41.924123 1157708 logs.go:276] 0 containers: []
	W0318 13:51:41.924136 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:41.924144 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:41.924209 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:41.963029 1157708 cri.go:89] found id: ""
	I0318 13:51:41.963058 1157708 logs.go:276] 0 containers: []
	W0318 13:51:41.963069 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:41.963084 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:41.963155 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:42.003393 1157708 cri.go:89] found id: ""
	I0318 13:51:42.003430 1157708 logs.go:276] 0 containers: []
	W0318 13:51:42.003442 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:42.003450 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:42.003511 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:42.041938 1157708 cri.go:89] found id: ""
	I0318 13:51:42.041968 1157708 logs.go:276] 0 containers: []
	W0318 13:51:42.041977 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:42.041983 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:42.042044 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:42.079685 1157708 cri.go:89] found id: ""
	I0318 13:51:42.079718 1157708 logs.go:276] 0 containers: []
	W0318 13:51:42.079731 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:42.079740 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:42.079805 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:42.118112 1157708 cri.go:89] found id: ""
	I0318 13:51:42.118144 1157708 logs.go:276] 0 containers: []
	W0318 13:51:42.118156 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:42.118164 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:42.118230 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:42.157287 1157708 cri.go:89] found id: ""
	I0318 13:51:42.157319 1157708 logs.go:276] 0 containers: []
	W0318 13:51:42.157331 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:42.157343 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:42.157360 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:42.213006 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:42.213038 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:42.228452 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:42.228481 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:42.302523 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:42.302545 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:42.302558 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:42.387994 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:42.388062 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:40.651272 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:43.151009 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:42.208825 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:44.211676 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:42.310765 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:44.313778 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:44.934501 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:44.949163 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:44.949245 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:44.991885 1157708 cri.go:89] found id: ""
	I0318 13:51:44.991914 1157708 logs.go:276] 0 containers: []
	W0318 13:51:44.991924 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:44.991931 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:44.992008 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:45.029868 1157708 cri.go:89] found id: ""
	I0318 13:51:45.029904 1157708 logs.go:276] 0 containers: []
	W0318 13:51:45.029915 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:45.029922 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:45.030017 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:45.067755 1157708 cri.go:89] found id: ""
	I0318 13:51:45.067785 1157708 logs.go:276] 0 containers: []
	W0318 13:51:45.067794 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:45.067803 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:45.067857 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:45.106296 1157708 cri.go:89] found id: ""
	I0318 13:51:45.106323 1157708 logs.go:276] 0 containers: []
	W0318 13:51:45.106333 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:45.106339 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:45.106405 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:45.145746 1157708 cri.go:89] found id: ""
	I0318 13:51:45.145784 1157708 logs.go:276] 0 containers: []
	W0318 13:51:45.145797 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:45.145805 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:45.145868 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:45.191960 1157708 cri.go:89] found id: ""
	I0318 13:51:45.191998 1157708 logs.go:276] 0 containers: []
	W0318 13:51:45.192010 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:45.192019 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:45.192089 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:45.231436 1157708 cri.go:89] found id: ""
	I0318 13:51:45.231470 1157708 logs.go:276] 0 containers: []
	W0318 13:51:45.231483 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:45.231491 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:45.231559 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:45.274521 1157708 cri.go:89] found id: ""
	I0318 13:51:45.274554 1157708 logs.go:276] 0 containers: []
	W0318 13:51:45.274565 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:45.274577 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:45.274595 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:45.338539 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:45.338580 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:45.353917 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:45.353947 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:45.447734 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:45.447755 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:45.447768 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:45.530098 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:45.530140 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:45.653161 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:48.150841 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:46.708808 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:49.209076 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:46.808315 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:49.311406 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:48.077992 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:48.092203 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:48.092273 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:48.133136 1157708 cri.go:89] found id: ""
	I0318 13:51:48.133172 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.133183 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:48.133191 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:48.133259 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:48.177727 1157708 cri.go:89] found id: ""
	I0318 13:51:48.177756 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.177768 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:48.177775 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:48.177843 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:48.217574 1157708 cri.go:89] found id: ""
	I0318 13:51:48.217600 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.217608 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:48.217614 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:48.217676 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:48.258900 1157708 cri.go:89] found id: ""
	I0318 13:51:48.258933 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.258947 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:48.258955 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:48.259046 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:48.299527 1157708 cri.go:89] found id: ""
	I0318 13:51:48.299562 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.299573 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:48.299581 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:48.299650 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:48.339692 1157708 cri.go:89] found id: ""
	I0318 13:51:48.339723 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.339732 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:48.339740 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:48.339791 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:48.378737 1157708 cri.go:89] found id: ""
	I0318 13:51:48.378764 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.378773 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:48.378779 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:48.378841 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:48.414593 1157708 cri.go:89] found id: ""
	I0318 13:51:48.414621 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.414629 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:48.414639 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:48.414654 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:48.430232 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:48.430264 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:48.513313 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:48.513335 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:48.513353 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:48.594681 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:48.594721 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:48.638681 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:48.638720 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:51.189510 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:51.204296 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:51.204383 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:51.248285 1157708 cri.go:89] found id: ""
	I0318 13:51:51.248311 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.248331 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:51.248340 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:51.248414 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:51.289022 1157708 cri.go:89] found id: ""
	I0318 13:51:51.289055 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.289068 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:51.289077 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:51.289144 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:51.329367 1157708 cri.go:89] found id: ""
	I0318 13:51:51.329405 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.329414 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:51.329420 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:51.329477 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:51.370909 1157708 cri.go:89] found id: ""
	I0318 13:51:51.370948 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.370960 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:51.370970 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:51.371043 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:51.419447 1157708 cri.go:89] found id: ""
	I0318 13:51:51.419486 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.419498 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:51.419506 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:51.419573 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:51.466302 1157708 cri.go:89] found id: ""
	I0318 13:51:51.466336 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.466348 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:51.466356 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:51.466441 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:51.505593 1157708 cri.go:89] found id: ""
	I0318 13:51:51.505631 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.505644 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:51.505652 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:51.505724 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:51.543815 1157708 cri.go:89] found id: ""
	I0318 13:51:51.543843 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.543852 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:51.543863 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:51.543885 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:51.596271 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:51.596305 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:51.612441 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:51.612477 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:51.690591 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:51.690614 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:51.690631 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:51.771781 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:51.771821 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:50.650088 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:52.650307 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:51.710583 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:54.208629 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:51.808743 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:54.309915 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:54.319626 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:54.334041 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:54.334113 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:54.372090 1157708 cri.go:89] found id: ""
	I0318 13:51:54.372120 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.372132 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:54.372139 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:54.372196 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:54.412513 1157708 cri.go:89] found id: ""
	I0318 13:51:54.412567 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.412580 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:54.412588 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:54.412662 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:54.453143 1157708 cri.go:89] found id: ""
	I0318 13:51:54.453176 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.453188 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:54.453196 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:54.453262 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:54.497908 1157708 cri.go:89] found id: ""
	I0318 13:51:54.497940 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.497949 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:54.497957 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:54.498025 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:54.539044 1157708 cri.go:89] found id: ""
	I0318 13:51:54.539072 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.539081 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:54.539086 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:54.539151 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:54.578916 1157708 cri.go:89] found id: ""
	I0318 13:51:54.578944 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.578951 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:54.578958 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:54.579027 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:54.617339 1157708 cri.go:89] found id: ""
	I0318 13:51:54.617366 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.617375 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:54.617380 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:54.617436 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:54.661288 1157708 cri.go:89] found id: ""
	I0318 13:51:54.661309 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.661318 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:54.661328 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:54.661344 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:54.740710 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:54.740751 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:54.789136 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:54.789176 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:54.844585 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:54.844627 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:54.860304 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:54.860351 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:54.945305 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:57.445800 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:57.459294 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:57.459368 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:57.497411 1157708 cri.go:89] found id: ""
	I0318 13:51:57.497441 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.497449 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:57.497456 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:57.497521 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:57.535629 1157708 cri.go:89] found id: ""
	I0318 13:51:57.535663 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.535675 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:57.535684 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:57.535749 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:57.572980 1157708 cri.go:89] found id: ""
	I0318 13:51:57.573008 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.573017 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:57.573023 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:57.573071 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:57.622949 1157708 cri.go:89] found id: ""
	I0318 13:51:57.622984 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.622997 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:57.623005 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:57.623070 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:57.659877 1157708 cri.go:89] found id: ""
	I0318 13:51:57.659910 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.659921 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:57.659928 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:57.659991 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:57.705399 1157708 cri.go:89] found id: ""
	I0318 13:51:57.705481 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.705495 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:57.705504 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:57.705566 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:57.748035 1157708 cri.go:89] found id: ""
	I0318 13:51:57.748062 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.748073 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:57.748084 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:57.748144 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:57.801942 1157708 cri.go:89] found id: ""
	I0318 13:51:57.801976 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.801987 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:57.801999 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:57.802017 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:57.900157 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:57.900204 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:57.946179 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:57.946219 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:54.651363 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:57.151268 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:56.208925 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:58.708089 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:56.807605 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:58.808479 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:01.307740 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:58.000369 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:58.000412 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:58.016179 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:58.016211 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:58.101766 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:00.602151 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:00.617466 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:00.617531 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:00.661294 1157708 cri.go:89] found id: ""
	I0318 13:52:00.661328 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.661336 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:00.661342 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:00.661400 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:00.706227 1157708 cri.go:89] found id: ""
	I0318 13:52:00.706257 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.706267 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:00.706275 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:00.706342 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:00.746482 1157708 cri.go:89] found id: ""
	I0318 13:52:00.746515 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.746528 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:00.746536 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:00.746600 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:00.789242 1157708 cri.go:89] found id: ""
	I0318 13:52:00.789272 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.789281 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:00.789287 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:00.789348 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:00.832463 1157708 cri.go:89] found id: ""
	I0318 13:52:00.832503 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.832514 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:00.832522 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:00.832581 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:00.869790 1157708 cri.go:89] found id: ""
	I0318 13:52:00.869819 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.869830 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:00.869839 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:00.869904 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:00.909656 1157708 cri.go:89] found id: ""
	I0318 13:52:00.909685 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.909693 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:00.909700 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:00.909754 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:00.953818 1157708 cri.go:89] found id: ""
	I0318 13:52:00.953856 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.953868 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:00.953882 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:00.953898 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:01.032822 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:01.032848 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:01.032865 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:01.111701 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:01.111747 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:01.168270 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:01.168300 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:01.220376 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:01.220408 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:59.650359 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:01.650627 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:03.651830 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:00.709561 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:03.207829 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:03.808915 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:06.307915 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:03.737354 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:03.756282 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:03.756382 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:03.804716 1157708 cri.go:89] found id: ""
	I0318 13:52:03.804757 1157708 logs.go:276] 0 containers: []
	W0318 13:52:03.804768 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:03.804777 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:03.804838 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:03.864559 1157708 cri.go:89] found id: ""
	I0318 13:52:03.864596 1157708 logs.go:276] 0 containers: []
	W0318 13:52:03.864609 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:03.864617 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:03.864687 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:03.918397 1157708 cri.go:89] found id: ""
	I0318 13:52:03.918425 1157708 logs.go:276] 0 containers: []
	W0318 13:52:03.918433 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:03.918439 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:03.918504 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:03.961729 1157708 cri.go:89] found id: ""
	I0318 13:52:03.961762 1157708 logs.go:276] 0 containers: []
	W0318 13:52:03.961773 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:03.961780 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:03.961856 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:04.006261 1157708 cri.go:89] found id: ""
	I0318 13:52:04.006299 1157708 logs.go:276] 0 containers: []
	W0318 13:52:04.006311 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:04.006319 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:04.006404 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:04.050284 1157708 cri.go:89] found id: ""
	I0318 13:52:04.050313 1157708 logs.go:276] 0 containers: []
	W0318 13:52:04.050321 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:04.050327 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:04.050384 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:04.093789 1157708 cri.go:89] found id: ""
	I0318 13:52:04.093827 1157708 logs.go:276] 0 containers: []
	W0318 13:52:04.093839 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:04.093847 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:04.093916 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:04.135047 1157708 cri.go:89] found id: ""
	I0318 13:52:04.135091 1157708 logs.go:276] 0 containers: []
	W0318 13:52:04.135110 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:04.135124 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:04.135142 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:04.192899 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:04.192937 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:04.209080 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:04.209130 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:04.286388 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:04.286413 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:04.286428 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:04.371836 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:04.371877 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:06.923039 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:06.938743 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:06.938826 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:06.984600 1157708 cri.go:89] found id: ""
	I0318 13:52:06.984634 1157708 logs.go:276] 0 containers: []
	W0318 13:52:06.984646 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:06.984655 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:06.984721 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:07.023849 1157708 cri.go:89] found id: ""
	I0318 13:52:07.023891 1157708 logs.go:276] 0 containers: []
	W0318 13:52:07.023914 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:07.023922 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:07.023984 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:07.071972 1157708 cri.go:89] found id: ""
	I0318 13:52:07.072002 1157708 logs.go:276] 0 containers: []
	W0318 13:52:07.072015 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:07.072022 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:07.072087 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:07.109070 1157708 cri.go:89] found id: ""
	I0318 13:52:07.109105 1157708 logs.go:276] 0 containers: []
	W0318 13:52:07.109118 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:07.109126 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:07.109183 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:07.149879 1157708 cri.go:89] found id: ""
	I0318 13:52:07.149910 1157708 logs.go:276] 0 containers: []
	W0318 13:52:07.149918 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:07.149925 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:07.149990 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:07.195946 1157708 cri.go:89] found id: ""
	I0318 13:52:07.195976 1157708 logs.go:276] 0 containers: []
	W0318 13:52:07.195987 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:07.195995 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:07.196062 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:07.238126 1157708 cri.go:89] found id: ""
	I0318 13:52:07.238152 1157708 logs.go:276] 0 containers: []
	W0318 13:52:07.238162 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:07.238168 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:07.238233 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:07.278218 1157708 cri.go:89] found id: ""
	I0318 13:52:07.278255 1157708 logs.go:276] 0 containers: []
	W0318 13:52:07.278268 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:07.278282 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:07.278300 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:07.294926 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:07.294955 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:07.383431 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:07.383455 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:07.383468 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:07.467306 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:07.467348 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:07.515996 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:07.516028 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:06.151546 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:08.162392 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:05.208765 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:07.210243 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:09.708076 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:08.309045 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:10.807773 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:10.071945 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:10.088587 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:10.088654 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:10.130528 1157708 cri.go:89] found id: ""
	I0318 13:52:10.130566 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.130579 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:10.130588 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:10.130663 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:10.173113 1157708 cri.go:89] found id: ""
	I0318 13:52:10.173150 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.173168 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:10.173178 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:10.173243 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:10.218941 1157708 cri.go:89] found id: ""
	I0318 13:52:10.218976 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.218987 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:10.218996 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:10.219068 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:10.262331 1157708 cri.go:89] found id: ""
	I0318 13:52:10.262368 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.262381 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:10.262389 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:10.262460 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:10.303329 1157708 cri.go:89] found id: ""
	I0318 13:52:10.303363 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.303378 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:10.303386 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:10.303457 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:10.344458 1157708 cri.go:89] found id: ""
	I0318 13:52:10.344486 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.344497 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:10.344505 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:10.344567 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:10.386753 1157708 cri.go:89] found id: ""
	I0318 13:52:10.386786 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.386797 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:10.386806 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:10.386876 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:10.425922 1157708 cri.go:89] found id: ""
	I0318 13:52:10.425954 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.425965 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:10.425978 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:10.426000 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:10.441134 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:10.441168 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:10.514865 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:10.514899 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:10.514916 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:10.592061 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:10.592105 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:10.642900 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:10.642935 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:10.651432 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:13.150537 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:12.208498 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:14.209684 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:12.808250 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:15.308639 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:13.199176 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:13.215155 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:13.215232 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:13.256107 1157708 cri.go:89] found id: ""
	I0318 13:52:13.256139 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.256151 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:13.256160 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:13.256231 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:13.296562 1157708 cri.go:89] found id: ""
	I0318 13:52:13.296597 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.296608 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:13.296615 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:13.296667 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:13.336633 1157708 cri.go:89] found id: ""
	I0318 13:52:13.336662 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.336672 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:13.336678 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:13.336737 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:13.382597 1157708 cri.go:89] found id: ""
	I0318 13:52:13.382639 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.382654 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:13.382663 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:13.382733 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:13.430257 1157708 cri.go:89] found id: ""
	I0318 13:52:13.430292 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.430304 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:13.430312 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:13.430373 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:13.466854 1157708 cri.go:89] found id: ""
	I0318 13:52:13.466881 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.466889 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:13.466896 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:13.466945 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:13.510297 1157708 cri.go:89] found id: ""
	I0318 13:52:13.510333 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.510344 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:13.510352 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:13.510420 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:13.551476 1157708 cri.go:89] found id: ""
	I0318 13:52:13.551508 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.551517 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:13.551528 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:13.551542 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:13.634561 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:13.634585 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:13.634598 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:13.720088 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:13.720129 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:13.760621 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:13.760659 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:13.817311 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:13.817350 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:16.334094 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:16.349779 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:16.349866 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:16.394131 1157708 cri.go:89] found id: ""
	I0318 13:52:16.394157 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.394167 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:16.394175 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:16.394239 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:16.438185 1157708 cri.go:89] found id: ""
	I0318 13:52:16.438232 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.438245 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:16.438264 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:16.438335 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:16.476872 1157708 cri.go:89] found id: ""
	I0318 13:52:16.476920 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.476932 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:16.476939 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:16.477007 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:16.518226 1157708 cri.go:89] found id: ""
	I0318 13:52:16.518253 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.518262 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:16.518269 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:16.518327 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:16.559119 1157708 cri.go:89] found id: ""
	I0318 13:52:16.559160 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.559174 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:16.559182 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:16.559260 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:16.600050 1157708 cri.go:89] found id: ""
	I0318 13:52:16.600079 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.600088 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:16.600094 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:16.600160 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:16.640621 1157708 cri.go:89] found id: ""
	I0318 13:52:16.640649 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.640660 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:16.640668 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:16.640733 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:16.680541 1157708 cri.go:89] found id: ""
	I0318 13:52:16.680571 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.680580 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:16.680590 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:16.680602 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:16.766378 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:16.766415 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:16.811846 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:16.811883 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:16.871940 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:16.871981 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:16.887494 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:16.887521 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:16.961924 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:15.650599 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:17.650902 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:16.710336 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:19.207426 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:17.807338 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:19.809418 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:19.462316 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:19.478819 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:19.478885 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:19.523280 1157708 cri.go:89] found id: ""
	I0318 13:52:19.523314 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.523334 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:19.523342 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:19.523417 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:19.560675 1157708 cri.go:89] found id: ""
	I0318 13:52:19.560708 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.560717 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:19.560725 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:19.560790 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:19.598739 1157708 cri.go:89] found id: ""
	I0318 13:52:19.598766 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.598773 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:19.598781 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:19.598846 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:19.639928 1157708 cri.go:89] found id: ""
	I0318 13:52:19.639960 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.639969 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:19.639975 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:19.640030 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:19.686084 1157708 cri.go:89] found id: ""
	I0318 13:52:19.686134 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.686153 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:19.686160 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:19.686231 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:19.725449 1157708 cri.go:89] found id: ""
	I0318 13:52:19.725481 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.725491 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:19.725497 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:19.725559 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:19.763855 1157708 cri.go:89] found id: ""
	I0318 13:52:19.763886 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.763897 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:19.763905 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:19.763976 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:19.805783 1157708 cri.go:89] found id: ""
	I0318 13:52:19.805813 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.805824 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:19.805836 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:19.805852 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:19.883873 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:19.883914 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:19.926368 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:19.926406 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:19.981137 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:19.981181 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:19.996242 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:19.996269 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:20.077880 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:22.578045 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:22.594170 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:22.594247 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:22.637241 1157708 cri.go:89] found id: ""
	I0318 13:52:22.637276 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.637289 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:22.637298 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:22.637363 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:22.679877 1157708 cri.go:89] found id: ""
	I0318 13:52:22.679904 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.679912 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:22.679918 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:22.679981 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:22.721865 1157708 cri.go:89] found id: ""
	I0318 13:52:22.721890 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.721903 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:22.721912 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:22.721982 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:22.763208 1157708 cri.go:89] found id: ""
	I0318 13:52:22.763242 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.763255 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:22.763264 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:22.763329 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:22.802038 1157708 cri.go:89] found id: ""
	I0318 13:52:22.802071 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.802081 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:22.802089 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:22.802170 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:22.841206 1157708 cri.go:89] found id: ""
	I0318 13:52:22.841242 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.841254 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:22.841263 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:22.841328 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:22.885159 1157708 cri.go:89] found id: ""
	I0318 13:52:22.885197 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.885209 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:22.885218 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:22.885289 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:22.925346 1157708 cri.go:89] found id: ""
	I0318 13:52:22.925373 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.925382 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:22.925391 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:22.925407 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:19.654611 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:22.152365 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:21.208979 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:23.210660 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:22.308290 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:24.310006 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:23.006158 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:23.006193 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:23.053932 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:23.053961 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:23.107728 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:23.107768 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:23.125708 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:23.125740 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:23.202609 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:25.703096 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:25.718617 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:25.718689 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:25.756504 1157708 cri.go:89] found id: ""
	I0318 13:52:25.756530 1157708 logs.go:276] 0 containers: []
	W0318 13:52:25.756538 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:25.756544 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:25.756608 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:25.795103 1157708 cri.go:89] found id: ""
	I0318 13:52:25.795140 1157708 logs.go:276] 0 containers: []
	W0318 13:52:25.795152 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:25.795160 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:25.795240 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:25.839908 1157708 cri.go:89] found id: ""
	I0318 13:52:25.839945 1157708 logs.go:276] 0 containers: []
	W0318 13:52:25.839957 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:25.839971 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:25.840038 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:25.881677 1157708 cri.go:89] found id: ""
	I0318 13:52:25.881711 1157708 logs.go:276] 0 containers: []
	W0318 13:52:25.881723 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:25.881732 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:25.881802 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:25.923356 1157708 cri.go:89] found id: ""
	I0318 13:52:25.923386 1157708 logs.go:276] 0 containers: []
	W0318 13:52:25.923397 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:25.923410 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:25.923469 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:25.961661 1157708 cri.go:89] found id: ""
	I0318 13:52:25.961693 1157708 logs.go:276] 0 containers: []
	W0318 13:52:25.961705 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:25.961713 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:25.961785 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:26.003198 1157708 cri.go:89] found id: ""
	I0318 13:52:26.003236 1157708 logs.go:276] 0 containers: []
	W0318 13:52:26.003248 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:26.003256 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:26.003319 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:26.041436 1157708 cri.go:89] found id: ""
	I0318 13:52:26.041471 1157708 logs.go:276] 0 containers: []
	W0318 13:52:26.041483 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:26.041496 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:26.041515 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:26.056679 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:26.056716 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:26.143900 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:26.143926 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:26.143946 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:26.226929 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:26.226964 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:26.288519 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:26.288560 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:24.652661 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:27.152317 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:25.708488 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:27.708931 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:26.807624 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:28.809030 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:31.308980 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:28.846205 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:28.861117 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:28.861190 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:28.906990 1157708 cri.go:89] found id: ""
	I0318 13:52:28.907022 1157708 logs.go:276] 0 containers: []
	W0318 13:52:28.907030 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:28.907036 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:28.907099 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:28.946271 1157708 cri.go:89] found id: ""
	I0318 13:52:28.946309 1157708 logs.go:276] 0 containers: []
	W0318 13:52:28.946322 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:28.946332 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:28.946403 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:28.990158 1157708 cri.go:89] found id: ""
	I0318 13:52:28.990185 1157708 logs.go:276] 0 containers: []
	W0318 13:52:28.990193 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:28.990199 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:28.990251 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:29.035089 1157708 cri.go:89] found id: ""
	I0318 13:52:29.035123 1157708 logs.go:276] 0 containers: []
	W0318 13:52:29.035134 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:29.035143 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:29.035209 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:29.076991 1157708 cri.go:89] found id: ""
	I0318 13:52:29.077022 1157708 logs.go:276] 0 containers: []
	W0318 13:52:29.077033 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:29.077041 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:29.077104 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:29.117106 1157708 cri.go:89] found id: ""
	I0318 13:52:29.117134 1157708 logs.go:276] 0 containers: []
	W0318 13:52:29.117150 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:29.117157 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:29.117209 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:29.159675 1157708 cri.go:89] found id: ""
	I0318 13:52:29.159704 1157708 logs.go:276] 0 containers: []
	W0318 13:52:29.159714 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:29.159722 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:29.159787 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:29.202130 1157708 cri.go:89] found id: ""
	I0318 13:52:29.202157 1157708 logs.go:276] 0 containers: []
	W0318 13:52:29.202166 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:29.202176 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:29.202189 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:29.258343 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:29.258390 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:29.275314 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:29.275360 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:29.359842 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:29.359989 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:29.360036 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:29.446021 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:29.446072 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:31.990431 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:32.007443 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:32.007508 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:32.051028 1157708 cri.go:89] found id: ""
	I0318 13:52:32.051061 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.051070 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:32.051076 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:32.051144 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:32.092914 1157708 cri.go:89] found id: ""
	I0318 13:52:32.092950 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.092962 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:32.092972 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:32.093045 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:32.154257 1157708 cri.go:89] found id: ""
	I0318 13:52:32.154291 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.154302 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:32.154309 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:32.154375 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:32.200185 1157708 cri.go:89] found id: ""
	I0318 13:52:32.200224 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.200236 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:32.200244 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:32.200309 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:32.248927 1157708 cri.go:89] found id: ""
	I0318 13:52:32.248961 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.248974 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:32.248982 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:32.249051 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:32.289829 1157708 cri.go:89] found id: ""
	I0318 13:52:32.289861 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.289870 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:32.289876 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:32.289934 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:32.334346 1157708 cri.go:89] found id: ""
	I0318 13:52:32.334379 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.334387 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:32.334393 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:32.334457 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:32.378718 1157708 cri.go:89] found id: ""
	I0318 13:52:32.378761 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.378770 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:32.378780 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:32.378795 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:32.434626 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:32.434667 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:32.451366 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:32.451402 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:32.532868 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:32.532907 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:32.532924 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:32.617556 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:32.617597 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:29.650409 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:31.651019 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:30.207993 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:32.214101 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:34.710602 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:33.807499 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:35.807738 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:35.165067 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:35.181325 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:35.181404 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:35.220570 1157708 cri.go:89] found id: ""
	I0318 13:52:35.220601 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.220612 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:35.220619 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:35.220684 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:35.263798 1157708 cri.go:89] found id: ""
	I0318 13:52:35.263830 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.263841 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:35.263848 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:35.263915 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:35.309447 1157708 cri.go:89] found id: ""
	I0318 13:52:35.309477 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.309489 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:35.309497 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:35.309567 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:35.353444 1157708 cri.go:89] found id: ""
	I0318 13:52:35.353472 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.353484 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:35.353493 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:35.353556 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:35.394563 1157708 cri.go:89] found id: ""
	I0318 13:52:35.394591 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.394599 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:35.394604 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:35.394662 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:35.433866 1157708 cri.go:89] found id: ""
	I0318 13:52:35.433899 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.433908 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:35.433915 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:35.433970 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:35.482769 1157708 cri.go:89] found id: ""
	I0318 13:52:35.482808 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.482820 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:35.482829 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:35.482899 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:35.521465 1157708 cri.go:89] found id: ""
	I0318 13:52:35.521498 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.521509 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:35.521520 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:35.521534 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:35.577759 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:35.577799 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:35.593052 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:35.593084 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:35.672751 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:35.672773 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:35.672787 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:35.752118 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:35.752171 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:34.157429 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:36.650725 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:38.652096 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:37.209435 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:39.710020 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:38.312679 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:40.807379 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:38.296677 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:38.312261 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:38.312365 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:38.350328 1157708 cri.go:89] found id: ""
	I0318 13:52:38.350362 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.350374 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:38.350382 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:38.350457 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:38.389891 1157708 cri.go:89] found id: ""
	I0318 13:52:38.389927 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.389939 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:38.389947 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:38.390005 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:38.430268 1157708 cri.go:89] found id: ""
	I0318 13:52:38.430296 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.430305 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:38.430311 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:38.430365 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:38.470830 1157708 cri.go:89] found id: ""
	I0318 13:52:38.470859 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.470873 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:38.470880 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:38.470945 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:38.510501 1157708 cri.go:89] found id: ""
	I0318 13:52:38.510538 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.510552 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:38.510560 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:38.510618 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:38.594899 1157708 cri.go:89] found id: ""
	I0318 13:52:38.594926 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.594935 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:38.594942 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:38.595021 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:38.649095 1157708 cri.go:89] found id: ""
	I0318 13:52:38.649121 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.649129 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:38.649136 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:38.649192 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:38.695263 1157708 cri.go:89] found id: ""
	I0318 13:52:38.695295 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.695307 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:38.695320 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:38.695336 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:38.780624 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:38.780666 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:38.825294 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:38.825335 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:38.877548 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:38.877596 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:38.893289 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:38.893319 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:38.971752 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:41.472865 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:41.487371 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:41.487484 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:41.524691 1157708 cri.go:89] found id: ""
	I0318 13:52:41.524724 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.524737 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:41.524746 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:41.524812 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:41.564094 1157708 cri.go:89] found id: ""
	I0318 13:52:41.564125 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.564137 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:41.564145 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:41.564210 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:41.600019 1157708 cri.go:89] found id: ""
	I0318 13:52:41.600047 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.600058 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:41.600064 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:41.600142 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:41.638320 1157708 cri.go:89] found id: ""
	I0318 13:52:41.638350 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.638363 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:41.638372 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:41.638438 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:41.680763 1157708 cri.go:89] found id: ""
	I0318 13:52:41.680798 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.680810 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:41.680818 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:41.680894 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:41.720645 1157708 cri.go:89] found id: ""
	I0318 13:52:41.720674 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.720683 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:41.720690 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:41.720741 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:41.759121 1157708 cri.go:89] found id: ""
	I0318 13:52:41.759151 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.759185 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:41.759195 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:41.759264 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:41.797006 1157708 cri.go:89] found id: ""
	I0318 13:52:41.797034 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.797043 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:41.797053 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:41.797070 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:41.853315 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:41.853353 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:41.869920 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:41.869952 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:41.947187 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:41.947219 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:41.947235 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:42.025475 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:42.025515 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:41.151466 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:43.153616 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:42.207999 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:44.709760 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:43.310812 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:45.808394 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:44.574724 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:44.598990 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:44.599068 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:44.649051 1157708 cri.go:89] found id: ""
	I0318 13:52:44.649137 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.649168 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:44.649180 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:44.649254 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:44.686423 1157708 cri.go:89] found id: ""
	I0318 13:52:44.686459 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.686468 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:44.686473 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:44.686536 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:44.726534 1157708 cri.go:89] found id: ""
	I0318 13:52:44.726564 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.726575 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:44.726583 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:44.726653 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:44.771190 1157708 cri.go:89] found id: ""
	I0318 13:52:44.771220 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.771232 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:44.771240 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:44.771311 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:44.811577 1157708 cri.go:89] found id: ""
	I0318 13:52:44.811602 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.811611 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:44.811618 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:44.811677 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:44.850717 1157708 cri.go:89] found id: ""
	I0318 13:52:44.850744 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.850756 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:44.850765 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:44.850824 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:44.890294 1157708 cri.go:89] found id: ""
	I0318 13:52:44.890321 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.890330 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:44.890344 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:44.890401 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:44.930690 1157708 cri.go:89] found id: ""
	I0318 13:52:44.930720 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.930730 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:44.930741 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:44.930757 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:44.946509 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:44.946544 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:45.029748 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:45.029777 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:45.029795 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:45.111348 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:45.111392 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:45.165156 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:45.165193 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
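The block above (pid 1157708) is one full pass of minikube's diagnostic loop while the old-k8s-version control plane is down: pgrep finds no kube-apiserver process, each "crictl ps -a --quiet --name=<component>" probe returns no container, "kubectl describe nodes" fails because nothing is listening on localhost:8443, and the collector falls back to dumping dmesg, describe-nodes, CRI-O, container-status and kubelet output. The same pass repeats, with the gather steps in varying order, for the remainder of this section. A minimal sketch of the same probes, run by hand over SSH on the node, using only commands that appear verbatim in the log (the for-loop is just a shorthand for the per-component probes shown above):

    # Check for a running kube-apiserver process, then ask the CRI runtime for each control-plane container.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
      sudo crictl ps -a --quiet --name="$name"
    done
    # Fallback log collection, mirroring the gather steps in the log.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo crictl ps -a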
	I0318 13:52:47.720701 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:47.734457 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:47.734520 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:47.771273 1157708 cri.go:89] found id: ""
	I0318 13:52:47.771304 1157708 logs.go:276] 0 containers: []
	W0318 13:52:47.771313 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:47.771319 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:47.771370 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:47.813779 1157708 cri.go:89] found id: ""
	I0318 13:52:47.813806 1157708 logs.go:276] 0 containers: []
	W0318 13:52:47.813816 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:47.813824 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:47.813892 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:47.855547 1157708 cri.go:89] found id: ""
	I0318 13:52:47.855576 1157708 logs.go:276] 0 containers: []
	W0318 13:52:47.855584 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:47.855590 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:47.855640 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:47.892651 1157708 cri.go:89] found id: ""
	I0318 13:52:47.892684 1157708 logs.go:276] 0 containers: []
	W0318 13:52:47.892692 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:47.892697 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:47.892752 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:47.935457 1157708 cri.go:89] found id: ""
	I0318 13:52:47.935488 1157708 logs.go:276] 0 containers: []
	W0318 13:52:47.935498 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:47.935505 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:47.935567 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:47.969335 1157708 cri.go:89] found id: ""
	I0318 13:52:47.969361 1157708 logs.go:276] 0 containers: []
	W0318 13:52:47.969370 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:47.969377 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:47.969441 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:45.651171 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:48.151833 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:47.209014 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:49.710231 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:48.310467 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:50.807495 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:48.007305 1157708 cri.go:89] found id: ""
	I0318 13:52:48.007339 1157708 logs.go:276] 0 containers: []
	W0318 13:52:48.007349 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:48.007355 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:48.007416 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:48.050230 1157708 cri.go:89] found id: ""
	I0318 13:52:48.050264 1157708 logs.go:276] 0 containers: []
	W0318 13:52:48.050276 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:48.050289 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:48.050304 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:48.106946 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:48.106993 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:48.123805 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:48.123837 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:48.201881 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:48.201907 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:48.201920 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:48.281533 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:48.281577 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:50.829561 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:50.847462 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:50.847555 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:50.889731 1157708 cri.go:89] found id: ""
	I0318 13:52:50.889759 1157708 logs.go:276] 0 containers: []
	W0318 13:52:50.889768 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:50.889774 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:50.889831 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:50.928176 1157708 cri.go:89] found id: ""
	I0318 13:52:50.928210 1157708 logs.go:276] 0 containers: []
	W0318 13:52:50.928222 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:50.928231 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:50.928294 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:50.965737 1157708 cri.go:89] found id: ""
	I0318 13:52:50.965772 1157708 logs.go:276] 0 containers: []
	W0318 13:52:50.965786 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:50.965794 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:50.965866 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:51.008038 1157708 cri.go:89] found id: ""
	I0318 13:52:51.008072 1157708 logs.go:276] 0 containers: []
	W0318 13:52:51.008081 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:51.008087 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:51.008159 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:51.050310 1157708 cri.go:89] found id: ""
	I0318 13:52:51.050340 1157708 logs.go:276] 0 containers: []
	W0318 13:52:51.050355 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:51.050363 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:51.050431 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:51.090514 1157708 cri.go:89] found id: ""
	I0318 13:52:51.090541 1157708 logs.go:276] 0 containers: []
	W0318 13:52:51.090550 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:51.090556 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:51.090608 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:51.131278 1157708 cri.go:89] found id: ""
	I0318 13:52:51.131305 1157708 logs.go:276] 0 containers: []
	W0318 13:52:51.131313 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:51.131320 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:51.131381 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:51.173370 1157708 cri.go:89] found id: ""
	I0318 13:52:51.173400 1157708 logs.go:276] 0 containers: []
	W0318 13:52:51.173411 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:51.173437 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:51.173464 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:51.260155 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:51.260204 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:51.309963 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:51.309998 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:51.367838 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:51.367889 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:51.382542 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:51.382570 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:51.459258 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:50.650524 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:52.651804 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:52.208655 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:54.209701 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:52.808292 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:55.309417 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:53.960212 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:53.978939 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:53.979004 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:54.030003 1157708 cri.go:89] found id: ""
	I0318 13:52:54.030038 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.030052 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:54.030060 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:54.030134 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:54.073487 1157708 cri.go:89] found id: ""
	I0318 13:52:54.073523 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.073535 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:54.073543 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:54.073611 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:54.115982 1157708 cri.go:89] found id: ""
	I0318 13:52:54.116010 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.116022 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:54.116029 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:54.116099 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:54.158320 1157708 cri.go:89] found id: ""
	I0318 13:52:54.158348 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.158359 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:54.158366 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:54.158433 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:54.198911 1157708 cri.go:89] found id: ""
	I0318 13:52:54.198939 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.198948 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:54.198955 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:54.199010 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:54.240628 1157708 cri.go:89] found id: ""
	I0318 13:52:54.240659 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.240671 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:54.240679 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:54.240750 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:54.279377 1157708 cri.go:89] found id: ""
	I0318 13:52:54.279409 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.279418 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:54.279424 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:54.279493 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:54.324160 1157708 cri.go:89] found id: ""
	I0318 13:52:54.324192 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.324205 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:54.324218 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:54.324237 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:54.371487 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:54.371527 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:54.423487 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:54.423526 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:54.438773 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:54.438800 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:54.518788 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:54.518810 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:54.518825 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:57.103590 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:57.118866 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:57.118932 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:57.159354 1157708 cri.go:89] found id: ""
	I0318 13:52:57.159383 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.159393 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:57.159399 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:57.159458 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:57.201114 1157708 cri.go:89] found id: ""
	I0318 13:52:57.201148 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.201159 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:57.201167 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:57.201233 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:57.242172 1157708 cri.go:89] found id: ""
	I0318 13:52:57.242207 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.242217 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:57.242224 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:57.242287 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:57.282578 1157708 cri.go:89] found id: ""
	I0318 13:52:57.282617 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.282629 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:57.282637 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:57.282706 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:57.323682 1157708 cri.go:89] found id: ""
	I0318 13:52:57.323707 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.323715 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:57.323721 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:57.323771 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:57.364946 1157708 cri.go:89] found id: ""
	I0318 13:52:57.364980 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.364991 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:57.365003 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:57.365076 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:57.407466 1157708 cri.go:89] found id: ""
	I0318 13:52:57.407495 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.407505 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:57.407511 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:57.407568 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:57.454663 1157708 cri.go:89] found id: ""
	I0318 13:52:57.454692 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.454701 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:57.454710 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:57.454722 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:57.509591 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:57.509633 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:57.525125 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:57.525155 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:57.602819 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:57.602845 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:57.602863 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:57.689001 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:57.689045 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:55.150589 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:57.152149 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:56.708493 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:59.208099 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:57.311780 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:59.312048 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:00.234252 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:00.249526 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:00.249615 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:00.290131 1157708 cri.go:89] found id: ""
	I0318 13:53:00.290160 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.290171 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:00.290178 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:00.290230 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:00.337794 1157708 cri.go:89] found id: ""
	I0318 13:53:00.337828 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.337840 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:00.337848 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:00.337907 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:00.378188 1157708 cri.go:89] found id: ""
	I0318 13:53:00.378224 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.378236 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:00.378244 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:00.378313 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:00.418940 1157708 cri.go:89] found id: ""
	I0318 13:53:00.418972 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.418981 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:00.418987 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:00.419039 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:00.461471 1157708 cri.go:89] found id: ""
	I0318 13:53:00.461502 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.461511 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:00.461518 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:00.461572 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:00.498781 1157708 cri.go:89] found id: ""
	I0318 13:53:00.498812 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.498821 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:00.498827 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:00.498885 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:00.540359 1157708 cri.go:89] found id: ""
	I0318 13:53:00.540395 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.540407 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:00.540414 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:00.540480 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:00.583597 1157708 cri.go:89] found id: ""
	I0318 13:53:00.583628 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.583636 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:00.583648 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:00.583666 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:00.639498 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:00.639534 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:00.655764 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:00.655792 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:00.742351 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:00.742386 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:00.742400 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:00.825250 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:00.825298 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:59.651495 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:01.651843 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:01.709438 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:04.208439 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:01.810519 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:04.308525 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:03.373938 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:03.389723 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:03.389796 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:03.429675 1157708 cri.go:89] found id: ""
	I0318 13:53:03.429710 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.429723 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:03.429732 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:03.429803 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:03.468732 1157708 cri.go:89] found id: ""
	I0318 13:53:03.468768 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.468780 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:03.468788 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:03.468841 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:03.510562 1157708 cri.go:89] found id: ""
	I0318 13:53:03.510589 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.510598 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:03.510604 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:03.510667 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:03.549842 1157708 cri.go:89] found id: ""
	I0318 13:53:03.549896 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.549909 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:03.549918 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:03.549984 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:03.590036 1157708 cri.go:89] found id: ""
	I0318 13:53:03.590076 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.590086 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:03.590093 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:03.590146 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:03.635546 1157708 cri.go:89] found id: ""
	I0318 13:53:03.635573 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.635585 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:03.635593 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:03.635660 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:03.678634 1157708 cri.go:89] found id: ""
	I0318 13:53:03.678663 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.678671 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:03.678677 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:03.678735 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:03.719666 1157708 cri.go:89] found id: ""
	I0318 13:53:03.719698 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.719709 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:03.719721 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:03.719736 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:03.762353 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:03.762388 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:03.817484 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:03.817521 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:03.832820 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:03.832850 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:03.913094 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:03.913115 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:03.913130 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:06.502556 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:06.517682 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:06.517745 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:06.562167 1157708 cri.go:89] found id: ""
	I0318 13:53:06.562202 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.562215 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:06.562223 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:06.562294 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:06.601910 1157708 cri.go:89] found id: ""
	I0318 13:53:06.601945 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.601954 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:06.601962 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:06.602022 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:06.640652 1157708 cri.go:89] found id: ""
	I0318 13:53:06.640683 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.640694 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:06.640702 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:06.640778 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:06.686781 1157708 cri.go:89] found id: ""
	I0318 13:53:06.686809 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.686818 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:06.686824 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:06.686893 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:06.727080 1157708 cri.go:89] found id: ""
	I0318 13:53:06.727107 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.727115 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:06.727121 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:06.727173 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:06.764550 1157708 cri.go:89] found id: ""
	I0318 13:53:06.764575 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.764583 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:06.764589 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:06.764641 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:06.803978 1157708 cri.go:89] found id: ""
	I0318 13:53:06.804009 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.804019 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:06.804027 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:06.804091 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:06.843983 1157708 cri.go:89] found id: ""
	I0318 13:53:06.844016 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.844027 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:06.844040 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:06.844058 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:06.905389 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:06.905424 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:06.956888 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:06.956924 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:06.973551 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:06.973594 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:07.045945 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:07.045973 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:07.045991 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:04.150852 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:06.151454 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:08.656073 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:06.211223 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:08.707939 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:06.808218 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:09.309991 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:11.310190 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:09.635227 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:09.650166 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:09.650246 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:09.695126 1157708 cri.go:89] found id: ""
	I0318 13:53:09.695153 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.695162 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:09.695168 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:09.695221 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:09.740475 1157708 cri.go:89] found id: ""
	I0318 13:53:09.740507 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.740516 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:09.740522 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:09.740591 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:09.779078 1157708 cri.go:89] found id: ""
	I0318 13:53:09.779108 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.779119 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:09.779128 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:09.779186 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:09.821252 1157708 cri.go:89] found id: ""
	I0318 13:53:09.821285 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.821297 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:09.821306 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:09.821376 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:09.860500 1157708 cri.go:89] found id: ""
	I0318 13:53:09.860537 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.860550 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:09.860558 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:09.860622 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:09.903447 1157708 cri.go:89] found id: ""
	I0318 13:53:09.903475 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.903486 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:09.903494 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:09.903550 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:09.941620 1157708 cri.go:89] found id: ""
	I0318 13:53:09.941648 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.941661 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:09.941679 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:09.941731 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:09.980066 1157708 cri.go:89] found id: ""
	I0318 13:53:09.980101 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.980113 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:09.980125 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:09.980142 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:10.036960 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:10.037000 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:10.051329 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:10.051361 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:10.130896 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:10.130925 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:10.130942 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:10.212205 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:10.212236 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:12.754623 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:12.769956 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:12.770034 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:12.809006 1157708 cri.go:89] found id: ""
	I0318 13:53:12.809032 1157708 logs.go:276] 0 containers: []
	W0318 13:53:12.809043 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:12.809051 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:12.809113 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:12.852354 1157708 cri.go:89] found id: ""
	I0318 13:53:12.852390 1157708 logs.go:276] 0 containers: []
	W0318 13:53:12.852400 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:12.852407 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:12.852476 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:12.891891 1157708 cri.go:89] found id: ""
	I0318 13:53:12.891923 1157708 logs.go:276] 0 containers: []
	W0318 13:53:12.891933 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:12.891940 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:12.891991 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:12.931753 1157708 cri.go:89] found id: ""
	I0318 13:53:12.931785 1157708 logs.go:276] 0 containers: []
	W0318 13:53:12.931795 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:12.931803 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:12.931872 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:12.971622 1157708 cri.go:89] found id: ""
	I0318 13:53:12.971653 1157708 logs.go:276] 0 containers: []
	W0318 13:53:12.971662 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:12.971669 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:12.971731 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:11.151234 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:13.157081 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:10.708177 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:13.209203 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:13.315183 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:15.808738 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:13.009893 1157708 cri.go:89] found id: ""
	I0318 13:53:13.009930 1157708 logs.go:276] 0 containers: []
	W0318 13:53:13.009943 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:13.009952 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:13.010021 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:13.045361 1157708 cri.go:89] found id: ""
	I0318 13:53:13.045396 1157708 logs.go:276] 0 containers: []
	W0318 13:53:13.045404 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:13.045411 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:13.045474 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:13.087659 1157708 cri.go:89] found id: ""
	I0318 13:53:13.087686 1157708 logs.go:276] 0 containers: []
	W0318 13:53:13.087696 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:13.087706 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:13.087721 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:13.129979 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:13.130014 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:13.183802 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:13.183836 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:13.198808 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:13.198840 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:13.272736 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:13.272764 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:13.272783 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:15.870196 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:15.887480 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:15.887551 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:15.923871 1157708 cri.go:89] found id: ""
	I0318 13:53:15.923899 1157708 logs.go:276] 0 containers: []
	W0318 13:53:15.923907 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:15.923913 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:15.923976 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:15.963870 1157708 cri.go:89] found id: ""
	I0318 13:53:15.963906 1157708 logs.go:276] 0 containers: []
	W0318 13:53:15.963917 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:15.963925 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:15.963997 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:16.009781 1157708 cri.go:89] found id: ""
	I0318 13:53:16.009815 1157708 logs.go:276] 0 containers: []
	W0318 13:53:16.009828 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:16.009837 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:16.009905 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:16.047673 1157708 cri.go:89] found id: ""
	I0318 13:53:16.047708 1157708 logs.go:276] 0 containers: []
	W0318 13:53:16.047718 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:16.047727 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:16.047793 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:16.089419 1157708 cri.go:89] found id: ""
	I0318 13:53:16.089447 1157708 logs.go:276] 0 containers: []
	W0318 13:53:16.089455 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:16.089461 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:16.089511 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:16.133563 1157708 cri.go:89] found id: ""
	I0318 13:53:16.133594 1157708 logs.go:276] 0 containers: []
	W0318 13:53:16.133604 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:16.133611 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:16.133685 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:16.174369 1157708 cri.go:89] found id: ""
	I0318 13:53:16.174404 1157708 logs.go:276] 0 containers: []
	W0318 13:53:16.174415 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:16.174423 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:16.174491 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:16.219334 1157708 cri.go:89] found id: ""
	I0318 13:53:16.219360 1157708 logs.go:276] 0 containers: []
	W0318 13:53:16.219367 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:16.219376 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:16.219389 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:16.273468 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:16.273507 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:16.288584 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:16.288612 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:16.366575 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:16.366602 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:16.366620 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:16.451031 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:16.451071 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:15.650907 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:18.151434 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:15.708015 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:17.710036 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:18.311437 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:20.807854 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:18.997536 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:19.014995 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:19.015065 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:19.064686 1157708 cri.go:89] found id: ""
	I0318 13:53:19.064719 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.064731 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:19.064739 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:19.064793 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:19.110598 1157708 cri.go:89] found id: ""
	I0318 13:53:19.110629 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.110640 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:19.110648 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:19.110739 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:19.156628 1157708 cri.go:89] found id: ""
	I0318 13:53:19.156652 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.156660 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:19.156668 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:19.156730 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:19.205993 1157708 cri.go:89] found id: ""
	I0318 13:53:19.206029 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.206042 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:19.206049 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:19.206118 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:19.253902 1157708 cri.go:89] found id: ""
	I0318 13:53:19.253935 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.253952 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:19.253960 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:19.254036 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:19.296550 1157708 cri.go:89] found id: ""
	I0318 13:53:19.296583 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.296594 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:19.296602 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:19.296667 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:19.337316 1157708 cri.go:89] found id: ""
	I0318 13:53:19.337349 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.337360 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:19.337369 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:19.337446 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:19.381503 1157708 cri.go:89] found id: ""
	I0318 13:53:19.381546 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.381565 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:19.381579 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:19.381603 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:19.461665 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:19.461691 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:19.461707 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:19.548291 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:19.548348 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:19.591296 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:19.591335 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:19.648740 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:19.648776 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:22.164970 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:22.180740 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:22.180806 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:22.223787 1157708 cri.go:89] found id: ""
	I0318 13:53:22.223820 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.223833 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:22.223840 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:22.223908 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:22.266751 1157708 cri.go:89] found id: ""
	I0318 13:53:22.266785 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.266797 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:22.266805 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:22.266876 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:22.311669 1157708 cri.go:89] found id: ""
	I0318 13:53:22.311701 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.311712 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:22.311721 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:22.311816 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:22.354687 1157708 cri.go:89] found id: ""
	I0318 13:53:22.354722 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.354733 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:22.354742 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:22.354807 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:22.395741 1157708 cri.go:89] found id: ""
	I0318 13:53:22.395767 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.395776 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:22.395782 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:22.395832 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:22.434506 1157708 cri.go:89] found id: ""
	I0318 13:53:22.434539 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.434550 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:22.434559 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:22.434612 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:22.474583 1157708 cri.go:89] found id: ""
	I0318 13:53:22.474612 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.474621 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:22.474627 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:22.474690 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:22.521898 1157708 cri.go:89] found id: ""
	I0318 13:53:22.521943 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.521955 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:22.521968 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:22.521989 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:22.537679 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:22.537711 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:22.619575 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:22.619605 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:22.619621 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:22.704206 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:22.704265 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:22.753470 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:22.753502 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:20.650340 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:22.653036 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:20.213398 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:22.709150 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:22.808837 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:25.308831 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:25.311578 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:25.329917 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:25.329979 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:25.373784 1157708 cri.go:89] found id: ""
	I0318 13:53:25.373818 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.373826 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:25.373833 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:25.373901 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:25.422490 1157708 cri.go:89] found id: ""
	I0318 13:53:25.422516 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.422526 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:25.422532 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:25.422597 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:25.459523 1157708 cri.go:89] found id: ""
	I0318 13:53:25.459552 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.459560 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:25.459567 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:25.459627 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:25.495647 1157708 cri.go:89] found id: ""
	I0318 13:53:25.495683 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.495695 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:25.495702 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:25.495772 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:25.534582 1157708 cri.go:89] found id: ""
	I0318 13:53:25.534617 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.534626 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:25.534632 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:25.534704 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:25.577526 1157708 cri.go:89] found id: ""
	I0318 13:53:25.577558 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.577566 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:25.577573 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:25.577687 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:25.616403 1157708 cri.go:89] found id: ""
	I0318 13:53:25.616433 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.616445 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:25.616453 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:25.616527 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:25.660444 1157708 cri.go:89] found id: ""
	I0318 13:53:25.660474 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.660482 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:25.660492 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:25.660506 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:25.715595 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:25.715641 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:25.730358 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:25.730390 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:25.803153 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:25.803239 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:25.803261 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:25.885339 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:25.885388 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:25.150276 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:27.151389 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:25.214042 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:27.710185 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:27.807095 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:29.807177 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:28.433506 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:28.449402 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:28.449481 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:28.490972 1157708 cri.go:89] found id: ""
	I0318 13:53:28.491007 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.491019 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:28.491028 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:28.491094 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:28.531406 1157708 cri.go:89] found id: ""
	I0318 13:53:28.531439 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.531451 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:28.531460 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:28.531513 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:28.570299 1157708 cri.go:89] found id: ""
	I0318 13:53:28.570334 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.570345 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:28.570352 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:28.570408 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:28.607950 1157708 cri.go:89] found id: ""
	I0318 13:53:28.607979 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.607987 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:28.607994 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:28.608066 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:28.648710 1157708 cri.go:89] found id: ""
	I0318 13:53:28.648744 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.648755 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:28.648762 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:28.648830 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:28.691071 1157708 cri.go:89] found id: ""
	I0318 13:53:28.691102 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.691114 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:28.691122 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:28.691183 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:28.734399 1157708 cri.go:89] found id: ""
	I0318 13:53:28.734438 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.734452 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:28.734461 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:28.734548 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:28.774859 1157708 cri.go:89] found id: ""
	I0318 13:53:28.774891 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.774902 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:28.774912 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:28.774927 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:28.831420 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:28.831459 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:28.847970 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:28.848008 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:28.926007 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:28.926034 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:28.926051 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:29.007525 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:29.007577 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:31.555401 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:31.570964 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:31.571046 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:31.611400 1157708 cri.go:89] found id: ""
	I0318 13:53:31.611427 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.611438 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:31.611445 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:31.611510 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:31.654572 1157708 cri.go:89] found id: ""
	I0318 13:53:31.654602 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.654614 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:31.654622 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:31.654725 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:31.692649 1157708 cri.go:89] found id: ""
	I0318 13:53:31.692673 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.692681 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:31.692686 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:31.692748 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:31.732208 1157708 cri.go:89] found id: ""
	I0318 13:53:31.732233 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.732244 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:31.732253 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:31.732320 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:31.774132 1157708 cri.go:89] found id: ""
	I0318 13:53:31.774163 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.774172 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:31.774178 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:31.774234 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:31.813558 1157708 cri.go:89] found id: ""
	I0318 13:53:31.813582 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.813590 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:31.813597 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:31.813651 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:31.862024 1157708 cri.go:89] found id: ""
	I0318 13:53:31.862057 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.862070 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:31.862077 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:31.862146 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:31.903941 1157708 cri.go:89] found id: ""
	I0318 13:53:31.903972 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.903982 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:31.903992 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:31.904006 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:31.957327 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:31.957366 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:31.973337 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:31.973380 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:32.053702 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:32.053730 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:32.053744 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:32.134859 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:32.134911 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:29.649648 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:31.651426 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:33.651936 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:30.208512 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:32.709020 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:31.808276 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:33.811370 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:36.314374 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:34.683335 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:34.700383 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:34.700490 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:34.744387 1157708 cri.go:89] found id: ""
	I0318 13:53:34.744420 1157708 logs.go:276] 0 containers: []
	W0318 13:53:34.744432 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:34.744441 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:34.744509 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:34.788122 1157708 cri.go:89] found id: ""
	I0318 13:53:34.788150 1157708 logs.go:276] 0 containers: []
	W0318 13:53:34.788160 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:34.788166 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:34.788221 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:34.834760 1157708 cri.go:89] found id: ""
	I0318 13:53:34.834795 1157708 logs.go:276] 0 containers: []
	W0318 13:53:34.834808 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:34.834817 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:34.834894 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:34.882028 1157708 cri.go:89] found id: ""
	I0318 13:53:34.882062 1157708 logs.go:276] 0 containers: []
	W0318 13:53:34.882073 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:34.882081 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:34.882150 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:34.933339 1157708 cri.go:89] found id: ""
	I0318 13:53:34.933364 1157708 logs.go:276] 0 containers: []
	W0318 13:53:34.933374 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:34.933384 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:34.933451 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:34.972362 1157708 cri.go:89] found id: ""
	I0318 13:53:34.972395 1157708 logs.go:276] 0 containers: []
	W0318 13:53:34.972407 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:34.972416 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:34.972486 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:35.008949 1157708 cri.go:89] found id: ""
	I0318 13:53:35.008986 1157708 logs.go:276] 0 containers: []
	W0318 13:53:35.008999 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:35.009007 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:35.009080 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:35.054698 1157708 cri.go:89] found id: ""
	I0318 13:53:35.054733 1157708 logs.go:276] 0 containers: []
	W0318 13:53:35.054742 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:35.054756 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:35.054770 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:35.109391 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:35.109450 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:35.126785 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:35.126818 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:35.214303 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:35.214329 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:35.214342 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:35.298705 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:35.298750 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:37.843701 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:37.859330 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:37.859415 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:37.903428 1157708 cri.go:89] found id: ""
	I0318 13:53:37.903466 1157708 logs.go:276] 0 containers: []
	W0318 13:53:37.903479 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:37.903497 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:37.903560 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:37.943687 1157708 cri.go:89] found id: ""
	I0318 13:53:37.943716 1157708 logs.go:276] 0 containers: []
	W0318 13:53:37.943727 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:37.943735 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:37.943804 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:37.986201 1157708 cri.go:89] found id: ""
	I0318 13:53:37.986233 1157708 logs.go:276] 0 containers: []
	W0318 13:53:37.986244 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:37.986252 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:37.986322 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:36.151976 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:38.152281 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:35.209205 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:37.709122 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:38.806794 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:40.807552 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:38.026776 1157708 cri.go:89] found id: ""
	I0318 13:53:38.026813 1157708 logs.go:276] 0 containers: []
	W0318 13:53:38.026825 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:38.026832 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:38.026907 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:38.073057 1157708 cri.go:89] found id: ""
	I0318 13:53:38.073088 1157708 logs.go:276] 0 containers: []
	W0318 13:53:38.073098 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:38.073105 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:38.073172 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:38.110576 1157708 cri.go:89] found id: ""
	I0318 13:53:38.110611 1157708 logs.go:276] 0 containers: []
	W0318 13:53:38.110624 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:38.110632 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:38.110702 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:38.154293 1157708 cri.go:89] found id: ""
	I0318 13:53:38.154319 1157708 logs.go:276] 0 containers: []
	W0318 13:53:38.154327 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:38.154338 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:38.154414 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:38.195407 1157708 cri.go:89] found id: ""
	I0318 13:53:38.195434 1157708 logs.go:276] 0 containers: []
	W0318 13:53:38.195444 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:38.195454 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:38.195469 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:38.254159 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:38.254210 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:38.269143 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:38.269175 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:38.349819 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:38.349845 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:38.349864 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:38.435121 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:38.435164 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:40.982438 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:40.998483 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:40.998559 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:41.037470 1157708 cri.go:89] found id: ""
	I0318 13:53:41.037497 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.037506 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:41.037512 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:41.037583 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:41.078428 1157708 cri.go:89] found id: ""
	I0318 13:53:41.078463 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.078473 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:41.078482 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:41.078548 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:41.121342 1157708 cri.go:89] found id: ""
	I0318 13:53:41.121371 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.121382 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:41.121391 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:41.121482 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:41.164124 1157708 cri.go:89] found id: ""
	I0318 13:53:41.164149 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.164159 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:41.164167 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:41.164229 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:41.210294 1157708 cri.go:89] found id: ""
	I0318 13:53:41.210321 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.210329 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:41.210336 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:41.210407 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:41.253934 1157708 cri.go:89] found id: ""
	I0318 13:53:41.253957 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.253967 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:41.253973 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:41.254039 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:41.298817 1157708 cri.go:89] found id: ""
	I0318 13:53:41.298849 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.298861 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:41.298870 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:41.298936 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:41.344109 1157708 cri.go:89] found id: ""
	I0318 13:53:41.344137 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.344146 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:41.344156 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:41.344170 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:41.401026 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:41.401061 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:41.416197 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:41.416229 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:41.495349 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:41.495375 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:41.495393 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:41.578201 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:41.578253 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:40.651687 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:43.152619 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:40.208445 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:42.208613 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:44.210573 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:42.808665 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:45.309099 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:44.126601 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:44.140971 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:44.141048 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:44.184758 1157708 cri.go:89] found id: ""
	I0318 13:53:44.184786 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.184794 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:44.184801 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:44.184851 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:44.230793 1157708 cri.go:89] found id: ""
	I0318 13:53:44.230824 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.230836 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:44.230842 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:44.230916 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:44.269561 1157708 cri.go:89] found id: ""
	I0318 13:53:44.269594 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.269606 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:44.269614 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:44.269680 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:44.310847 1157708 cri.go:89] found id: ""
	I0318 13:53:44.310878 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.310889 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:44.310898 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:44.310970 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:44.350827 1157708 cri.go:89] found id: ""
	I0318 13:53:44.350860 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.350878 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:44.350887 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:44.350956 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:44.389693 1157708 cri.go:89] found id: ""
	I0318 13:53:44.389721 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.389730 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:44.389735 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:44.389804 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:44.429254 1157708 cri.go:89] found id: ""
	I0318 13:53:44.429280 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.429289 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:44.429303 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:44.429354 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:44.468484 1157708 cri.go:89] found id: ""
	I0318 13:53:44.468513 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.468525 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:44.468538 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:44.468555 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:44.525012 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:44.525058 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:44.541638 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:44.541668 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:44.621779 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:44.621801 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:44.621814 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:44.706797 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:44.706884 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:47.253569 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:47.268808 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:47.268888 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:47.313191 1157708 cri.go:89] found id: ""
	I0318 13:53:47.313220 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.313232 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:47.313240 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:47.313307 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:47.357567 1157708 cri.go:89] found id: ""
	I0318 13:53:47.357600 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.357611 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:47.357619 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:47.357688 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:47.392300 1157708 cri.go:89] found id: ""
	I0318 13:53:47.392341 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.392352 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:47.392366 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:47.392437 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:47.432800 1157708 cri.go:89] found id: ""
	I0318 13:53:47.432830 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.432842 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:47.432857 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:47.432921 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:47.469563 1157708 cri.go:89] found id: ""
	I0318 13:53:47.469591 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.469599 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:47.469605 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:47.469668 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:47.508770 1157708 cri.go:89] found id: ""
	I0318 13:53:47.508799 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.508810 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:47.508820 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:47.508880 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:47.549876 1157708 cri.go:89] found id: ""
	I0318 13:53:47.549909 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.549921 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:47.549930 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:47.549997 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:47.591385 1157708 cri.go:89] found id: ""
	I0318 13:53:47.591413 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.591421 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:47.591431 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:47.591446 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:47.646284 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:47.646313 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:47.662609 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:47.662639 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:47.737371 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:47.737398 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:47.737415 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:47.817311 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:47.817342 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
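Each cycle above is minikube enumerating control-plane containers through the CRI before deciding whether the apiserver can be restarted; every `crictl ps -a --quiet --name=...` call that returns an empty ID list is logged as "No container was found matching ...". A minimal Go sketch of one such probe, assuming crictl is installed on the node and run with sufficient privileges (the container name is taken from the log, everything else is illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	name := "kube-apiserver"
	// --quiet prints only container IDs, one per line; an empty result means
	// no container with that name exists in any state (the case logged above).
	out, err := exec.Command("crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		fmt.Printf("crictl failed: %v\n", err)
		return
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		fmt.Printf("no container was found matching %q\n", name)
		return
	}
	fmt.Printf("found %d container(s) matching %q: %v\n", len(ids), name, ids)
}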
	I0318 13:53:45.652845 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:48.150199 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:46.707734 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:48.709977 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:47.807238 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:50.308767 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:50.363832 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:50.380029 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:50.380109 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:50.427452 1157708 cri.go:89] found id: ""
	I0318 13:53:50.427484 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.427496 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:50.427505 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:50.427579 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:50.466766 1157708 cri.go:89] found id: ""
	I0318 13:53:50.466793 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.466801 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:50.466808 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:50.466894 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:50.506768 1157708 cri.go:89] found id: ""
	I0318 13:53:50.506799 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.506811 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:50.506819 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:50.506882 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:50.545554 1157708 cri.go:89] found id: ""
	I0318 13:53:50.545592 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.545605 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:50.545613 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:50.545685 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:50.583949 1157708 cri.go:89] found id: ""
	I0318 13:53:50.583984 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.583995 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:50.584004 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:50.584083 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:50.624730 1157708 cri.go:89] found id: ""
	I0318 13:53:50.624763 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.624774 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:50.624783 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:50.624853 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:50.664300 1157708 cri.go:89] found id: ""
	I0318 13:53:50.664346 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.664358 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:50.664366 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:50.664420 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:50.702760 1157708 cri.go:89] found id: ""
	I0318 13:53:50.702793 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.702805 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:50.702817 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:50.702833 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:50.757188 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:50.757237 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:50.772151 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:50.772195 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:50.856872 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:50.856898 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:50.856917 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:50.937706 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:50.937749 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:50.654814 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:53.151970 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:50.710233 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:53.209443 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:52.309529 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:54.809399 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:53.481836 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:53.497792 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:53.497856 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:53.535376 1157708 cri.go:89] found id: ""
	I0318 13:53:53.535411 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.535420 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:53.535427 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:53.535486 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:53.575002 1157708 cri.go:89] found id: ""
	I0318 13:53:53.575030 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.575042 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:53.575050 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:53.575119 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:53.615880 1157708 cri.go:89] found id: ""
	I0318 13:53:53.615919 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.615931 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:53.615940 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:53.616007 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:53.681746 1157708 cri.go:89] found id: ""
	I0318 13:53:53.681786 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.681799 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:53.681810 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:53.681887 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:53.725219 1157708 cri.go:89] found id: ""
	I0318 13:53:53.725241 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.725250 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:53.725256 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:53.725317 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:53.766969 1157708 cri.go:89] found id: ""
	I0318 13:53:53.767006 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.767018 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:53.767026 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:53.767091 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:53.802103 1157708 cri.go:89] found id: ""
	I0318 13:53:53.802134 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.802145 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:53.802157 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:53.802210 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:53.843054 1157708 cri.go:89] found id: ""
	I0318 13:53:53.843085 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.843093 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:53.843103 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:53.843117 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:53.899794 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:53.899836 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:53.915559 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:53.915592 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:53.996410 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:53.996438 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:53.996456 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:54.085588 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:54.085628 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:56.632201 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:56.648183 1157708 kubeadm.go:591] duration metric: took 4m3.550073086s to restartPrimaryControlPlane
	W0318 13:53:56.648381 1157708 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 13:53:56.648422 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 13:53:55.152626 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:57.650951 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:55.209511 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:57.709324 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:59.710029 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:59.666187 1157708 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.017736279s)
	I0318 13:53:59.666270 1157708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:53:59.682887 1157708 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:53:59.694626 1157708 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:53:59.706577 1157708 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:53:59.706599 1157708 kubeadm.go:156] found existing configuration files:
	
	I0318 13:53:59.706648 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:53:59.718311 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:53:59.718371 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:53:59.729298 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:53:59.741351 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:53:59.741401 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:53:59.753652 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:53:59.765642 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:53:59.765695 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:53:59.778055 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:53:59.789994 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:53:59.790042 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
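The four grep/rm pairs above show the stale-config check: each kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443, otherwise it is removed before kubeadm init regenerates it. A minimal sketch of that decision, assuming only the Go standard library; the file names and endpoint are copied from the log, and the sketch prints the decision rather than deleting anything (the log itself follows up with `sudo rm -f`):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: this is the case where the log
			// above follows up with `sudo rm -f <file>` before kubeadm init.
			fmt.Printf("%s is stale or absent, would remove it\n", f)
			continue
		}
		fmt.Printf("%s already points at %s, keeping it\n", f, endpoint)
	}
}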
	I0318 13:53:59.801292 1157708 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 13:53:59.879414 1157708 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 13:53:59.879516 1157708 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 13:54:00.046477 1157708 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 13:54:00.046660 1157708 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 13:54:00.046819 1157708 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 13:54:00.257070 1157708 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 13:54:00.259191 1157708 out.go:204]   - Generating certificates and keys ...
	I0318 13:54:00.259333 1157708 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 13:54:00.259434 1157708 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 13:54:00.259549 1157708 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 13:54:00.259658 1157708 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 13:54:00.259782 1157708 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 13:54:00.259857 1157708 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 13:54:00.259949 1157708 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 13:54:00.260033 1157708 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 13:54:00.260136 1157708 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 13:54:00.260244 1157708 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 13:54:00.260299 1157708 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 13:54:00.260394 1157708 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 13:54:00.423400 1157708 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 13:54:00.543983 1157708 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 13:54:00.796108 1157708 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 13:54:00.901121 1157708 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 13:54:00.918891 1157708 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 13:54:00.920502 1157708 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 13:54:00.920642 1157708 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 13:54:01.094176 1157708 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 13:53:57.306878 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:59.308670 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:01.096397 1157708 out.go:204]   - Booting up control plane ...
	I0318 13:54:01.096539 1157708 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 13:54:01.107816 1157708 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 13:54:01.108753 1157708 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 13:54:01.109641 1157708 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 13:54:01.111913 1157708 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 13:54:00.150985 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:02.151139 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:02.208577 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:04.209527 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:04.701940 1157416 pod_ready.go:81] duration metric: took 4m0.000915275s for pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace to be "Ready" ...
	E0318 13:54:04.701995 1157416 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace to be "Ready" (will not retry!)
	I0318 13:54:04.702022 1157416 pod_ready.go:38] duration metric: took 4m12.048388069s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:54:04.702063 1157416 kubeadm.go:591] duration metric: took 4m22.220919415s to restartPrimaryControlPlane
	W0318 13:54:04.702133 1157416 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 13:54:04.702168 1157416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 13:54:01.807445 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:04.308435 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:04.151252 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:06.152296 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:08.162574 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:06.809148 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:08.811335 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:11.306999 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:10.650696 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:12.651741 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:13.308835 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:15.807754 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:15.150875 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:17.653698 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:18.308137 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:20.308720 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:20.152545 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:22.650685 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:22.807655 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:24.807765 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:25.150664 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:27.650092 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:26.808311 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:29.311683 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:31.301320 1157887 pod_ready.go:81] duration metric: took 4m0.001048401s for pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace to be "Ready" ...
	E0318 13:54:31.301351 1157887 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace to be "Ready" (will not retry!)
	I0318 13:54:31.301372 1157887 pod_ready.go:38] duration metric: took 4m12.063560637s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:54:31.301397 1157887 kubeadm.go:591] duration metric: took 4m19.202321881s to restartPrimaryControlPlane
	W0318 13:54:31.301478 1157887 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 13:54:31.301505 1157887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 13:54:29.651334 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:32.152059 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:34.651230 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:37.151130 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:37.018723 1157416 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.31652367s)
	I0318 13:54:37.018822 1157416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:54:37.036348 1157416 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:54:37.047932 1157416 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:54:37.058846 1157416 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:54:37.058875 1157416 kubeadm.go:156] found existing configuration files:
	
	I0318 13:54:37.058920 1157416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:54:37.069333 1157416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:54:37.069396 1157416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:54:37.080053 1157416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:54:37.090110 1157416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:54:37.090170 1157416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:54:37.101032 1157416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:54:37.111052 1157416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:54:37.111124 1157416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:54:37.121867 1157416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:54:37.132057 1157416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:54:37.132104 1157416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:54:37.143057 1157416 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 13:54:37.368813 1157416 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 13:54:41.111826 1157708 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 13:54:41.111977 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:54:41.112236 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
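The kubelet-check lines record kubeadm probing the kubelet's local health endpoint (http://localhost:10248/healthz) and getting connection refused, i.e. the kubelet never came up on this node. A minimal Go sketch of the same probe, assuming it is run on the node itself; the URL is taken from the log and the timeout value is illustrative:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	client := &http.Client{Timeout: 5 * time.Second}
	// Same endpoint kubeadm's kubelet-check curls; connection refused here
	// means the kubelet process is not listening on its healthz port.
	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		fmt.Fprintf(os.Stderr, "kubelet healthz unreachable: %v\n", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("kubelet healthz: %s %s\n", resp.Status, string(body))
}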
	I0318 13:54:39.151250 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:41.652026 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:43.652929 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:46.082340 1157416 kubeadm.go:309] [init] Using Kubernetes version: v1.29.0-rc.2
	I0318 13:54:46.082410 1157416 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 13:54:46.082482 1157416 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 13:54:46.082561 1157416 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 13:54:46.082639 1157416 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 13:54:46.082692 1157416 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 13:54:46.084374 1157416 out.go:204]   - Generating certificates and keys ...
	I0318 13:54:46.084495 1157416 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 13:54:46.084584 1157416 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 13:54:46.084681 1157416 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 13:54:46.084767 1157416 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 13:54:46.084844 1157416 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 13:54:46.084933 1157416 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 13:54:46.085039 1157416 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 13:54:46.085131 1157416 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 13:54:46.085255 1157416 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 13:54:46.085344 1157416 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 13:54:46.085415 1157416 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 13:54:46.085491 1157416 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 13:54:46.085569 1157416 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 13:54:46.085637 1157416 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0318 13:54:46.085704 1157416 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 13:54:46.085791 1157416 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 13:54:46.085894 1157416 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 13:54:46.086010 1157416 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 13:54:46.086104 1157416 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 13:54:46.087481 1157416 out.go:204]   - Booting up control plane ...
	I0318 13:54:46.087576 1157416 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 13:54:46.087642 1157416 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 13:54:46.087698 1157416 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 13:54:46.087782 1157416 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 13:54:46.087865 1157416 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 13:54:46.087917 1157416 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 13:54:46.088051 1157416 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 13:54:46.088146 1157416 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003020 seconds
	I0318 13:54:46.088306 1157416 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 13:54:46.088501 1157416 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 13:54:46.088585 1157416 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 13:54:46.088770 1157416 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-537236 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 13:54:46.088826 1157416 kubeadm.go:309] [bootstrap-token] Using token: fk6yfh.vd0dmh72kd97vm2h
	I0318 13:54:46.091265 1157416 out.go:204]   - Configuring RBAC rules ...
	I0318 13:54:46.091375 1157416 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 13:54:46.091449 1157416 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 13:54:46.091656 1157416 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 13:54:46.091839 1157416 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 13:54:46.092014 1157416 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 13:54:46.092136 1157416 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 13:54:46.092289 1157416 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 13:54:46.092370 1157416 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 13:54:46.092436 1157416 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 13:54:46.092445 1157416 kubeadm.go:309] 
	I0318 13:54:46.092513 1157416 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 13:54:46.092522 1157416 kubeadm.go:309] 
	I0318 13:54:46.092588 1157416 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 13:54:46.092594 1157416 kubeadm.go:309] 
	I0318 13:54:46.092614 1157416 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 13:54:46.092704 1157416 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 13:54:46.092749 1157416 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 13:54:46.092755 1157416 kubeadm.go:309] 
	I0318 13:54:46.092805 1157416 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 13:54:46.092818 1157416 kubeadm.go:309] 
	I0318 13:54:46.092892 1157416 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 13:54:46.092906 1157416 kubeadm.go:309] 
	I0318 13:54:46.092982 1157416 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 13:54:46.093100 1157416 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 13:54:46.093212 1157416 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 13:54:46.093225 1157416 kubeadm.go:309] 
	I0318 13:54:46.093335 1157416 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 13:54:46.093448 1157416 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 13:54:46.093457 1157416 kubeadm.go:309] 
	I0318 13:54:46.093539 1157416 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token fk6yfh.vd0dmh72kd97vm2h \
	I0318 13:54:46.093684 1157416 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a1344f0f3711f58fc1cd7b9626e9cee7b8515c38b8838e5f1a0d7979c7202ddf \
	I0318 13:54:46.093717 1157416 kubeadm.go:309] 	--control-plane 
	I0318 13:54:46.093723 1157416 kubeadm.go:309] 
	I0318 13:54:46.093848 1157416 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 13:54:46.093860 1157416 kubeadm.go:309] 
	I0318 13:54:46.093946 1157416 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token fk6yfh.vd0dmh72kd97vm2h \
	I0318 13:54:46.094071 1157416 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a1344f0f3711f58fc1cd7b9626e9cee7b8515c38b8838e5f1a0d7979c7202ddf 
	I0318 13:54:46.094105 1157416 cni.go:84] Creating CNI manager for ""
	I0318 13:54:46.094119 1157416 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:54:46.095717 1157416 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 13:54:46.112502 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:54:46.112797 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:54:46.152713 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:48.651676 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:46.096953 1157416 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 13:54:46.127007 1157416 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 13:54:46.178588 1157416 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 13:54:46.178768 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:46.178785 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-537236 minikube.k8s.io/updated_at=2024_03_18T13_54_46_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a minikube.k8s.io/name=no-preload-537236 minikube.k8s.io/primary=true
	I0318 13:54:46.231974 1157416 ops.go:34] apiserver oom_adj: -16
	I0318 13:54:46.582048 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:47.082295 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:47.582447 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:48.082146 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:48.583155 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:49.082463 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:49.583104 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:51.153753 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:53.654740 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:50.082163 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:50.582159 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:51.082921 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:51.582616 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:52.082686 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:52.582520 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:53.082920 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:53.582281 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:54.082711 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:54.582110 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:56.112956 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:54:56.113210 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:54:55.082805 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:55.583034 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:56.082777 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:56.582491 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:57.082739 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:57.582854 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:58.082715 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:58.189802 1157416 kubeadm.go:1107] duration metric: took 12.011111335s to wait for elevateKubeSystemPrivileges
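The repeated `kubectl get sa default` invocations above are a poll that waits for the cluster's default service account to exist before kube-system privileges are elevated, which is the 12s duration reported on this line. A minimal sketch of such a poll, assuming kubectl is on PATH and using the kubeconfig path from the log; the interval and deadline are illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Succeeds only once the "default" ServiceAccount has been created
		// in the default namespace, mirroring the repeated calls in the log.
		cmd := exec.Command("kubectl", "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account is present")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for default service account")
	os.Exit(1)
}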
	W0318 13:54:58.189865 1157416 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 13:54:58.189878 1157416 kubeadm.go:393] duration metric: took 5m15.77131157s to StartCluster
	I0318 13:54:58.189991 1157416 settings.go:142] acquiring lock: {Name:mk2d6b94ee5fa5f1dbbb15ba1d5560c3c0f78110 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:54:58.190130 1157416 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 13:54:58.191965 1157416 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/kubeconfig: {Name:mk9c139f2702214315ee08dd7c5d02f739047458 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:54:58.192315 1157416 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 13:54:58.194158 1157416 out.go:177] * Verifying Kubernetes components...
	I0318 13:54:58.192460 1157416 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 13:54:58.192549 1157416 config.go:182] Loaded profile config "no-preload-537236": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 13:54:58.194270 1157416 addons.go:69] Setting storage-provisioner=true in profile "no-preload-537236"
	I0318 13:54:58.195604 1157416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:54:58.195628 1157416 addons.go:234] Setting addon storage-provisioner=true in "no-preload-537236"
	W0318 13:54:58.195646 1157416 addons.go:243] addon storage-provisioner should already be in state true
	I0318 13:54:58.194275 1157416 addons.go:69] Setting default-storageclass=true in profile "no-preload-537236"
	I0318 13:54:58.195741 1157416 host.go:66] Checking if "no-preload-537236" exists ...
	I0318 13:54:58.195748 1157416 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-537236"
	I0318 13:54:58.194278 1157416 addons.go:69] Setting metrics-server=true in profile "no-preload-537236"
	I0318 13:54:58.195816 1157416 addons.go:234] Setting addon metrics-server=true in "no-preload-537236"
	W0318 13:54:58.195835 1157416 addons.go:243] addon metrics-server should already be in state true
	I0318 13:54:58.195864 1157416 host.go:66] Checking if "no-preload-537236" exists ...
	I0318 13:54:58.196133 1157416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:54:58.196177 1157416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:54:58.196187 1157416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:54:58.196224 1157416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:54:58.196236 1157416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:54:58.196256 1157416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:54:58.218212 1157416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36677
	I0318 13:54:58.218703 1157416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34827
	I0318 13:54:58.218934 1157416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35455
	I0318 13:54:58.219717 1157416 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:54:58.219858 1157416 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:54:58.220143 1157416 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:54:58.220417 1157416 main.go:141] libmachine: Using API Version  1
	I0318 13:54:58.220443 1157416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:54:58.220478 1157416 main.go:141] libmachine: Using API Version  1
	I0318 13:54:58.220497 1157416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:54:58.220628 1157416 main.go:141] libmachine: Using API Version  1
	I0318 13:54:58.220650 1157416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:54:58.220882 1157416 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:54:58.220950 1157416 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:54:58.220973 1157416 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:54:58.221491 1157416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:54:58.221527 1157416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:54:58.221736 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetState
	I0318 13:54:58.222116 1157416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:54:58.222138 1157416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:54:58.226247 1157416 addons.go:234] Setting addon default-storageclass=true in "no-preload-537236"
	W0318 13:54:58.226271 1157416 addons.go:243] addon default-storageclass should already be in state true
	I0318 13:54:58.226303 1157416 host.go:66] Checking if "no-preload-537236" exists ...
	I0318 13:54:58.226691 1157416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:54:58.226719 1157416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:54:58.238772 1157416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40275
	I0318 13:54:58.239288 1157416 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:54:58.239925 1157416 main.go:141] libmachine: Using API Version  1
	I0318 13:54:58.239954 1157416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:54:58.240375 1157416 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:54:58.240581 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetState
	I0318 13:54:58.241297 1157416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44327
	I0318 13:54:58.241774 1157416 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:54:58.242300 1157416 main.go:141] libmachine: Using API Version  1
	I0318 13:54:58.242321 1157416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:54:58.242787 1157416 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:54:58.243001 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetState
	I0318 13:54:58.243033 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:54:58.245371 1157416 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 13:54:58.245038 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:54:58.246964 1157416 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 13:54:58.246981 1157416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 13:54:58.246429 1157416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34901
	I0318 13:54:58.247010 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:54:58.248738 1157416 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:54:54.143902 1157263 pod_ready.go:81] duration metric: took 4m0.000627482s for pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace to be "Ready" ...
	E0318 13:54:54.143947 1157263 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace to be "Ready" (will not retry!)
	I0318 13:54:54.143967 1157263 pod_ready.go:38] duration metric: took 4m9.565422592s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:54:54.143994 1157263 kubeadm.go:591] duration metric: took 4m17.754456341s to restartPrimaryControlPlane
	W0318 13:54:54.144061 1157263 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 13:54:54.144092 1157263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 13:54:58.247424 1157416 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:54:58.250418 1157416 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 13:54:58.250441 1157416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 13:54:58.250459 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:54:58.250666 1157416 main.go:141] libmachine: Using API Version  1
	I0318 13:54:58.250683 1157416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:54:58.250733 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:54:58.251012 1157416 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:54:58.251354 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:54:58.251384 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:54:58.251730 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:54:58.252053 1157416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:54:58.252082 1157416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:54:58.252627 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:54:58.252823 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:54:58.252974 1157416 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa Username:docker}
	I0318 13:54:58.253647 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:54:58.254073 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:54:58.254102 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:54:58.254393 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:54:58.254599 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:54:58.254720 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:54:58.254858 1157416 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa Username:docker}
	I0318 13:54:58.275785 1157416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35695
	I0318 13:54:58.276467 1157416 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:54:58.277007 1157416 main.go:141] libmachine: Using API Version  1
	I0318 13:54:58.277037 1157416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:54:58.277396 1157416 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:54:58.277594 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetState
	I0318 13:54:58.279419 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:54:58.279699 1157416 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 13:54:58.279719 1157416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 13:54:58.279740 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:54:58.282813 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:54:58.283168 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:54:58.283198 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:54:58.283319 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:54:58.283505 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:54:58.283643 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:54:58.283826 1157416 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa Username:docker}
	I0318 13:54:58.433881 1157416 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:54:58.466338 1157416 node_ready.go:35] waiting up to 6m0s for node "no-preload-537236" to be "Ready" ...
	I0318 13:54:58.485186 1157416 node_ready.go:49] node "no-preload-537236" has status "Ready":"True"
	I0318 13:54:58.485217 1157416 node_ready.go:38] duration metric: took 18.833477ms for node "no-preload-537236" to be "Ready" ...
	I0318 13:54:58.485230 1157416 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:54:58.527030 1157416 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:54:58.545133 1157416 pod_ready.go:92] pod "etcd-no-preload-537236" in "kube-system" namespace has status "Ready":"True"
	I0318 13:54:58.545175 1157416 pod_ready.go:81] duration metric: took 18.11215ms for pod "etcd-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:54:58.545191 1157416 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:54:58.560108 1157416 pod_ready.go:92] pod "kube-apiserver-no-preload-537236" in "kube-system" namespace has status "Ready":"True"
	I0318 13:54:58.560144 1157416 pod_ready.go:81] duration metric: took 14.943161ms for pod "kube-apiserver-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:54:58.560159 1157416 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:54:58.562894 1157416 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 13:54:58.562924 1157416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 13:54:58.572477 1157416 pod_ready.go:92] pod "kube-controller-manager-no-preload-537236" in "kube-system" namespace has status "Ready":"True"
	I0318 13:54:58.572510 1157416 pod_ready.go:81] duration metric: took 12.342242ms for pod "kube-controller-manager-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:54:58.572523 1157416 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6c4c5" in "kube-system" namespace to be "Ready" ...
	I0318 13:54:58.594618 1157416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 13:54:58.597140 1157416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 13:54:58.644132 1157416 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 13:54:58.644166 1157416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 13:54:58.734467 1157416 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 13:54:58.734499 1157416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 13:54:58.760623 1157416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 13:54:59.005259 1157416 main.go:141] libmachine: Making call to close driver server
	I0318 13:54:59.005305 1157416 main.go:141] libmachine: (no-preload-537236) Calling .Close
	I0318 13:54:59.005668 1157416 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:54:59.005692 1157416 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:54:59.005704 1157416 main.go:141] libmachine: Making call to close driver server
	I0318 13:54:59.005713 1157416 main.go:141] libmachine: (no-preload-537236) Calling .Close
	I0318 13:54:59.005981 1157416 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:54:59.005996 1157416 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:54:59.006028 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Closing plugin on server side
	I0318 13:54:59.020654 1157416 main.go:141] libmachine: Making call to close driver server
	I0318 13:54:59.020682 1157416 main.go:141] libmachine: (no-preload-537236) Calling .Close
	I0318 13:54:59.022812 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Closing plugin on server side
	I0318 13:54:59.022814 1157416 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:54:59.022850 1157416 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:54:59.979647 1157416 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.382455448s)
	I0318 13:54:59.979723 1157416 main.go:141] libmachine: Making call to close driver server
	I0318 13:54:59.979743 1157416 main.go:141] libmachine: (no-preload-537236) Calling .Close
	I0318 13:54:59.980124 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Closing plugin on server side
	I0318 13:54:59.980223 1157416 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:54:59.980258 1157416 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:54:59.980281 1157416 main.go:141] libmachine: Making call to close driver server
	I0318 13:54:59.980354 1157416 main.go:141] libmachine: (no-preload-537236) Calling .Close
	I0318 13:54:59.980675 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Closing plugin on server side
	I0318 13:54:59.980756 1157416 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:54:59.982424 1157416 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:00.270401 1157416 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.509719085s)
	I0318 13:55:00.270464 1157416 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:00.270481 1157416 main.go:141] libmachine: (no-preload-537236) Calling .Close
	I0318 13:55:00.272779 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Closing plugin on server side
	I0318 13:55:00.272794 1157416 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:00.272817 1157416 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:00.272828 1157416 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:00.272837 1157416 main.go:141] libmachine: (no-preload-537236) Calling .Close
	I0318 13:55:00.274705 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Closing plugin on server side
	I0318 13:55:00.274734 1157416 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:00.274759 1157416 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:00.274789 1157416 addons.go:470] Verifying addon metrics-server=true in "no-preload-537236"
	I0318 13:55:00.276931 1157416 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0318 13:55:00.278586 1157416 addons.go:505] duration metric: took 2.086117916s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0318 13:55:00.607578 1157416 pod_ready.go:92] pod "kube-proxy-6c4c5" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:00.607607 1157416 pod_ready.go:81] duration metric: took 2.035076209s for pod "kube-proxy-6c4c5" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:00.607620 1157416 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:00.626505 1157416 pod_ready.go:92] pod "kube-scheduler-no-preload-537236" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:00.626531 1157416 pod_ready.go:81] duration metric: took 18.904572ms for pod "kube-scheduler-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:00.626540 1157416 pod_ready.go:38] duration metric: took 2.141296876s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:55:00.626556 1157416 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:55:00.626612 1157416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:55:00.677379 1157416 api_server.go:72] duration metric: took 2.484994048s to wait for apiserver process to appear ...
	I0318 13:55:00.677406 1157416 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:55:00.677426 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:55:00.694161 1157416 api_server.go:279] https://192.168.39.7:8443/healthz returned 200:
	ok
	I0318 13:55:00.696445 1157416 api_server.go:141] control plane version: v1.29.0-rc.2
	I0318 13:55:00.696479 1157416 api_server.go:131] duration metric: took 19.065082ms to wait for apiserver health ...
	I0318 13:55:00.696492 1157416 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 13:55:00.707383 1157416 system_pods.go:59] 9 kube-system pods found
	I0318 13:55:00.707417 1157416 system_pods.go:61] "coredns-76f75df574-bhh4k" [6d6f9b9a-2f7e-46bc-9224-57dc077e444d] Running
	I0318 13:55:00.707421 1157416 system_pods.go:61] "coredns-76f75df574-grqdt" [f4ce5620-c97b-4ecd-baba-c5fc840b8127] Running
	I0318 13:55:00.707425 1157416 system_pods.go:61] "etcd-no-preload-537236" [ed8a1ea0-0ec7-4604-b9c9-3738a4569e02] Running
	I0318 13:55:00.707429 1157416 system_pods.go:61] "kube-apiserver-no-preload-537236" [5718ec63-58e7-463b-812b-a806e9fbbdd8] Running
	I0318 13:55:00.707432 1157416 system_pods.go:61] "kube-controller-manager-no-preload-537236" [4ff64d2e-9e89-44d6-9e8f-fa1440fc416a] Running
	I0318 13:55:00.707435 1157416 system_pods.go:61] "kube-proxy-6c4c5" [2dd6fcfc-7510-418d-baab-a0ec364391c1] Running
	I0318 13:55:00.707438 1157416 system_pods.go:61] "kube-scheduler-no-preload-537236" [b8c3f8b7-fc27-4647-880a-f82457de3a27] Running
	I0318 13:55:00.707445 1157416 system_pods.go:61] "metrics-server-57f55c9bc5-tkq6h" [14e262de-fd94-4888-96ab-75823109c8c2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:55:00.707450 1157416 system_pods.go:61] "storage-provisioner" [f02049f6-a08f-45ac-b285-cbdbb260ab59] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 13:55:00.707459 1157416 system_pods.go:74] duration metric: took 10.96036ms to wait for pod list to return data ...
	I0318 13:55:00.707467 1157416 default_sa.go:34] waiting for default service account to be created ...
	I0318 13:55:00.870267 1157416 default_sa.go:45] found service account: "default"
	I0318 13:55:00.870299 1157416 default_sa.go:55] duration metric: took 162.825175ms for default service account to be created ...
	I0318 13:55:00.870310 1157416 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 13:55:01.073950 1157416 system_pods.go:86] 9 kube-system pods found
	I0318 13:55:01.073985 1157416 system_pods.go:89] "coredns-76f75df574-bhh4k" [6d6f9b9a-2f7e-46bc-9224-57dc077e444d] Running
	I0318 13:55:01.073992 1157416 system_pods.go:89] "coredns-76f75df574-grqdt" [f4ce5620-c97b-4ecd-baba-c5fc840b8127] Running
	I0318 13:55:01.073998 1157416 system_pods.go:89] "etcd-no-preload-537236" [ed8a1ea0-0ec7-4604-b9c9-3738a4569e02] Running
	I0318 13:55:01.074004 1157416 system_pods.go:89] "kube-apiserver-no-preload-537236" [5718ec63-58e7-463b-812b-a806e9fbbdd8] Running
	I0318 13:55:01.074010 1157416 system_pods.go:89] "kube-controller-manager-no-preload-537236" [4ff64d2e-9e89-44d6-9e8f-fa1440fc416a] Running
	I0318 13:55:01.074017 1157416 system_pods.go:89] "kube-proxy-6c4c5" [2dd6fcfc-7510-418d-baab-a0ec364391c1] Running
	I0318 13:55:01.074035 1157416 system_pods.go:89] "kube-scheduler-no-preload-537236" [b8c3f8b7-fc27-4647-880a-f82457de3a27] Running
	I0318 13:55:01.074055 1157416 system_pods.go:89] "metrics-server-57f55c9bc5-tkq6h" [14e262de-fd94-4888-96ab-75823109c8c2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:55:01.074069 1157416 system_pods.go:89] "storage-provisioner" [f02049f6-a08f-45ac-b285-cbdbb260ab59] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 13:55:01.074085 1157416 system_pods.go:126] duration metric: took 203.766894ms to wait for k8s-apps to be running ...
	I0318 13:55:01.074100 1157416 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 13:55:01.074152 1157416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:55:01.091165 1157416 system_svc.go:56] duration metric: took 17.056217ms WaitForService to wait for kubelet
	I0318 13:55:01.091195 1157416 kubeadm.go:576] duration metric: took 2.898817514s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:55:01.091224 1157416 node_conditions.go:102] verifying NodePressure condition ...
	I0318 13:55:01.270664 1157416 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:55:01.270724 1157416 node_conditions.go:123] node cpu capacity is 2
	I0318 13:55:01.270737 1157416 node_conditions.go:105] duration metric: took 179.506857ms to run NodePressure ...
	I0318 13:55:01.270750 1157416 start.go:240] waiting for startup goroutines ...
	I0318 13:55:01.270758 1157416 start.go:245] waiting for cluster config update ...
	I0318 13:55:01.270769 1157416 start.go:254] writing updated cluster config ...
	I0318 13:55:01.271069 1157416 ssh_runner.go:195] Run: rm -f paused
	I0318 13:55:01.325353 1157416 start.go:600] kubectl: 1.29.3, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0318 13:55:01.327367 1157416 out.go:177] * Done! kubectl is now configured to use "no-preload-537236" cluster and "default" namespace by default
	I0318 13:55:03.715412 1157887 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.413874479s)
	I0318 13:55:03.715519 1157887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:55:03.732767 1157887 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:55:03.743375 1157887 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:55:03.753393 1157887 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:55:03.753414 1157887 kubeadm.go:156] found existing configuration files:
	
	I0318 13:55:03.753457 1157887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0318 13:55:03.763226 1157887 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:55:03.763289 1157887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:55:03.774001 1157887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0318 13:55:03.783943 1157887 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:55:03.783991 1157887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:55:03.794580 1157887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0318 13:55:03.803881 1157887 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:55:03.803921 1157887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:55:03.813709 1157887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0318 13:55:03.823096 1157887 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:55:03.823138 1157887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:55:03.832790 1157887 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 13:55:03.891459 1157887 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 13:55:03.891672 1157887 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 13:55:04.056923 1157887 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 13:55:04.057055 1157887 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 13:55:04.057197 1157887 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 13:55:04.312932 1157887 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 13:55:04.314955 1157887 out.go:204]   - Generating certificates and keys ...
	I0318 13:55:04.315063 1157887 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 13:55:04.315156 1157887 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 13:55:04.315286 1157887 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 13:55:04.315388 1157887 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 13:55:04.315490 1157887 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 13:55:04.315568 1157887 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 13:55:04.315668 1157887 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 13:55:04.315743 1157887 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 13:55:04.315844 1157887 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 13:55:04.315969 1157887 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 13:55:04.316034 1157887 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 13:55:04.316108 1157887 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 13:55:04.643155 1157887 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 13:55:04.927731 1157887 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 13:55:05.058875 1157887 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 13:55:05.221520 1157887 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 13:55:05.221985 1157887 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 13:55:05.224297 1157887 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 13:55:05.226200 1157887 out.go:204]   - Booting up control plane ...
	I0318 13:55:05.226326 1157887 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 13:55:05.226425 1157887 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 13:55:05.226520 1157887 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 13:55:05.244878 1157887 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 13:55:05.245461 1157887 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 13:55:05.245531 1157887 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 13:55:05.388215 1157887 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 13:55:11.393083 1157887 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.004356 seconds
	I0318 13:55:11.393511 1157887 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 13:55:11.412586 1157887 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 13:55:11.939563 1157887 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 13:55:11.939844 1157887 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-569210 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 13:55:12.457349 1157887 kubeadm.go:309] [bootstrap-token] Using token: z44dyw.tsw47dmn862zavdi
	I0318 13:55:12.458855 1157887 out.go:204]   - Configuring RBAC rules ...
	I0318 13:55:12.459037 1157887 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 13:55:12.466850 1157887 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 13:55:12.482822 1157887 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 13:55:12.488920 1157887 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 13:55:12.496947 1157887 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 13:55:12.507954 1157887 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 13:55:12.535337 1157887 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 13:55:12.763814 1157887 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 13:55:12.877248 1157887 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 13:55:12.878047 1157887 kubeadm.go:309] 
	I0318 13:55:12.878159 1157887 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 13:55:12.878183 1157887 kubeadm.go:309] 
	I0318 13:55:12.878291 1157887 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 13:55:12.878301 1157887 kubeadm.go:309] 
	I0318 13:55:12.878334 1157887 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 13:55:12.878432 1157887 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 13:55:12.878519 1157887 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 13:55:12.878531 1157887 kubeadm.go:309] 
	I0318 13:55:12.878603 1157887 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 13:55:12.878615 1157887 kubeadm.go:309] 
	I0318 13:55:12.878690 1157887 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 13:55:12.878703 1157887 kubeadm.go:309] 
	I0318 13:55:12.878762 1157887 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 13:55:12.878858 1157887 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 13:55:12.878974 1157887 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 13:55:12.878985 1157887 kubeadm.go:309] 
	I0318 13:55:12.879087 1157887 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 13:55:12.879164 1157887 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 13:55:12.879171 1157887 kubeadm.go:309] 
	I0318 13:55:12.879275 1157887 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token z44dyw.tsw47dmn862zavdi \
	I0318 13:55:12.879410 1157887 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a1344f0f3711f58fc1cd7b9626e9cee7b8515c38b8838e5f1a0d7979c7202ddf \
	I0318 13:55:12.879464 1157887 kubeadm.go:309] 	--control-plane 
	I0318 13:55:12.879484 1157887 kubeadm.go:309] 
	I0318 13:55:12.879576 1157887 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 13:55:12.879586 1157887 kubeadm.go:309] 
	I0318 13:55:12.879719 1157887 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token z44dyw.tsw47dmn862zavdi \
	I0318 13:55:12.879871 1157887 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a1344f0f3711f58fc1cd7b9626e9cee7b8515c38b8838e5f1a0d7979c7202ddf 
	I0318 13:55:12.883383 1157887 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 13:55:12.883432 1157887 cni.go:84] Creating CNI manager for ""
	I0318 13:55:12.883447 1157887 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:55:12.885248 1157887 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 13:55:12.886708 1157887 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 13:55:12.929444 1157887 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 13:55:13.043416 1157887 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 13:55:13.043541 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:13.043567 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-569210 minikube.k8s.io/updated_at=2024_03_18T13_55_13_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a minikube.k8s.io/name=default-k8s-diff-port-569210 minikube.k8s.io/primary=true
	I0318 13:55:13.064927 1157887 ops.go:34] apiserver oom_adj: -16
	I0318 13:55:13.286093 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:13.786780 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:14.286728 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:14.786442 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:15.287103 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:15.786443 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:16.287138 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:16.113672 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:55:16.113963 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:55:16.787069 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:17.286490 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:17.786317 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:18.286840 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:18.786872 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:19.286911 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:19.786554 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:20.286216 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:20.786282 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:21.286590 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:21.787103 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:22.286966 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:22.786928 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:23.286275 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:23.786464 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:24.286791 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:24.787028 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:24.938400 1157887 kubeadm.go:1107] duration metric: took 11.894943444s to wait for elevateKubeSystemPrivileges
	W0318 13:55:24.938440 1157887 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 13:55:24.938448 1157887 kubeadm.go:393] duration metric: took 5m12.933246555s to StartCluster
	I0318 13:55:24.938470 1157887 settings.go:142] acquiring lock: {Name:mk2d6b94ee5fa5f1dbbb15ba1d5560c3c0f78110 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:55:24.938621 1157887 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 13:55:24.940984 1157887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/kubeconfig: {Name:mk9c139f2702214315ee08dd7c5d02f739047458 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:55:24.941286 1157887 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.3 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 13:55:24.943151 1157887 out.go:177] * Verifying Kubernetes components...
	I0318 13:55:24.941329 1157887 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 13:55:24.941469 1157887 config.go:182] Loaded profile config "default-k8s-diff-port-569210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:55:24.944770 1157887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:55:24.944780 1157887 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-569210"
	I0318 13:55:24.944830 1157887 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-569210"
	W0318 13:55:24.944845 1157887 addons.go:243] addon storage-provisioner should already be in state true
	I0318 13:55:24.944846 1157887 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-569210"
	I0318 13:55:24.944851 1157887 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-569210"
	I0318 13:55:24.944880 1157887 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-569210"
	I0318 13:55:24.944888 1157887 host.go:66] Checking if "default-k8s-diff-port-569210" exists ...
	W0318 13:55:24.944897 1157887 addons.go:243] addon metrics-server should already be in state true
	I0318 13:55:24.944927 1157887 host.go:66] Checking if "default-k8s-diff-port-569210" exists ...
	I0318 13:55:24.944881 1157887 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-569210"
	I0318 13:55:24.945311 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:24.945350 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:24.945375 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:24.945400 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:24.945311 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:24.945460 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:24.963173 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42139
	I0318 13:55:24.963820 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:24.964695 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:55:24.964725 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:24.965120 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:24.965696 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:24.965735 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:24.965976 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43645
	I0318 13:55:24.966207 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43495
	I0318 13:55:24.966502 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:24.966598 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:24.967058 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:55:24.967062 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:55:24.967083 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:24.967100 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:24.967467 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:24.967603 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:24.967671 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetState
	I0318 13:55:24.968107 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:24.968146 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:24.971673 1157887 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-569210"
	W0318 13:55:24.971696 1157887 addons.go:243] addon default-storageclass should already be in state true
	I0318 13:55:24.971729 1157887 host.go:66] Checking if "default-k8s-diff-port-569210" exists ...
	I0318 13:55:24.972091 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:24.972129 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:24.986041 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42211
	I0318 13:55:24.986481 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:24.986989 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:55:24.987009 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:24.987352 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:24.987605 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44555
	I0318 13:55:24.987613 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetState
	I0318 13:55:24.988061 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:24.988481 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:55:24.988499 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:24.988904 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:24.989082 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetState
	I0318 13:55:24.989785 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:55:24.992033 1157887 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 13:55:24.990673 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:55:24.991225 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36687
	I0318 13:55:24.993532 1157887 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 13:55:24.993557 1157887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 13:55:24.993587 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:55:24.995449 1157887 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:55:24.994077 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:24.996749 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:55:24.997153 1157887 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 13:55:24.997171 1157887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 13:55:24.997191 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:55:24.997431 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:55:24.997463 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:55:24.997466 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:55:24.997665 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:55:24.997684 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:24.997746 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:55:24.998183 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:24.998273 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:55:24.998497 1157887 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa Username:docker}
	I0318 13:55:24.998701 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:24.998735 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:24.999951 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:55:25.000431 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:55:25.000454 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:55:25.000676 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:55:25.000865 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:55:25.001021 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:55:25.001160 1157887 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa Username:docker}
	I0318 13:55:25.016442 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32783
	I0318 13:55:25.016827 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:25.017300 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:55:25.017328 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:25.017686 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:25.017906 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetState
	I0318 13:55:25.019440 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:55:25.019694 1157887 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 13:55:25.019711 1157887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 13:55:25.019731 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:55:25.022079 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:55:25.022370 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:55:25.022398 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:55:25.022497 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:55:25.022645 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:55:25.022762 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:55:25.022937 1157887 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa Username:docker}
	I0318 13:55:25.188474 1157887 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:55:25.208092 1157887 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-569210" to be "Ready" ...
	I0318 13:55:25.218757 1157887 node_ready.go:49] node "default-k8s-diff-port-569210" has status "Ready":"True"
	I0318 13:55:25.218789 1157887 node_ready.go:38] duration metric: took 10.658955ms for node "default-k8s-diff-port-569210" to be "Ready" ...
	I0318 13:55:25.218829 1157887 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:55:25.224381 1157887 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:25.235938 1157887 pod_ready.go:92] pod "etcd-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:25.235962 1157887 pod_ready.go:81] duration metric: took 11.550686ms for pod "etcd-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:25.235971 1157887 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:25.242985 1157887 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:25.243014 1157887 pod_ready.go:81] duration metric: took 7.034818ms for pod "kube-apiserver-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:25.243027 1157887 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:25.255777 1157887 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:25.255801 1157887 pod_ready.go:81] duration metric: took 12.766918ms for pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:25.255811 1157887 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2pp8z" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:25.301824 1157887 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 13:55:25.301846 1157887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 13:55:25.330301 1157887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 13:55:25.348473 1157887 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 13:55:25.348500 1157887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 13:55:25.365746 1157887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 13:55:25.398074 1157887 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 13:55:25.398099 1157887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 13:55:25.423951 1157887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 13:55:27.292115 1157887 pod_ready.go:92] pod "kube-proxy-2pp8z" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:27.292202 1157887 pod_ready.go:81] duration metric: took 2.036383518s for pod "kube-proxy-2pp8z" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:27.292227 1157887 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:27.299705 1157887 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:27.299732 1157887 pod_ready.go:81] duration metric: took 7.486631ms for pod "kube-scheduler-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:27.299743 1157887 pod_ready.go:38] duration metric: took 2.08090143s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
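The node_ready/pod_ready waits above poll the API server for the node's Ready condition and for each system-critical pod to report Ready. As a rough illustration only (this is not minikube's own code), a client-go check of a node's Ready condition looks roughly like the sketch below; the kubeconfig path is a placeholder:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the named node has a Ready condition of True.
    func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        // placeholder kubeconfig path, for illustration only
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ready, err := nodeReady(context.Background(), cs, "default-k8s-diff-port-569210")
        fmt.Println("node ready:", ready, "err:", err)
    }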
	I0318 13:55:27.299762 1157887 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:55:27.299824 1157887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:55:27.706241 1157887 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.375885124s)
	I0318 13:55:27.706314 1157887 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:27.706326 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .Close
	I0318 13:55:27.706330 1157887 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.340547601s)
	I0318 13:55:27.706377 1157887 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:27.706392 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .Close
	I0318 13:55:27.706630 1157887 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.282631636s)
	I0318 13:55:27.706900 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | Closing plugin on server side
	I0318 13:55:27.706828 1157887 api_server.go:72] duration metric: took 2.765497711s to wait for apiserver process to appear ...
	I0318 13:55:27.706940 1157887 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:55:27.706879 1157887 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:27.706979 1157887 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:27.706996 1157887 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:27.707024 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .Close
	I0318 13:55:27.706916 1157887 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:27.707088 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .Close
	I0318 13:55:27.706985 1157887 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0318 13:55:27.707343 1157887 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:27.707366 1157887 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:27.707372 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | Closing plugin on server side
	I0318 13:55:27.707405 1157887 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:27.707417 1157887 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:27.707426 1157887 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:27.707455 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .Close
	I0318 13:55:27.707682 1157887 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:27.707696 1157887 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:27.707706 1157887 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-569210"
	I0318 13:55:27.708614 1157887 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:27.708664 1157887 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:27.708694 1157887 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:27.708783 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .Close
	I0318 13:55:27.709092 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | Closing plugin on server side
	I0318 13:55:27.709151 1157887 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:27.709175 1157887 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:27.718110 1157887 api_server.go:279] https://192.168.61.3:8444/healthz returned 200:
	ok
	I0318 13:55:27.719497 1157887 api_server.go:141] control plane version: v1.28.4
	I0318 13:55:27.719518 1157887 api_server.go:131] duration metric: took 12.563372ms to wait for apiserver health ...
	I0318 13:55:27.719526 1157887 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 13:55:27.739882 1157887 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:27.739914 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .Close
	I0318 13:55:27.740263 1157887 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:27.740296 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | Closing plugin on server side
	I0318 13:55:27.740318 1157887 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:27.742102 1157887 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0318 13:55:27.368024 1157263 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (33.223901258s)
	I0318 13:55:27.368118 1157263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:55:27.388474 1157263 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:55:27.402749 1157263 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:55:27.417121 1157263 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:55:27.417184 1157263 kubeadm.go:156] found existing configuration files:
	
	I0318 13:55:27.417235 1157263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:55:27.429920 1157263 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:55:27.429997 1157263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:55:27.442468 1157263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:55:27.454842 1157263 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:55:27.454913 1157263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:55:27.467911 1157263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:55:27.480201 1157263 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:55:27.480272 1157263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:55:27.496430 1157263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:55:27.512020 1157263 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:55:27.512092 1157263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:55:27.528102 1157263 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 13:55:27.601072 1157263 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 13:55:27.601235 1157263 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 13:55:27.796445 1157263 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 13:55:27.796574 1157263 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 13:55:27.796730 1157263 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 13:55:28.079026 1157263 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 13:55:27.743429 1157887 addons.go:505] duration metric: took 2.802098895s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I0318 13:55:27.744694 1157887 system_pods.go:59] 9 kube-system pods found
	I0318 13:55:27.744727 1157887 system_pods.go:61] "coredns-5dd5756b68-j5qxm" [164d2cc3-0891-4fcd-81bd-34d7cf0c691c] Running
	I0318 13:55:27.744733 1157887 system_pods.go:61] "coredns-5dd5756b68-xdcht" [bf264558-6c11-44c9-82d6-ea23aea43dc9] Running
	I0318 13:55:27.744738 1157887 system_pods.go:61] "etcd-default-k8s-diff-port-569210" [8d51c0c6-6005-4f76-917c-20f07b73742f] Running
	I0318 13:55:27.744744 1157887 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-569210" [31a8160d-14db-4383-b833-a8bc3f5990ba] Running
	I0318 13:55:27.744750 1157887 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-569210" [173e4d84-8dc2-47fc-9c4d-ed613d180813] Running
	I0318 13:55:27.744756 1157887 system_pods.go:61] "kube-proxy-2pp8z" [912b3f56-3df6-485f-a01a-60801b867b86] Running
	I0318 13:55:27.744764 1157887 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-569210" [1ee4e8f8-3fad-45a8-be35-25a879aaaa7b] Running
	I0318 13:55:27.744777 1157887 system_pods.go:61] "metrics-server-57f55c9bc5-ng9ww" [4c8209dc-b6ba-427d-ba32-0da4993b0902] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:55:27.744783 1157887 system_pods.go:61] "storage-provisioner" [f0dfdeb1-f567-41df-98c3-7987f0fd7b2b] Pending
	I0318 13:55:27.744797 1157887 system_pods.go:74] duration metric: took 25.264322ms to wait for pod list to return data ...
	I0318 13:55:27.744810 1157887 default_sa.go:34] waiting for default service account to be created ...
	I0318 13:55:27.755398 1157887 default_sa.go:45] found service account: "default"
	I0318 13:55:27.755427 1157887 default_sa.go:55] duration metric: took 10.607153ms for default service account to be created ...
	I0318 13:55:27.755439 1157887 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 13:55:27.815477 1157887 system_pods.go:86] 9 kube-system pods found
	I0318 13:55:27.815507 1157887 system_pods.go:89] "coredns-5dd5756b68-j5qxm" [164d2cc3-0891-4fcd-81bd-34d7cf0c691c] Running
	I0318 13:55:27.815512 1157887 system_pods.go:89] "coredns-5dd5756b68-xdcht" [bf264558-6c11-44c9-82d6-ea23aea43dc9] Running
	I0318 13:55:27.815517 1157887 system_pods.go:89] "etcd-default-k8s-diff-port-569210" [8d51c0c6-6005-4f76-917c-20f07b73742f] Running
	I0318 13:55:27.815521 1157887 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-569210" [31a8160d-14db-4383-b833-a8bc3f5990ba] Running
	I0318 13:55:27.815526 1157887 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-569210" [173e4d84-8dc2-47fc-9c4d-ed613d180813] Running
	I0318 13:55:27.815529 1157887 system_pods.go:89] "kube-proxy-2pp8z" [912b3f56-3df6-485f-a01a-60801b867b86] Running
	I0318 13:55:27.815533 1157887 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-569210" [1ee4e8f8-3fad-45a8-be35-25a879aaaa7b] Running
	I0318 13:55:27.815540 1157887 system_pods.go:89] "metrics-server-57f55c9bc5-ng9ww" [4c8209dc-b6ba-427d-ba32-0da4993b0902] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:55:27.815546 1157887 system_pods.go:89] "storage-provisioner" [f0dfdeb1-f567-41df-98c3-7987f0fd7b2b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 13:55:27.815557 1157887 system_pods.go:126] duration metric: took 60.111832ms to wait for k8s-apps to be running ...
	I0318 13:55:27.815566 1157887 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 13:55:27.815610 1157887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:55:27.834266 1157887 system_svc.go:56] duration metric: took 18.687554ms WaitForService to wait for kubelet
	I0318 13:55:27.834304 1157887 kubeadm.go:576] duration metric: took 2.892974502s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:55:27.834345 1157887 node_conditions.go:102] verifying NodePressure condition ...
	I0318 13:55:28.013031 1157887 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:55:28.013095 1157887 node_conditions.go:123] node cpu capacity is 2
	I0318 13:55:28.013148 1157887 node_conditions.go:105] duration metric: took 178.79502ms to run NodePressure ...
	I0318 13:55:28.013169 1157887 start.go:240] waiting for startup goroutines ...
	I0318 13:55:28.013181 1157887 start.go:245] waiting for cluster config update ...
	I0318 13:55:28.013199 1157887 start.go:254] writing updated cluster config ...
	I0318 13:55:28.013519 1157887 ssh_runner.go:195] Run: rm -f paused
	I0318 13:55:28.092810 1157887 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 13:55:28.095783 1157887 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-569210" cluster and "default" namespace by default
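The healthz probe logged at 13:55:27 above is a plain HTTPS GET against the apiserver's /healthz endpoint, with an HTTP 200 and body "ok" treated as healthy. A minimal sketch of that kind of probe, assuming the apiserver address from the log and skipping TLS verification purely to keep it short (minikube's own check trusts the cluster CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // InsecureSkipVerify is for illustration only; a real check should
        // verify the apiserver certificate against the cluster CA
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.61.3:8444/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        // an HTTP 200 with body "ok" is what the log above reports as healthy
        fmt.Println(resp.StatusCode, string(body))
    }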
	I0318 13:55:28.080939 1157263 out.go:204]   - Generating certificates and keys ...
	I0318 13:55:28.081056 1157263 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 13:55:28.081145 1157263 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 13:55:28.081249 1157263 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 13:55:28.082078 1157263 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 13:55:28.082860 1157263 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 13:55:28.083397 1157263 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 13:55:28.084597 1157263 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 13:55:28.084941 1157263 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 13:55:28.085603 1157263 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 13:55:28.086461 1157263 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 13:55:28.087265 1157263 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 13:55:28.087343 1157263 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 13:55:28.348996 1157263 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 13:55:28.516513 1157263 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 13:55:28.585513 1157263 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 13:55:28.817150 1157263 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 13:55:28.817900 1157263 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 13:55:28.820280 1157263 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 13:55:28.822114 1157263 out.go:204]   - Booting up control plane ...
	I0318 13:55:28.822217 1157263 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 13:55:28.822811 1157263 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 13:55:28.825310 1157263 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 13:55:28.845906 1157263 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 13:55:28.847013 1157263 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 13:55:28.847069 1157263 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 13:55:28.992421 1157263 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 13:55:35.495384 1157263 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.502688 seconds
	I0318 13:55:35.495578 1157263 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 13:55:35.517088 1157263 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 13:55:36.049915 1157263 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 13:55:36.050163 1157263 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-173036 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 13:55:36.571450 1157263 kubeadm.go:309] [bootstrap-token] Using token: a1fi6l.v36l7wrnalucsepl
	I0318 13:55:36.573263 1157263 out.go:204]   - Configuring RBAC rules ...
	I0318 13:55:36.573448 1157263 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 13:55:36.581322 1157263 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 13:55:36.594853 1157263 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 13:55:36.598538 1157263 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 13:55:36.602430 1157263 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 13:55:36.605534 1157263 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 13:55:36.621332 1157263 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 13:55:36.865518 1157263 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 13:55:36.990015 1157263 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 13:55:36.991079 1157263 kubeadm.go:309] 
	I0318 13:55:36.991168 1157263 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 13:55:36.991181 1157263 kubeadm.go:309] 
	I0318 13:55:36.991288 1157263 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 13:55:36.991299 1157263 kubeadm.go:309] 
	I0318 13:55:36.991320 1157263 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 13:55:36.991395 1157263 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 13:55:36.991475 1157263 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 13:55:36.991494 1157263 kubeadm.go:309] 
	I0318 13:55:36.991572 1157263 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 13:55:36.991581 1157263 kubeadm.go:309] 
	I0318 13:55:36.991646 1157263 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 13:55:36.991658 1157263 kubeadm.go:309] 
	I0318 13:55:36.991737 1157263 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 13:55:36.991839 1157263 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 13:55:36.991954 1157263 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 13:55:36.991966 1157263 kubeadm.go:309] 
	I0318 13:55:36.992073 1157263 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 13:55:36.992174 1157263 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 13:55:36.992186 1157263 kubeadm.go:309] 
	I0318 13:55:36.992304 1157263 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token a1fi6l.v36l7wrnalucsepl \
	I0318 13:55:36.992477 1157263 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a1344f0f3711f58fc1cd7b9626e9cee7b8515c38b8838e5f1a0d7979c7202ddf \
	I0318 13:55:36.992522 1157263 kubeadm.go:309] 	--control-plane 
	I0318 13:55:36.992532 1157263 kubeadm.go:309] 
	I0318 13:55:36.992642 1157263 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 13:55:36.992656 1157263 kubeadm.go:309] 
	I0318 13:55:36.992769 1157263 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token a1fi6l.v36l7wrnalucsepl \
	I0318 13:55:36.992922 1157263 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a1344f0f3711f58fc1cd7b9626e9cee7b8515c38b8838e5f1a0d7979c7202ddf 
	I0318 13:55:36.994542 1157263 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 13:55:36.994648 1157263 cni.go:84] Creating CNI manager for ""
	I0318 13:55:36.994660 1157263 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:55:36.996526 1157263 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 13:55:36.997929 1157263 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 13:55:37.047757 1157263 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
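The bridge CNI step above creates /etc/cni/net.d and copies a 457-byte conflist into it. As a rough sketch only, a bridge conflist of this general shape is written below; the name, subnet, and plugin fields are illustrative assumptions, not the exact contents of minikube's 1-k8s.conflist:

    package main

    import "os"

    // placeholder bridge conflist; field values are illustrative assumptions
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "k8s-bridge-example",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }`

    func main() {
        // mirrors the logged "sudo mkdir -p /etc/cni/net.d" and scp steps; needs root
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            panic(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }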
	I0318 13:55:37.075078 1157263 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 13:55:37.075167 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:37.075199 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-173036 minikube.k8s.io/updated_at=2024_03_18T13_55_37_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a minikube.k8s.io/name=embed-certs-173036 minikube.k8s.io/primary=true
	I0318 13:55:37.236857 1157263 ops.go:34] apiserver oom_adj: -16
	I0318 13:55:37.422453 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:37.922622 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:38.423527 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:38.922743 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:39.422721 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:39.923438 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:40.422599 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:40.923170 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:41.422812 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:41.922526 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:42.422594 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:42.922835 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:43.423479 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:43.923114 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:44.422672 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:44.922883 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:45.422863 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:45.922770 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:46.423473 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:46.923125 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:47.423378 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:47.923366 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:48.422566 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:48.923231 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:49.422505 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:49.554542 1157263 kubeadm.go:1107] duration metric: took 12.479441091s to wait for elevateKubeSystemPrivileges
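The repeated "kubectl get sa default" runs above are a simple poll: re-check for the default service account roughly every 500ms until it exists or a deadline passes (about 12.5s in this run). A bare-bones sketch of that wait pattern, with a placeholder timeout and a kubectl on PATH rather than minikube's bundled binary:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        // placeholder deadline; the logged wait completed in roughly 12.5s
        deadline := time.Now().Add(2 * time.Minute)
        for {
            // assumes kubectl is on PATH; minikube invokes its own bundled binary
            if err := exec.Command("kubectl", "get", "sa", "default").Run(); err == nil {
                fmt.Println("default service account exists")
                return
            }
            if time.Now().After(deadline) {
                panic("timed out waiting for the default service account")
            }
            time.Sleep(500 * time.Millisecond)
        }
    }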
	W0318 13:55:49.554590 1157263 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 13:55:49.554602 1157263 kubeadm.go:393] duration metric: took 5m13.226983757s to StartCluster
	I0318 13:55:49.554626 1157263 settings.go:142] acquiring lock: {Name:mk2d6b94ee5fa5f1dbbb15ba1d5560c3c0f78110 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:55:49.554778 1157263 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 13:55:49.556962 1157263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/kubeconfig: {Name:mk9c139f2702214315ee08dd7c5d02f739047458 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:55:49.557273 1157263 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.191 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 13:55:49.558774 1157263 out.go:177] * Verifying Kubernetes components...
	I0318 13:55:49.557321 1157263 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 13:55:49.557488 1157263 config.go:182] Loaded profile config "embed-certs-173036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:55:49.560195 1157263 addons.go:69] Setting default-storageclass=true in profile "embed-certs-173036"
	I0318 13:55:49.560201 1157263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:55:49.560211 1157263 addons.go:69] Setting metrics-server=true in profile "embed-certs-173036"
	I0318 13:55:49.560237 1157263 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-173036"
	I0318 13:55:49.560247 1157263 addons.go:234] Setting addon metrics-server=true in "embed-certs-173036"
	W0318 13:55:49.560254 1157263 addons.go:243] addon metrics-server should already be in state true
	I0318 13:55:49.560201 1157263 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-173036"
	I0318 13:55:49.560282 1157263 host.go:66] Checking if "embed-certs-173036" exists ...
	I0318 13:55:49.560302 1157263 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-173036"
	W0318 13:55:49.560317 1157263 addons.go:243] addon storage-provisioner should already be in state true
	I0318 13:55:49.560388 1157263 host.go:66] Checking if "embed-certs-173036" exists ...
	I0318 13:55:49.560644 1157263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:49.560676 1157263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:49.560678 1157263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:49.560716 1157263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:49.560777 1157263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:49.560803 1157263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:49.577682 1157263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32889
	I0318 13:55:49.577714 1157263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38841
	I0318 13:55:49.578101 1157263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46617
	I0318 13:55:49.578261 1157263 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:49.578285 1157263 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:49.578493 1157263 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:49.578880 1157263 main.go:141] libmachine: Using API Version  1
	I0318 13:55:49.578907 1157263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:49.578882 1157263 main.go:141] libmachine: Using API Version  1
	I0318 13:55:49.578923 1157263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:49.579013 1157263 main.go:141] libmachine: Using API Version  1
	I0318 13:55:49.579036 1157263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:49.579302 1157263 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:49.579333 1157263 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:49.579538 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetState
	I0318 13:55:49.579598 1157263 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:49.579914 1157263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:49.579955 1157263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:49.580203 1157263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:49.580238 1157263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:49.583587 1157263 addons.go:234] Setting addon default-storageclass=true in "embed-certs-173036"
	W0318 13:55:49.583610 1157263 addons.go:243] addon default-storageclass should already be in state true
	I0318 13:55:49.583641 1157263 host.go:66] Checking if "embed-certs-173036" exists ...
	I0318 13:55:49.584009 1157263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:49.584040 1157263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:49.596862 1157263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46015
	I0318 13:55:49.597356 1157263 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:49.597859 1157263 main.go:141] libmachine: Using API Version  1
	I0318 13:55:49.598026 1157263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:49.598110 1157263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38169
	I0318 13:55:49.598635 1157263 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:49.599310 1157263 main.go:141] libmachine: Using API Version  1
	I0318 13:55:49.599331 1157263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:49.599405 1157263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36747
	I0318 13:55:49.599732 1157263 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:49.599874 1157263 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:49.600120 1157263 main.go:141] libmachine: Using API Version  1
	I0318 13:55:49.600135 1157263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:49.600197 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetState
	I0318 13:55:49.600439 1157263 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:49.601019 1157263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:49.601052 1157263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:49.602172 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:55:49.604115 1157263 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:55:49.606034 1157263 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 13:55:49.606049 1157263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 13:55:49.606065 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:55:49.603277 1157263 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:49.606323 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetState
	I0318 13:55:49.608600 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:55:49.610213 1157263 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 13:55:49.611511 1157263 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 13:55:49.611531 1157263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 13:55:49.611545 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:55:49.609758 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:55:49.611598 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:55:49.611613 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:55:49.610550 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:55:49.611727 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:55:49.611868 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:55:49.611991 1157263 sshutil.go:53] new ssh client: &{IP:192.168.50.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa Username:docker}
	I0318 13:55:49.614689 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:55:49.615105 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:55:49.615322 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:55:49.615403 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:55:49.615531 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:55:49.615672 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:55:49.615773 1157263 sshutil.go:53] new ssh client: &{IP:192.168.50.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa Username:docker}
	I0318 13:55:49.620257 1157263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41729
	I0318 13:55:49.620653 1157263 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:49.621225 1157263 main.go:141] libmachine: Using API Version  1
	I0318 13:55:49.621243 1157263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:49.621610 1157263 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:49.621790 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetState
	I0318 13:55:49.623303 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:55:49.623566 1157263 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 13:55:49.623580 1157263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 13:55:49.623594 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:55:49.626325 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:55:49.626733 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:55:49.626755 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:55:49.627028 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:55:49.627196 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:55:49.627335 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:55:49.627441 1157263 sshutil.go:53] new ssh client: &{IP:192.168.50.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa Username:docker}
	I0318 13:55:49.791524 1157263 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:55:49.847829 1157263 node_ready.go:35] waiting up to 6m0s for node "embed-certs-173036" to be "Ready" ...
	I0318 13:55:49.860595 1157263 node_ready.go:49] node "embed-certs-173036" has status "Ready":"True"
	I0318 13:55:49.860621 1157263 node_ready.go:38] duration metric: took 12.757412ms for node "embed-certs-173036" to be "Ready" ...
	I0318 13:55:49.860631 1157263 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:55:49.870524 1157263 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ft594" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:49.917170 1157263 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 13:55:49.917197 1157263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 13:55:49.965845 1157263 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 13:55:49.965871 1157263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 13:55:49.969600 1157263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 13:55:49.982887 1157263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 13:55:50.023768 1157263 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 13:55:50.023795 1157263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 13:55:50.139120 1157263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 13:55:51.877589 1157263 pod_ready.go:92] pod "coredns-5dd5756b68-ft594" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:51.877618 1157263 pod_ready.go:81] duration metric: took 2.007066644s for pod "coredns-5dd5756b68-ft594" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:51.877634 1157263 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-p6dw8" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.007908 1157263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.02498147s)
	I0318 13:55:52.007966 1157263 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:52.007979 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .Close
	I0318 13:55:52.008318 1157263 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:52.008378 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | Closing plugin on server side
	I0318 13:55:52.008383 1157263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:52.008408 1157263 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:52.008427 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .Close
	I0318 13:55:52.008713 1157263 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:52.008827 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | Closing plugin on server side
	I0318 13:55:52.008853 1157263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:52.009491 1157263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.039858476s)
	I0318 13:55:52.009567 1157263 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:52.009595 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .Close
	I0318 13:55:52.010239 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | Closing plugin on server side
	I0318 13:55:52.010242 1157263 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:52.010276 1157263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:52.010289 1157263 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:52.010301 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .Close
	I0318 13:55:52.010553 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | Closing plugin on server side
	I0318 13:55:52.010568 1157263 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:52.010578 1157263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:52.026035 1157263 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:52.026056 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .Close
	I0318 13:55:52.026364 1157263 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:52.026385 1157263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:52.202596 1157263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.063427726s)
	I0318 13:55:52.202663 1157263 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:52.202686 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .Close
	I0318 13:55:52.202999 1157263 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:52.203021 1157263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:52.203032 1157263 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:52.203040 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .Close
	I0318 13:55:52.203321 1157263 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:52.203338 1157263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:52.203352 1157263 addons.go:470] Verifying addon metrics-server=true in "embed-certs-173036"
	I0318 13:55:52.205372 1157263 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0318 13:55:52.207184 1157263 addons.go:505] duration metric: took 2.649872416s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0318 13:55:52.391839 1157263 pod_ready.go:92] pod "coredns-5dd5756b68-p6dw8" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:52.391878 1157263 pod_ready.go:81] duration metric: took 514.235543ms for pod "coredns-5dd5756b68-p6dw8" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.391891 1157263 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.398044 1157263 pod_ready.go:92] pod "etcd-embed-certs-173036" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:52.398075 1157263 pod_ready.go:81] duration metric: took 6.176672ms for pod "etcd-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.398091 1157263 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.403790 1157263 pod_ready.go:92] pod "kube-apiserver-embed-certs-173036" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:52.403809 1157263 pod_ready.go:81] duration metric: took 5.70927ms for pod "kube-apiserver-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.403817 1157263 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.414956 1157263 pod_ready.go:92] pod "kube-controller-manager-embed-certs-173036" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:52.414976 1157263 pod_ready.go:81] duration metric: took 11.153442ms for pod "kube-controller-manager-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.414986 1157263 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lp9mc" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.674125 1157263 pod_ready.go:92] pod "kube-proxy-lp9mc" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:52.674151 1157263 pod_ready.go:81] duration metric: took 259.158776ms for pod "kube-proxy-lp9mc" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.674160 1157263 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:53.075385 1157263 pod_ready.go:92] pod "kube-scheduler-embed-certs-173036" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:53.075420 1157263 pod_ready.go:81] duration metric: took 401.251175ms for pod "kube-scheduler-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:53.075432 1157263 pod_ready.go:38] duration metric: took 3.214790175s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:55:53.075452 1157263 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:55:53.075523 1157263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:55:53.092916 1157263 api_server.go:72] duration metric: took 3.53560403s to wait for apiserver process to appear ...
	I0318 13:55:53.092948 1157263 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:55:53.093027 1157263 api_server.go:253] Checking apiserver healthz at https://192.168.50.191:8443/healthz ...
	I0318 13:55:53.098715 1157263 api_server.go:279] https://192.168.50.191:8443/healthz returned 200:
	ok
	I0318 13:55:53.100073 1157263 api_server.go:141] control plane version: v1.28.4
	I0318 13:55:53.100102 1157263 api_server.go:131] duration metric: took 7.134408ms to wait for apiserver health ...
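	The healthz probe above is a plain HTTPS GET against the apiserver. A hedged equivalent from the host, reusing the endpoint from the log and skipping certificate verification for brevity:
	
		curl -k https://192.168.50.191:8443/healthz
	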
	I0318 13:55:53.100113 1157263 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 13:55:53.278961 1157263 system_pods.go:59] 9 kube-system pods found
	I0318 13:55:53.278993 1157263 system_pods.go:61] "coredns-5dd5756b68-ft594" [46e6863a-0b5e-434e-b13c-d33e9ed15007] Running
	I0318 13:55:53.278998 1157263 system_pods.go:61] "coredns-5dd5756b68-p6dw8" [c03d9bbe-1493-44a4-be19-1e387ff6eaef] Running
	I0318 13:55:53.279002 1157263 system_pods.go:61] "etcd-embed-certs-173036" [0351a0a6-7bf0-49b7-b767-b1009ea8f8b3] Running
	I0318 13:55:53.279005 1157263 system_pods.go:61] "kube-apiserver-embed-certs-173036" [d045c63b-ff93-4ebc-a727-486fbad1d1b6] Running
	I0318 13:55:53.279010 1157263 system_pods.go:61] "kube-controller-manager-embed-certs-173036" [77925f6c-f839-44ce-8438-0b2ff22eb538] Running
	I0318 13:55:53.279013 1157263 system_pods.go:61] "kube-proxy-lp9mc" [4d2d1ef6-fb3b-4910-9e70-401dfa0c47e0] Running
	I0318 13:55:53.279017 1157263 system_pods.go:61] "kube-scheduler-embed-certs-173036" [a63fa49c-e09a-43ef-b0a2-f778c256c0ab] Running
	I0318 13:55:53.279023 1157263 system_pods.go:61] "metrics-server-57f55c9bc5-vzv79" [1fc71314-b3e7-4113-b254-557ec39eef43] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:55:53.279026 1157263 system_pods.go:61] "storage-provisioner" [a37883b5-9db5-467e-9b91-40f6ea69c18e] Running
	I0318 13:55:53.279037 1157263 system_pods.go:74] duration metric: took 178.915393ms to wait for pod list to return data ...
	I0318 13:55:53.279047 1157263 default_sa.go:34] waiting for default service account to be created ...
	I0318 13:55:53.475094 1157263 default_sa.go:45] found service account: "default"
	I0318 13:55:53.475123 1157263 default_sa.go:55] duration metric: took 196.069593ms for default service account to be created ...
	I0318 13:55:53.475133 1157263 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 13:55:53.678384 1157263 system_pods.go:86] 9 kube-system pods found
	I0318 13:55:53.678413 1157263 system_pods.go:89] "coredns-5dd5756b68-ft594" [46e6863a-0b5e-434e-b13c-d33e9ed15007] Running
	I0318 13:55:53.678418 1157263 system_pods.go:89] "coredns-5dd5756b68-p6dw8" [c03d9bbe-1493-44a4-be19-1e387ff6eaef] Running
	I0318 13:55:53.678422 1157263 system_pods.go:89] "etcd-embed-certs-173036" [0351a0a6-7bf0-49b7-b767-b1009ea8f8b3] Running
	I0318 13:55:53.678427 1157263 system_pods.go:89] "kube-apiserver-embed-certs-173036" [d045c63b-ff93-4ebc-a727-486fbad1d1b6] Running
	I0318 13:55:53.678431 1157263 system_pods.go:89] "kube-controller-manager-embed-certs-173036" [77925f6c-f839-44ce-8438-0b2ff22eb538] Running
	I0318 13:55:53.678436 1157263 system_pods.go:89] "kube-proxy-lp9mc" [4d2d1ef6-fb3b-4910-9e70-401dfa0c47e0] Running
	I0318 13:55:53.678439 1157263 system_pods.go:89] "kube-scheduler-embed-certs-173036" [a63fa49c-e09a-43ef-b0a2-f778c256c0ab] Running
	I0318 13:55:53.678447 1157263 system_pods.go:89] "metrics-server-57f55c9bc5-vzv79" [1fc71314-b3e7-4113-b254-557ec39eef43] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:55:53.678455 1157263 system_pods.go:89] "storage-provisioner" [a37883b5-9db5-467e-9b91-40f6ea69c18e] Running
	I0318 13:55:53.678464 1157263 system_pods.go:126] duration metric: took 203.32588ms to wait for k8s-apps to be running ...
	I0318 13:55:53.678473 1157263 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 13:55:53.678531 1157263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:55:53.698244 1157263 system_svc.go:56] duration metric: took 19.758793ms WaitForService to wait for kubelet
	I0318 13:55:53.698279 1157263 kubeadm.go:576] duration metric: took 4.140974066s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:55:53.698307 1157263 node_conditions.go:102] verifying NodePressure condition ...
	I0318 13:55:53.876137 1157263 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:55:53.876162 1157263 node_conditions.go:123] node cpu capacity is 2
	I0318 13:55:53.876173 1157263 node_conditions.go:105] duration metric: took 177.861272ms to run NodePressure ...
	I0318 13:55:53.876184 1157263 start.go:240] waiting for startup goroutines ...
	I0318 13:55:53.876191 1157263 start.go:245] waiting for cluster config update ...
	I0318 13:55:53.876202 1157263 start.go:254] writing updated cluster config ...
	I0318 13:55:53.876907 1157263 ssh_runner.go:195] Run: rm -f paused
	I0318 13:55:53.931596 1157263 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 13:55:53.933499 1157263 out.go:177] * Done! kubectl is now configured to use "embed-certs-173036" cluster and "default" namespace by default
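	With the start complete, kubectl on the host is pointed at this cluster. A minimal verification sketch, assuming only the context name reported in the line above:
	
		kubectl --context embed-certs-173036 get nodes
		kubectl --context embed-certs-173036 -n kube-system get pods
	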
	I0318 13:55:56.115397 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:55:56.115674 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:55:56.115714 1157708 kubeadm.go:309] 
	I0318 13:55:56.115782 1157708 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 13:55:56.115840 1157708 kubeadm.go:309] 		timed out waiting for the condition
	I0318 13:55:56.115849 1157708 kubeadm.go:309] 
	I0318 13:55:56.115908 1157708 kubeadm.go:309] 	This error is likely caused by:
	I0318 13:55:56.115979 1157708 kubeadm.go:309] 		- The kubelet is not running
	I0318 13:55:56.116102 1157708 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 13:55:56.116112 1157708 kubeadm.go:309] 
	I0318 13:55:56.116242 1157708 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 13:55:56.116289 1157708 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 13:55:56.116349 1157708 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 13:55:56.116370 1157708 kubeadm.go:309] 
	I0318 13:55:56.116506 1157708 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 13:55:56.116645 1157708 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 13:55:56.116665 1157708 kubeadm.go:309] 
	I0318 13:55:56.116804 1157708 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 13:55:56.116897 1157708 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 13:55:56.117005 1157708 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 13:55:56.117094 1157708 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 13:55:56.117110 1157708 kubeadm.go:309] 
	I0318 13:55:56.117680 1157708 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 13:55:56.117813 1157708 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 13:55:56.117934 1157708 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0318 13:55:56.118052 1157708 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
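	The troubleshooting commands listed in this output are meant to be run inside the node. A minimal sketch of reaching them from the host via minikube, with <profile> as a placeholder for the affected profile (this run is not named in the excerpt):
	
		minikube -p <profile> ssh "sudo systemctl status kubelet"
		minikube -p <profile> ssh "sudo journalctl -xeu kubelet"
		minikube -p <profile> ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	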
	
	I0318 13:55:56.118124 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 13:55:57.920938 1157708 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.802776126s)
	I0318 13:55:57.921031 1157708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:55:57.939226 1157708 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:55:57.952304 1157708 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:55:57.952342 1157708 kubeadm.go:156] found existing configuration files:
	
	I0318 13:55:57.952404 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:55:57.964632 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:55:57.964695 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:55:57.977306 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:55:57.989728 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:55:57.989790 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:55:58.001661 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:55:58.013078 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:55:58.013160 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:55:58.024891 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:55:58.036171 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:55:58.036225 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:55:58.048156 1157708 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 13:55:58.128356 1157708 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 13:55:58.128445 1157708 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 13:55:58.297704 1157708 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 13:55:58.297897 1157708 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 13:55:58.298048 1157708 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 13:55:58.515521 1157708 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 13:55:58.517569 1157708 out.go:204]   - Generating certificates and keys ...
	I0318 13:55:58.517679 1157708 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 13:55:58.517760 1157708 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 13:55:58.517830 1157708 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 13:55:58.517908 1157708 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 13:55:58.517980 1157708 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 13:55:58.518047 1157708 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 13:55:58.518280 1157708 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 13:55:58.519078 1157708 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 13:55:58.520081 1157708 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 13:55:58.521268 1157708 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 13:55:58.521861 1157708 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 13:55:58.521936 1157708 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 13:55:58.762418 1157708 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 13:55:58.999746 1157708 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 13:55:59.214448 1157708 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 13:55:59.402662 1157708 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 13:55:59.421555 1157708 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 13:55:59.423151 1157708 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 13:55:59.423233 1157708 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 13:55:59.560412 1157708 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 13:55:59.563125 1157708 out.go:204]   - Booting up control plane ...
	I0318 13:55:59.563274 1157708 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 13:55:59.571364 1157708 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 13:55:59.572936 1157708 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 13:55:59.573987 1157708 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 13:55:59.586689 1157708 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 13:56:39.588627 1157708 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 13:56:39.588942 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:56:39.589128 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:56:44.589564 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:56:44.589852 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:56:54.590311 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:56:54.590619 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:57:14.591571 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:57:14.591866 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:57:54.594170 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:57:54.594433 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:57:54.594448 1157708 kubeadm.go:309] 
	I0318 13:57:54.594490 1157708 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 13:57:54.594540 1157708 kubeadm.go:309] 		timed out waiting for the condition
	I0318 13:57:54.594549 1157708 kubeadm.go:309] 
	I0318 13:57:54.594594 1157708 kubeadm.go:309] 	This error is likely caused by:
	I0318 13:57:54.594641 1157708 kubeadm.go:309] 		- The kubelet is not running
	I0318 13:57:54.594800 1157708 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 13:57:54.594811 1157708 kubeadm.go:309] 
	I0318 13:57:54.594950 1157708 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 13:57:54.595000 1157708 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 13:57:54.595046 1157708 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 13:57:54.595056 1157708 kubeadm.go:309] 
	I0318 13:57:54.595163 1157708 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 13:57:54.595297 1157708 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 13:57:54.595312 1157708 kubeadm.go:309] 
	I0318 13:57:54.595471 1157708 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 13:57:54.595605 1157708 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 13:57:54.595716 1157708 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 13:57:54.595812 1157708 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 13:57:54.595827 1157708 kubeadm.go:309] 
	I0318 13:57:54.596636 1157708 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 13:57:54.596805 1157708 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 13:57:54.596972 1157708 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0318 13:57:54.597014 1157708 kubeadm.go:393] duration metric: took 8m1.551231902s to StartCluster
	I0318 13:57:54.597076 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:57:54.597174 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:57:54.649451 1157708 cri.go:89] found id: ""
	I0318 13:57:54.649484 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.649496 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:57:54.649506 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:57:54.649577 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:57:54.692278 1157708 cri.go:89] found id: ""
	I0318 13:57:54.692317 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.692339 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:57:54.692349 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:57:54.692427 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:57:54.731034 1157708 cri.go:89] found id: ""
	I0318 13:57:54.731062 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.731071 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:57:54.731077 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:57:54.731135 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:57:54.769883 1157708 cri.go:89] found id: ""
	I0318 13:57:54.769913 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.769923 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:57:54.769931 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:57:54.769996 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:57:54.808620 1157708 cri.go:89] found id: ""
	I0318 13:57:54.808648 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.808656 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:57:54.808661 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:57:54.808715 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:57:54.849207 1157708 cri.go:89] found id: ""
	I0318 13:57:54.849245 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.849256 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:57:54.849264 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:57:54.849334 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:57:54.918479 1157708 cri.go:89] found id: ""
	I0318 13:57:54.918508 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.918520 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:57:54.918528 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:57:54.918597 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:57:54.958828 1157708 cri.go:89] found id: ""
	I0318 13:57:54.958861 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.958871 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:57:54.958887 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:57:54.958906 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:57:55.078045 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:57:55.078092 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:57:55.123043 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:57:55.123077 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:57:55.180480 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:57:55.180518 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:57:55.197264 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:57:55.197316 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:57:55.291264 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0318 13:57:55.291325 1157708 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0318 13:57:55.291395 1157708 out.go:239] * 
	W0318 13:57:55.291477 1157708 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 13:57:55.291502 1157708 out.go:239] * 
	W0318 13:57:55.292511 1157708 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:57:55.295566 1157708 out.go:177] 
	W0318 13:57:55.296840 1157708 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 13:57:55.296903 1157708 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0318 13:57:55.296941 1157708 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0318 13:57:55.298417 1157708 out.go:177] 
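	The suggestion above targets the kubelet cgroup driver. A hedged sketch of acting on it, with <profile> again standing in for the affected profile; the --extra-config flag is taken verbatim from the suggestion:
	
		minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd
	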
	
	
	==> CRI-O <==
	Mar 18 14:04:56 embed-certs-173036 crio[703]: time="2024-03-18 14:04:56.042402576Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710770696042379582,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2382b188-3857-4826-bf21-978f74b82df6 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:04:56 embed-certs-173036 crio[703]: time="2024-03-18 14:04:56.043296274Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9b850bb4-4088-493b-ba1a-2a343b5a7144 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:04:56 embed-certs-173036 crio[703]: time="2024-03-18 14:04:56.043375534Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9b850bb4-4088-493b-ba1a-2a343b5a7144 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:04:56 embed-certs-173036 crio[703]: time="2024-03-18 14:04:56.043603361Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e02b6a06cd9de57f8956e47d537c1911e3b686e4fe2f89b4cc0c330ec1395f50,PodSandboxId:8598ab81f3a8427b711e7a1eb9665291041e829e7396d2cead720e12bc10d1b1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710770152507618853,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a37883b5-9db5-467e-9b91-40f6ea69c18e,},Annotations:map[string]string{io.kubernetes.container.hash: 95705045,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5173e72f5aab4110f03ee50da1976455c04643d418d54191acae86add65becd3,PodSandboxId:23e47667ab7843fb87da468633568353cbb824230b0d84cb4dd962b3abb2b486,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710770150779901140,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-p6dw8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c03d9bbe-1493-44a4-be19-1e387ff6eaef,},Annotations:map[string]string{io.kubernetes.container.hash: a8b3ec08,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4097ab56fa719a27b89134e102ccb754f1f194d20353b3108c29a5582927375,PodSandboxId:6317fb6e686a84ddf5476c0c417b40126f5bf2c096ad0eb7a725f6f8aa5a68ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710770150676922684,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ft594,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
6e6863a-0b5e-434e-b13c-d33e9ed15007,},Annotations:map[string]string{io.kubernetes.container.hash: 44abca6c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:022a13544265e467f4f462adc1005aa84022e97ff5734ca437819670e1af1bda,PodSandboxId:6726d05ea7e5c674d0fb21521976183b86f315ea995da3d20a353c1939ca0b95,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt
:1710770150053954817,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lp9mc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d2d1ef6-fb3b-4910-9e70-401dfa0c47e0,},Annotations:map[string]string{io.kubernetes.container.hash: 260715c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6cc0bc6c9c31812445f19787fe9f10103cd041c632e5f3945c5fe4cb587d6e1,PodSandboxId:f065d53892d2215a73430064e91abed3fa14787a99d4bfab559b65f20111bade,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710770130629900843,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-173036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d43b88f75cc44c2f6b3982f84506c72,},Annotations:map[string]string{io.kubernetes.container.hash: 6a7d40b9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a7d442d079ffad19b9385885b4c5a7d9eedf130c90054839c5f08a863691d34,PodSandboxId:98404eeb33987d5af87d7be090feb2210fa93f68ce1621c8cf80a44bb678eccb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710770130575394174,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-173036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1fe59f7fd07c3ccedb94350e669b24c,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53c62f000548b89c9e7e7de7d9dc07271f9eef1a594a4a31410f4f8c52db6e80,PodSandboxId:4e716be2db37fa0e6e908365f559d89f328efdb578f03d419e35d47262b7f700,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710770130500607549,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-173036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58c3964d6ce26299f6adbb6721a7ed34,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9be9273c191dc4513928989b1472edb1147c687cd9e23e33206970ae0d9feb9,PodSandboxId:e080a747a34c0158245305db6f72fc50802b05b35b1f20830d7f758acecdb974,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710770130491047482,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-173036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 884409b4f61232bbd76d8c1825cec4d1,},Annotations:map[string]string{io.kubernetes.container.hash: 248f3412,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9b850bb4-4088-493b-ba1a-2a343b5a7144 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:04:56 embed-certs-173036 crio[703]: time="2024-03-18 14:04:56.089065678Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e6fcfe72-26dc-4796-a62c-deb1a6100dd0 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:04:56 embed-certs-173036 crio[703]: time="2024-03-18 14:04:56.089143335Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e6fcfe72-26dc-4796-a62c-deb1a6100dd0 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:04:56 embed-certs-173036 crio[703]: time="2024-03-18 14:04:56.090897762Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=82889f85-1222-46e4-bae5-3cc2ad52e1b9 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:04:56 embed-certs-173036 crio[703]: time="2024-03-18 14:04:56.091360345Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710770696091333791,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=82889f85-1222-46e4-bae5-3cc2ad52e1b9 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:04:56 embed-certs-173036 crio[703]: time="2024-03-18 14:04:56.092310647Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9fe1768e-d02e-4940-9c24-6f3a387c8901 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:04:56 embed-certs-173036 crio[703]: time="2024-03-18 14:04:56.092428305Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9fe1768e-d02e-4940-9c24-6f3a387c8901 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:04:56 embed-certs-173036 crio[703]: time="2024-03-18 14:04:56.092714854Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e02b6a06cd9de57f8956e47d537c1911e3b686e4fe2f89b4cc0c330ec1395f50,PodSandboxId:8598ab81f3a8427b711e7a1eb9665291041e829e7396d2cead720e12bc10d1b1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710770152507618853,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a37883b5-9db5-467e-9b91-40f6ea69c18e,},Annotations:map[string]string{io.kubernetes.container.hash: 95705045,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5173e72f5aab4110f03ee50da1976455c04643d418d54191acae86add65becd3,PodSandboxId:23e47667ab7843fb87da468633568353cbb824230b0d84cb4dd962b3abb2b486,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710770150779901140,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-p6dw8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c03d9bbe-1493-44a4-be19-1e387ff6eaef,},Annotations:map[string]string{io.kubernetes.container.hash: a8b3ec08,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4097ab56fa719a27b89134e102ccb754f1f194d20353b3108c29a5582927375,PodSandboxId:6317fb6e686a84ddf5476c0c417b40126f5bf2c096ad0eb7a725f6f8aa5a68ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710770150676922684,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ft594,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
6e6863a-0b5e-434e-b13c-d33e9ed15007,},Annotations:map[string]string{io.kubernetes.container.hash: 44abca6c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:022a13544265e467f4f462adc1005aa84022e97ff5734ca437819670e1af1bda,PodSandboxId:6726d05ea7e5c674d0fb21521976183b86f315ea995da3d20a353c1939ca0b95,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt
:1710770150053954817,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lp9mc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d2d1ef6-fb3b-4910-9e70-401dfa0c47e0,},Annotations:map[string]string{io.kubernetes.container.hash: 260715c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6cc0bc6c9c31812445f19787fe9f10103cd041c632e5f3945c5fe4cb587d6e1,PodSandboxId:f065d53892d2215a73430064e91abed3fa14787a99d4bfab559b65f20111bade,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710770130629900843,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-173036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d43b88f75cc44c2f6b3982f84506c72,},Annotations:map[string]string{io.kubernetes.container.hash: 6a7d40b9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a7d442d079ffad19b9385885b4c5a7d9eedf130c90054839c5f08a863691d34,PodSandboxId:98404eeb33987d5af87d7be090feb2210fa93f68ce1621c8cf80a44bb678eccb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710770130575394174,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-173036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1fe59f7fd07c3ccedb94350e669b24c,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53c62f000548b89c9e7e7de7d9dc07271f9eef1a594a4a31410f4f8c52db6e80,PodSandboxId:4e716be2db37fa0e6e908365f559d89f328efdb578f03d419e35d47262b7f700,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710770130500607549,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-173036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58c3964d6ce26299f6adbb6721a7ed34,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9be9273c191dc4513928989b1472edb1147c687cd9e23e33206970ae0d9feb9,PodSandboxId:e080a747a34c0158245305db6f72fc50802b05b35b1f20830d7f758acecdb974,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710770130491047482,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-173036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 884409b4f61232bbd76d8c1825cec4d1,},Annotations:map[string]string{io.kubernetes.container.hash: 248f3412,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9fe1768e-d02e-4940-9c24-6f3a387c8901 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:04:56 embed-certs-173036 crio[703]: time="2024-03-18 14:04:56.139390169Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d74c5b92-cd13-4ff7-a602-d2f5ecadf1fe name=/runtime.v1.RuntimeService/Version
	Mar 18 14:04:56 embed-certs-173036 crio[703]: time="2024-03-18 14:04:56.139463991Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d74c5b92-cd13-4ff7-a602-d2f5ecadf1fe name=/runtime.v1.RuntimeService/Version
	Mar 18 14:04:56 embed-certs-173036 crio[703]: time="2024-03-18 14:04:56.143496320Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2ec3bc56-c32e-4ae2-8ff7-7e6a4d7c52f2 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:04:56 embed-certs-173036 crio[703]: time="2024-03-18 14:04:56.144183491Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710770696144160604,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2ec3bc56-c32e-4ae2-8ff7-7e6a4d7c52f2 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:04:56 embed-certs-173036 crio[703]: time="2024-03-18 14:04:56.145024709Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8a03c186-d878-4b72-8aed-356325e2fa31 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:04:56 embed-certs-173036 crio[703]: time="2024-03-18 14:04:56.145107519Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8a03c186-d878-4b72-8aed-356325e2fa31 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:04:56 embed-certs-173036 crio[703]: time="2024-03-18 14:04:56.145281816Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e02b6a06cd9de57f8956e47d537c1911e3b686e4fe2f89b4cc0c330ec1395f50,PodSandboxId:8598ab81f3a8427b711e7a1eb9665291041e829e7396d2cead720e12bc10d1b1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710770152507618853,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a37883b5-9db5-467e-9b91-40f6ea69c18e,},Annotations:map[string]string{io.kubernetes.container.hash: 95705045,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5173e72f5aab4110f03ee50da1976455c04643d418d54191acae86add65becd3,PodSandboxId:23e47667ab7843fb87da468633568353cbb824230b0d84cb4dd962b3abb2b486,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710770150779901140,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-p6dw8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c03d9bbe-1493-44a4-be19-1e387ff6eaef,},Annotations:map[string]string{io.kubernetes.container.hash: a8b3ec08,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4097ab56fa719a27b89134e102ccb754f1f194d20353b3108c29a5582927375,PodSandboxId:6317fb6e686a84ddf5476c0c417b40126f5bf2c096ad0eb7a725f6f8aa5a68ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710770150676922684,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ft594,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
6e6863a-0b5e-434e-b13c-d33e9ed15007,},Annotations:map[string]string{io.kubernetes.container.hash: 44abca6c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:022a13544265e467f4f462adc1005aa84022e97ff5734ca437819670e1af1bda,PodSandboxId:6726d05ea7e5c674d0fb21521976183b86f315ea995da3d20a353c1939ca0b95,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt
:1710770150053954817,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lp9mc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d2d1ef6-fb3b-4910-9e70-401dfa0c47e0,},Annotations:map[string]string{io.kubernetes.container.hash: 260715c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6cc0bc6c9c31812445f19787fe9f10103cd041c632e5f3945c5fe4cb587d6e1,PodSandboxId:f065d53892d2215a73430064e91abed3fa14787a99d4bfab559b65f20111bade,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710770130629900843,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-173036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d43b88f75cc44c2f6b3982f84506c72,},Annotations:map[string]string{io.kubernetes.container.hash: 6a7d40b9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a7d442d079ffad19b9385885b4c5a7d9eedf130c90054839c5f08a863691d34,PodSandboxId:98404eeb33987d5af87d7be090feb2210fa93f68ce1621c8cf80a44bb678eccb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710770130575394174,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-173036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1fe59f7fd07c3ccedb94350e669b24c,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53c62f000548b89c9e7e7de7d9dc07271f9eef1a594a4a31410f4f8c52db6e80,PodSandboxId:4e716be2db37fa0e6e908365f559d89f328efdb578f03d419e35d47262b7f700,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710770130500607549,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-173036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58c3964d6ce26299f6adbb6721a7ed34,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9be9273c191dc4513928989b1472edb1147c687cd9e23e33206970ae0d9feb9,PodSandboxId:e080a747a34c0158245305db6f72fc50802b05b35b1f20830d7f758acecdb974,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710770130491047482,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-173036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 884409b4f61232bbd76d8c1825cec4d1,},Annotations:map[string]string{io.kubernetes.container.hash: 248f3412,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8a03c186-d878-4b72-8aed-356325e2fa31 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:04:56 embed-certs-173036 crio[703]: time="2024-03-18 14:04:56.181286375Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=05b8a18e-7567-4b30-a207-f161b2c1f75b name=/runtime.v1.RuntimeService/Version
	Mar 18 14:04:56 embed-certs-173036 crio[703]: time="2024-03-18 14:04:56.181360232Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=05b8a18e-7567-4b30-a207-f161b2c1f75b name=/runtime.v1.RuntimeService/Version
	Mar 18 14:04:56 embed-certs-173036 crio[703]: time="2024-03-18 14:04:56.182967893Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e9c534ea-209d-4e9d-ba05-15c2e8c565ca name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:04:56 embed-certs-173036 crio[703]: time="2024-03-18 14:04:56.183351726Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710770696183331228,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e9c534ea-209d-4e9d-ba05-15c2e8c565ca name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:04:56 embed-certs-173036 crio[703]: time="2024-03-18 14:04:56.184023881Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=17edccfd-682c-4056-9f22-a4990400d2eb name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:04:56 embed-certs-173036 crio[703]: time="2024-03-18 14:04:56.184094072Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=17edccfd-682c-4056-9f22-a4990400d2eb name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:04:56 embed-certs-173036 crio[703]: time="2024-03-18 14:04:56.184275112Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e02b6a06cd9de57f8956e47d537c1911e3b686e4fe2f89b4cc0c330ec1395f50,PodSandboxId:8598ab81f3a8427b711e7a1eb9665291041e829e7396d2cead720e12bc10d1b1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710770152507618853,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a37883b5-9db5-467e-9b91-40f6ea69c18e,},Annotations:map[string]string{io.kubernetes.container.hash: 95705045,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5173e72f5aab4110f03ee50da1976455c04643d418d54191acae86add65becd3,PodSandboxId:23e47667ab7843fb87da468633568353cbb824230b0d84cb4dd962b3abb2b486,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710770150779901140,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-p6dw8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c03d9bbe-1493-44a4-be19-1e387ff6eaef,},Annotations:map[string]string{io.kubernetes.container.hash: a8b3ec08,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4097ab56fa719a27b89134e102ccb754f1f194d20353b3108c29a5582927375,PodSandboxId:6317fb6e686a84ddf5476c0c417b40126f5bf2c096ad0eb7a725f6f8aa5a68ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710770150676922684,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ft594,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
6e6863a-0b5e-434e-b13c-d33e9ed15007,},Annotations:map[string]string{io.kubernetes.container.hash: 44abca6c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:022a13544265e467f4f462adc1005aa84022e97ff5734ca437819670e1af1bda,PodSandboxId:6726d05ea7e5c674d0fb21521976183b86f315ea995da3d20a353c1939ca0b95,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt
:1710770150053954817,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lp9mc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d2d1ef6-fb3b-4910-9e70-401dfa0c47e0,},Annotations:map[string]string{io.kubernetes.container.hash: 260715c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6cc0bc6c9c31812445f19787fe9f10103cd041c632e5f3945c5fe4cb587d6e1,PodSandboxId:f065d53892d2215a73430064e91abed3fa14787a99d4bfab559b65f20111bade,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710770130629900843,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-173036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d43b88f75cc44c2f6b3982f84506c72,},Annotations:map[string]string{io.kubernetes.container.hash: 6a7d40b9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a7d442d079ffad19b9385885b4c5a7d9eedf130c90054839c5f08a863691d34,PodSandboxId:98404eeb33987d5af87d7be090feb2210fa93f68ce1621c8cf80a44bb678eccb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710770130575394174,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-173036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1fe59f7fd07c3ccedb94350e669b24c,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53c62f000548b89c9e7e7de7d9dc07271f9eef1a594a4a31410f4f8c52db6e80,PodSandboxId:4e716be2db37fa0e6e908365f559d89f328efdb578f03d419e35d47262b7f700,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710770130500607549,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-173036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58c3964d6ce26299f6adbb6721a7ed34,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9be9273c191dc4513928989b1472edb1147c687cd9e23e33206970ae0d9feb9,PodSandboxId:e080a747a34c0158245305db6f72fc50802b05b35b1f20830d7f758acecdb974,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710770130491047482,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-173036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 884409b4f61232bbd76d8c1825cec4d1,},Annotations:map[string]string{io.kubernetes.container.hash: 248f3412,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=17edccfd-682c-4056-9f22-a4990400d2eb name=/runtime.v1.RuntimeService/ListContainers
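
	The ListContainers, Version and ImageFsInfo request/response pairs repeated above are routine CRI polling of cri-o over its unix socket. As a rough sketch (not part of the test harness; the socket path and the v1 CRI API are assumptions), the same ListContainers call can be issued from a minimal Go client:

	    package main

	    import (
	        "context"
	        "fmt"
	        "time"

	        "google.golang.org/grpc"
	        "google.golang.org/grpc/credentials/insecure"
	        runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
	    )

	    func main() {
	        // Assumed socket path; it matches the kubeadm cri-socket annotation shown below.
	        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
	            grpc.WithTransportCredentials(insecure.NewCredentials()))
	        if err != nil {
	            panic(err)
	        }
	        defer conn.Close()

	        client := runtimev1.NewRuntimeServiceClient(conn)
	        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	        defer cancel()

	        // Empty filter: the same "No filters were applied" case seen in the cri-o debug log.
	        resp, err := client.ListContainers(ctx, &runtimev1.ListContainersRequest{})
	        if err != nil {
	            panic(err)
	        }
	        for _, c := range resp.Containers {
	            fmt.Println(c.Id[:13], c.Metadata.Name, c.State)
	        }
	    }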
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e02b6a06cd9de       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   8598ab81f3a84       storage-provisioner
	5173e72f5aab4       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   23e47667ab784       coredns-5dd5756b68-p6dw8
	e4097ab56fa71       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   6317fb6e686a8       coredns-5dd5756b68-ft594
	022a13544265e       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   9 minutes ago       Running             kube-proxy                0                   6726d05ea7e5c       kube-proxy-lp9mc
	b6cc0bc6c9c31       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   9 minutes ago       Running             etcd                      2                   f065d53892d22       etcd-embed-certs-173036
	6a7d442d079ff       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   9 minutes ago       Running             kube-scheduler            2                   98404eeb33987       kube-scheduler-embed-certs-173036
	53c62f000548b       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   9 minutes ago       Running             kube-controller-manager   2                   4e716be2db37f       kube-controller-manager-embed-certs-173036
	f9be9273c191d       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   9 minutes ago       Running             kube-apiserver            2                   e080a747a34c0       kube-apiserver-embed-certs-173036
	
	
	==> coredns [5173e72f5aab4110f03ee50da1976455c04643d418d54191acae86add65becd3] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> coredns [e4097ab56fa719a27b89134e102ccb754f1f194d20353b3108c29a5582927375] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
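
	Both coredns replicas report the same configuration SHA512, which coredns computes over the Corefile it loaded from the kube-system/coredns ConfigMap. A minimal client-go sketch for pulling that Corefile, assuming the default kubeconfig location:

	    package main

	    import (
	        "context"
	        "fmt"

	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        // Assumed kubeconfig location (~/.kube/config).
	        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	        if err != nil {
	            panic(err)
	        }
	        cs, err := kubernetes.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(
	            context.Background(), "coredns", metav1.GetOptions{})
	        if err != nil {
	            panic(err)
	        }
	        // The SHA512 logged by coredns is computed over this Corefile content.
	        fmt.Println(cm.Data["Corefile"])
	    }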
	
	
	==> describe nodes <==
	Name:               embed-certs-173036
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-173036
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	                    minikube.k8s.io/name=embed-certs-173036
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T13_55_37_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 13:55:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-173036
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 14:04:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 14:01:04 +0000   Mon, 18 Mar 2024 13:55:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 14:01:04 +0000   Mon, 18 Mar 2024 13:55:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 14:01:04 +0000   Mon, 18 Mar 2024 13:55:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 14:01:04 +0000   Mon, 18 Mar 2024 13:55:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.191
	  Hostname:    embed-certs-173036
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 ab4d497731f64ae3afe9166b2b2e858b
	  System UUID:                ab4d4977-31f6-4ae3-afe9-166b2b2e858b
	  Boot ID:                    9b16ec79-d866-4ab9-9745-eea184e72bf3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-ft594                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 coredns-5dd5756b68-p6dw8                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 etcd-embed-certs-173036                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m20s
	  kube-system                 kube-apiserver-embed-certs-173036             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-controller-manager-embed-certs-173036    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-proxy-lp9mc                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 kube-scheduler-embed-certs-173036             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 metrics-server-57f55c9bc5-vzv79               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m4s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m4s   kube-proxy       
	  Normal  Starting                 9m20s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m19s  kubelet          Node embed-certs-173036 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m19s  kubelet          Node embed-certs-173036 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m19s  kubelet          Node embed-certs-173036 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m19s  kubelet          Node embed-certs-173036 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m19s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m9s   kubelet          Node embed-certs-173036 status is now: NodeReady
	  Normal  RegisteredNode           9m8s   node-controller  Node embed-certs-173036 event: Registered Node embed-certs-173036 in Controller
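
	The Allocated resources figures above are simply the pod requests summed over the non-terminated pods and divided by the node's allocatable capacity. A quick arithmetic check of the cpu and memory rows, with the values copied from the tables above:

	    package main

	    import "fmt"

	    func main() {
	        // Allocatable capacity from the node description above.
	        allocatableCPUMilli := 2 * 1000 // 2 CPUs
	        allocatableMemKi := 2164188     // 2164188Ki

	        // Summed requests: 2x coredns (100m/70Mi), etcd (100m/100Mi), apiserver (250m),
	        // controller-manager (200m), scheduler (100m), metrics-server (100m/200Mi).
	        requestedCPUMilli := 100 + 100 + 100 + 250 + 200 + 100 + 100 // 950m
	        requestedMemKi := (70 + 70 + 100 + 200) * 1024               // 440Mi in Ki

	        fmt.Printf("cpu: %dm (%d%%)\n", requestedCPUMilli, requestedCPUMilli*100/allocatableCPUMilli)
	        fmt.Printf("memory: %dMi (%d%%)\n", requestedMemKi/1024, requestedMemKi*100/allocatableMemKi)
	        // Prints cpu: 950m (47%) and memory: 440Mi (20%), matching the node description.
	    }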
	
	
	==> dmesg <==
	[  +0.059560] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.050095] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.869450] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.643233] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.762153] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.316194] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.056970] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066687] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.186881] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.160086] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +0.303130] systemd-fstab-generator[690]: Ignoring "noauto" option for root device
	[  +5.700271] systemd-fstab-generator[786]: Ignoring "noauto" option for root device
	[  +0.063914] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.149728] systemd-fstab-generator[909]: Ignoring "noauto" option for root device
	[  +5.642199] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.612247] kauditd_printk_skb: 72 callbacks suppressed
	[Mar18 13:55] kauditd_printk_skb: 3 callbacks suppressed
	[  +1.945332] systemd-fstab-generator[3416]: Ignoring "noauto" option for root device
	[  +7.793308] systemd-fstab-generator[3740]: Ignoring "noauto" option for root device
	[  +0.080373] kauditd_printk_skb: 57 callbacks suppressed
	[ +12.897004] systemd-fstab-generator[3939]: Ignoring "noauto" option for root device
	[  +0.110833] kauditd_printk_skb: 12 callbacks suppressed
	[Mar18 13:56] kauditd_printk_skb: 80 callbacks suppressed
	
	
	==> etcd [b6cc0bc6c9c31812445f19787fe9f10103cd041c632e5f3945c5fe4cb587d6e1] <==
	{"level":"info","ts":"2024-03-18T13:55:31.023579Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.191:2380"}
	{"level":"info","ts":"2024-03-18T13:55:31.027809Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.191:2380"}
	{"level":"info","ts":"2024-03-18T13:55:31.027793Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cf6b350098c86d90 switched to configuration voters=(14946098065038667152)"}
	{"level":"info","ts":"2024-03-18T13:55:31.027983Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"62aab75006ec8f54","local-member-id":"cf6b350098c86d90","added-peer-id":"cf6b350098c86d90","added-peer-peer-urls":["https://192.168.50.191:2380"]}
	{"level":"info","ts":"2024-03-18T13:55:31.025598Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-18T13:55:31.030574Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-18T13:55:31.030688Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-18T13:55:31.283597Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cf6b350098c86d90 is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-18T13:55:31.28369Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cf6b350098c86d90 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-18T13:55:31.283735Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cf6b350098c86d90 received MsgPreVoteResp from cf6b350098c86d90 at term 1"}
	{"level":"info","ts":"2024-03-18T13:55:31.283764Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cf6b350098c86d90 became candidate at term 2"}
	{"level":"info","ts":"2024-03-18T13:55:31.283788Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cf6b350098c86d90 received MsgVoteResp from cf6b350098c86d90 at term 2"}
	{"level":"info","ts":"2024-03-18T13:55:31.283815Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"cf6b350098c86d90 became leader at term 2"}
	{"level":"info","ts":"2024-03-18T13:55:31.28384Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: cf6b350098c86d90 elected leader cf6b350098c86d90 at term 2"}
	{"level":"info","ts":"2024-03-18T13:55:31.287744Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T13:55:31.291752Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"cf6b350098c86d90","local-member-attributes":"{Name:embed-certs-173036 ClientURLs:[https://192.168.50.191:2379]}","request-path":"/0/members/cf6b350098c86d90/attributes","cluster-id":"62aab75006ec8f54","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-18T13:55:31.291677Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"62aab75006ec8f54","local-member-id":"cf6b350098c86d90","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T13:55:31.291959Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T13:55:31.291996Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T13:55:31.292024Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T13:55:31.299105Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.191:2379"}
	{"level":"info","ts":"2024-03-18T13:55:31.303654Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T13:55:31.309468Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-18T13:55:31.304649Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-18T13:55:31.34258Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 14:04:56 up 14 min,  0 users,  load average: 0.02, 0.12, 0.11
	Linux embed-certs-173036 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f9be9273c191dc4513928989b1472edb1147c687cd9e23e33206970ae0d9feb9] <==
	W0318 14:00:34.733904       1 handler_proxy.go:93] no RequestInfo found in the context
	W0318 14:00:34.734054       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:00:34.734123       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0318 14:00:34.734140       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0318 14:00:34.734193       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 14:00:34.735429       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0318 14:01:33.604970       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0318 14:01:34.735125       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:01:34.735279       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0318 14:01:34.735310       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 14:01:34.736361       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:01:34.736482       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 14:01:34.736497       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0318 14:02:33.605351       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0318 14:03:33.605056       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0318 14:03:34.735712       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:03:34.735892       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0318 14:03:34.736017       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 14:03:34.736760       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:03:34.736921       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 14:03:34.737087       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0318 14:04:33.605030       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
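
	The repeated 503s above mean the aggregated v1beta1.metrics.k8s.io APIService has no healthy backend (metrics-server), so the apiserver cannot download its OpenAPI spec. A hedged sketch for inspecting that APIService's conditions with the client-go dynamic client, assuming the default kubeconfig location:

	    package main

	    import (
	        "context"
	        "fmt"

	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	        "k8s.io/apimachinery/pkg/runtime/schema"
	        "k8s.io/client-go/dynamic"
	        "k8s.io/client-go/tools/clientcmd"
	    )

	    func main() {
	        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	        if err != nil {
	            panic(err)
	        }
	        dyn, err := dynamic.NewForConfig(cfg)
	        if err != nil {
	            panic(err)
	        }
	        gvr := schema.GroupVersionResource{
	            Group: "apiregistration.k8s.io", Version: "v1", Resource: "apiservices",
	        }
	        obj, err := dyn.Resource(gvr).Get(
	            context.Background(), "v1beta1.metrics.k8s.io", metav1.GetOptions{})
	        if err != nil {
	            panic(err)
	        }
	        // While the apiserver logs the 503s above, the Available condition stays False.
	        conds, _, _ := unstructured.NestedSlice(obj.Object, "status", "conditions")
	        for _, c := range conds {
	            m := c.(map[string]interface{})
	            fmt.Printf("%v=%v (%v)\n", m["type"], m["status"], m["message"])
	        }
	    }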
	
	
	==> kube-controller-manager [53c62f000548b89c9e7e7de7d9dc07271f9eef1a594a4a31410f4f8c52db6e80] <==
	I0318 13:59:20.043270       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="139.644µs"
	E0318 13:59:48.731806       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 13:59:49.206265       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:00:18.738791       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:00:19.215268       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:00:48.745823       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:00:49.223863       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:01:18.751623       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:01:19.232335       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:01:48.757369       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:01:49.241218       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0318 14:01:52.041920       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="433.227µs"
	I0318 14:02:05.044729       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="283.061µs"
	E0318 14:02:18.763683       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:02:19.249471       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:02:48.769730       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:02:49.258425       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:03:18.775605       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:03:19.268811       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:03:48.782998       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:03:49.278137       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:04:18.789358       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:04:19.286750       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:04:48.795825       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:04:49.295611       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [022a13544265e467f4f462adc1005aa84022e97ff5734ca437819670e1af1bda] <==
	I0318 13:55:51.064135       1 server_others.go:69] "Using iptables proxy"
	I0318 13:55:51.424719       1 node.go:141] Successfully retrieved node IP: 192.168.50.191
	I0318 13:55:51.606939       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 13:55:51.606994       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 13:55:51.610882       1 server_others.go:152] "Using iptables Proxier"
	I0318 13:55:51.612260       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 13:55:51.612474       1 server.go:846] "Version info" version="v1.28.4"
	I0318 13:55:51.612610       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:55:51.616069       1 config.go:315] "Starting node config controller"
	I0318 13:55:51.617201       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 13:55:51.623684       1 config.go:188] "Starting service config controller"
	I0318 13:55:51.623694       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 13:55:51.623710       1 config.go:97] "Starting endpoint slice config controller"
	I0318 13:55:51.623713       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 13:55:51.717744       1 shared_informer.go:318] Caches are synced for node config
	I0318 13:55:51.723828       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 13:55:51.723863       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [6a7d442d079ffad19b9385885b4c5a7d9eedf130c90054839c5f08a863691d34] <==
	W0318 13:55:33.755810       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0318 13:55:33.755818       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0318 13:55:34.578006       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0318 13:55:34.578151       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0318 13:55:34.670886       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0318 13:55:34.670938       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0318 13:55:34.711898       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0318 13:55:34.711955       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0318 13:55:34.735975       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0318 13:55:34.736104       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0318 13:55:34.822295       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0318 13:55:34.822384       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0318 13:55:34.902487       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0318 13:55:34.902626       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0318 13:55:34.910155       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0318 13:55:34.910300       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0318 13:55:34.962110       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0318 13:55:34.962241       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0318 13:55:35.007697       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0318 13:55:35.007811       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0318 13:55:35.038237       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0318 13:55:35.038617       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0318 13:55:35.259434       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0318 13:55:35.259639       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 13:55:38.144473       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 18 14:02:37 embed-certs-173036 kubelet[3747]: E0318 14:02:37.112687    3747 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 14:02:37 embed-certs-173036 kubelet[3747]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 14:02:37 embed-certs-173036 kubelet[3747]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 14:02:37 embed-certs-173036 kubelet[3747]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 14:02:37 embed-certs-173036 kubelet[3747]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 14:02:48 embed-certs-173036 kubelet[3747]: E0318 14:02:48.023928    3747 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vzv79" podUID="1fc71314-b3e7-4113-b254-557ec39eef43"
	Mar 18 14:03:02 embed-certs-173036 kubelet[3747]: E0318 14:03:02.024068    3747 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vzv79" podUID="1fc71314-b3e7-4113-b254-557ec39eef43"
	Mar 18 14:03:16 embed-certs-173036 kubelet[3747]: E0318 14:03:16.024076    3747 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vzv79" podUID="1fc71314-b3e7-4113-b254-557ec39eef43"
	Mar 18 14:03:31 embed-certs-173036 kubelet[3747]: E0318 14:03:31.024970    3747 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vzv79" podUID="1fc71314-b3e7-4113-b254-557ec39eef43"
	Mar 18 14:03:37 embed-certs-173036 kubelet[3747]: E0318 14:03:37.109973    3747 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 14:03:37 embed-certs-173036 kubelet[3747]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 14:03:37 embed-certs-173036 kubelet[3747]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 14:03:37 embed-certs-173036 kubelet[3747]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 14:03:37 embed-certs-173036 kubelet[3747]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 14:03:46 embed-certs-173036 kubelet[3747]: E0318 14:03:46.024405    3747 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vzv79" podUID="1fc71314-b3e7-4113-b254-557ec39eef43"
	Mar 18 14:04:01 embed-certs-173036 kubelet[3747]: E0318 14:04:01.024935    3747 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vzv79" podUID="1fc71314-b3e7-4113-b254-557ec39eef43"
	Mar 18 14:04:15 embed-certs-173036 kubelet[3747]: E0318 14:04:15.024964    3747 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vzv79" podUID="1fc71314-b3e7-4113-b254-557ec39eef43"
	Mar 18 14:04:26 embed-certs-173036 kubelet[3747]: E0318 14:04:26.024178    3747 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vzv79" podUID="1fc71314-b3e7-4113-b254-557ec39eef43"
	Mar 18 14:04:37 embed-certs-173036 kubelet[3747]: E0318 14:04:37.111193    3747 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 14:04:37 embed-certs-173036 kubelet[3747]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 14:04:37 embed-certs-173036 kubelet[3747]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 14:04:37 embed-certs-173036 kubelet[3747]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 14:04:37 embed-certs-173036 kubelet[3747]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 14:04:39 embed-certs-173036 kubelet[3747]: E0318 14:04:39.023925    3747 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vzv79" podUID="1fc71314-b3e7-4113-b254-557ec39eef43"
	Mar 18 14:04:50 embed-certs-173036 kubelet[3747]: E0318 14:04:50.023977    3747 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vzv79" podUID="1fc71314-b3e7-4113-b254-557ec39eef43"
	
	
	==> storage-provisioner [e02b6a06cd9de57f8956e47d537c1911e3b686e4fe2f89b4cc0c330ec1395f50] <==
	I0318 13:55:52.654203       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0318 13:55:52.667874       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0318 13:55:52.668004       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0318 13:55:52.679133       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0318 13:55:52.679787       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"91f8549e-c7c4-4c23-8b09-71ff2f50ff8e", APIVersion:"v1", ResourceVersion:"416", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-173036_4b411dd9-13be-42e9-9d2c-c400698f3785 became leader
	I0318 13:55:52.679906       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-173036_4b411dd9-13be-42e9-9d2c-c400698f3785!
	I0318 13:55:52.780705       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-173036_4b411dd9-13be-42e9-9d2c-c400698f3785!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-173036 -n embed-certs-173036
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-173036 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-vzv79
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-173036 describe pod metrics-server-57f55c9bc5-vzv79
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-173036 describe pod metrics-server-57f55c9bc5-vzv79: exit status 1 (68.790807ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-vzv79" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-173036 describe pod metrics-server-57f55c9bc5-vzv79: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.31s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.46s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
E0318 13:59:30.296843 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
... [the WARNING above repeated 114 more times while the apiserver at 192.168.72.135:8443 kept refusing connections]
E0318 14:01:24.904769 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/functional-377562/client.crt: no such file or directory
... [the same WARNING repeated 65 more times: dial tcp 192.168.72.135:8443: connect: connection refused]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
E0318 14:04:30.297009 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
E0318 14:06:24.904825 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/functional-377562/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
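The repeated warnings above show the helper's pod-list poll being refused at 192.168.72.135:8443 until the client-side rate limiter finally gives up. A quick way to reproduce the same symptom by hand (hypothetical manual checks, not part of the recorded run) is to probe the apiserver endpoint and re-run the same labelled pod list:

    # probe the apiserver port directly; "connection refused" here matches the warnings above
    curl -k --connect-timeout 5 https://192.168.72.135:8443/version

    # the same pod list the helper keeps retrying, issued via kubectl
    kubectl --context old-k8s-version-909137 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard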
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-909137 -n old-k8s-version-909137
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-909137 -n old-k8s-version-909137: exit status 2 (263.639157ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-909137" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
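For reference, the readiness condition the test waits on can be expressed as a single kubectl wait against the same selector and timeout (a rough sketch only; the suite drives this through its own Go helpers rather than this exact command):

    kubectl --context old-k8s-version-909137 -n kubernetes-dashboard \
      wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m0s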
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-909137 -n old-k8s-version-909137
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-909137 -n old-k8s-version-909137: exit status 2 (255.600631ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-909137 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-909137 logs -n 25: (1.620517961s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-909137                              | old-k8s-version-909137       | jenkins | v1.32.0 | 18 Mar 24 13:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-599578                           | kubernetes-upgrade-599578    | jenkins | v1.32.0 | 18 Mar 24 13:39 UTC | 18 Mar 24 13:39 UTC |
	| start   | -p no-preload-537236                                   | no-preload-537236            | jenkins | v1.32.0 | 18 Mar 24 13:39 UTC | 18 Mar 24 13:41 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p cert-expiration-537883                              | cert-expiration-537883       | jenkins | v1.32.0 | 18 Mar 24 13:40 UTC | 18 Mar 24 13:41 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p pause-760389                                        | pause-760389                 | jenkins | v1.32.0 | 18 Mar 24 13:40 UTC | 18 Mar 24 13:40 UTC |
	| start   | -p embed-certs-173036                                  | embed-certs-173036           | jenkins | v1.32.0 | 18 Mar 24 13:40 UTC | 18 Mar 24 13:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-537883                              | cert-expiration-537883       | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	| delete  | -p                                                     | disable-driver-mounts-173866 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	|         | disable-driver-mounts-173866                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-569210 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:42 UTC |
	|         | default-k8s-diff-port-569210                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-173036            | embed-certs-173036           | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-173036                                  | embed-certs-173036           | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-537236             | no-preload-537236            | jenkins | v1.32.0 | 18 Mar 24 13:42 UTC | 18 Mar 24 13:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-537236                                   | no-preload-537236            | jenkins | v1.32.0 | 18 Mar 24 13:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-569210  | default-k8s-diff-port-569210 | jenkins | v1.32.0 | 18 Mar 24 13:43 UTC | 18 Mar 24 13:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-569210 | jenkins | v1.32.0 | 18 Mar 24 13:43 UTC |                     |
	|         | default-k8s-diff-port-569210                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-909137        | old-k8s-version-909137       | jenkins | v1.32.0 | 18 Mar 24 13:43 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-173036                 | embed-certs-173036           | jenkins | v1.32.0 | 18 Mar 24 13:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-173036                                  | embed-certs-173036           | jenkins | v1.32.0 | 18 Mar 24 13:44 UTC | 18 Mar 24 13:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-537236                  | no-preload-537236            | jenkins | v1.32.0 | 18 Mar 24 13:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-537236                                   | no-preload-537236            | jenkins | v1.32.0 | 18 Mar 24 13:44 UTC | 18 Mar 24 13:55 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-909137                              | old-k8s-version-909137       | jenkins | v1.32.0 | 18 Mar 24 13:45 UTC | 18 Mar 24 13:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-909137             | old-k8s-version-909137       | jenkins | v1.32.0 | 18 Mar 24 13:45 UTC | 18 Mar 24 13:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-909137                              | old-k8s-version-909137       | jenkins | v1.32.0 | 18 Mar 24 13:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-569210       | default-k8s-diff-port-569210 | jenkins | v1.32.0 | 18 Mar 24 13:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-569210 | jenkins | v1.32.0 | 18 Mar 24 13:45 UTC | 18 Mar 24 13:55 UTC |
	|         | default-k8s-diff-port-569210                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 13:45:41
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
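	The entries that follow all use the klog-style prefix described on the line above ([IWEF] severity, month/day, time, a numeric process id, and the source file:line of the call). As a reading aid, here is a minimal, self-contained Go sketch (not part of minikube; the regular expression and field names are illustrative assumptions) that splits such a prefix into its parts:
	// klogprefix.go: illustrative parser for the [IWEF]mmdd hh:mm:ss.uuuuuu pid file:line] msg prefix.
	package main
	
	import (
		"fmt"
		"regexp"
	)
	
	// klogRe captures: severity, month+day, time, numeric id, file:line, and the message.
	var klogRe = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:]+:\d+)\] (.*)$`)
	
	func main() {
		line := "I0318 13:45:41.667747 1157887 out.go:291] Setting OutFile to fd 1 ..."
		if m := klogRe.FindStringSubmatch(line); m != nil {
			fmt.Printf("severity=%s date=%s time=%s id=%s source=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6])
		}
	}
	Applied to the first entry below, this yields severity I, date 0318, time 13:45:41.667747, id 1157887, source out.go:291, and the remaining text as the message.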
	I0318 13:45:41.667747 1157887 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:45:41.667937 1157887 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:45:41.667952 1157887 out.go:304] Setting ErrFile to fd 2...
	I0318 13:45:41.667958 1157887 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:45:41.668616 1157887 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
	I0318 13:45:41.669251 1157887 out.go:298] Setting JSON to false
	I0318 13:45:41.670283 1157887 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":19689,"bootTime":1710749853,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 13:45:41.670349 1157887 start.go:139] virtualization: kvm guest
	I0318 13:45:41.672702 1157887 out.go:177] * [default-k8s-diff-port-569210] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 13:45:41.674325 1157887 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 13:45:41.674336 1157887 notify.go:220] Checking for updates...
	I0318 13:45:41.675874 1157887 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:45:41.677543 1157887 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 13:45:41.679053 1157887 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18429-1106816/.minikube
	I0318 13:45:41.680344 1157887 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 13:45:41.681702 1157887 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:45:41.683304 1157887 config.go:182] Loaded profile config "default-k8s-diff-port-569210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:45:41.683743 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:45:41.683792 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:45:41.698719 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44147
	I0318 13:45:41.699154 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:45:41.699657 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:45:41.699676 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:45:41.699995 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:45:41.700168 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:45:41.700488 1157887 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:45:41.700763 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:45:41.700803 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:45:41.715824 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44459
	I0318 13:45:41.716270 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:45:41.716688 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:45:41.716708 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:45:41.717004 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:45:41.717185 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:45:41.747564 1157887 out.go:177] * Using the kvm2 driver based on existing profile
	I0318 13:45:41.748930 1157887 start.go:297] selected driver: kvm2
	I0318 13:45:41.748944 1157887 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-569210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-569210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.3 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:45:41.749059 1157887 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:45:41.749725 1157887 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:45:41.749819 1157887 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18429-1106816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 13:45:41.764225 1157887 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 13:45:41.764607 1157887 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:45:41.764679 1157887 cni.go:84] Creating CNI manager for ""
	I0318 13:45:41.764692 1157887 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:45:41.764727 1157887 start.go:340] cluster config:
	{Name:default-k8s-diff-port-569210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-569210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.3 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:45:41.764824 1157887 iso.go:125] acquiring lock: {Name:mke5f9989ad60de6f54f25c411af7da9f3932a4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:45:41.766561 1157887 out.go:177] * Starting "default-k8s-diff-port-569210" primary control-plane node in "default-k8s-diff-port-569210" cluster
	I0318 13:45:40.044635 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:45:41.767747 1157887 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 13:45:41.767779 1157887 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0318 13:45:41.767799 1157887 cache.go:56] Caching tarball of preloaded images
	I0318 13:45:41.767876 1157887 preload.go:173] Found /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 13:45:41.767887 1157887 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0318 13:45:41.767986 1157887 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/config.json ...
	I0318 13:45:41.768151 1157887 start.go:360] acquireMachinesLock for default-k8s-diff-port-569210: {Name:mk0b1a2e71faf079d0c16c4e1393bdff17be3dfd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:45:46.124607 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:45:49.196561 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:45:55.276657 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:45:58.348606 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:04.428632 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:07.500592 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:13.584558 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:16.652578 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:22.732573 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:25.804745 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:31.884579 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:34.956708 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:41.036614 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:44.108576 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:50.188610 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:53.260646 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:59.340724 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:02.412698 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:08.492603 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:11.564634 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:17.644618 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:20.716642 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:26.796585 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:29.868690 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:35.948613 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:39.020607 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:45.104563 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:48.172547 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:54.252608 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:57.324659 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:03.404600 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:06.476647 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:12.556609 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:15.628640 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:21.708597 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:24.780572 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:30.860662 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:33.932528 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:40.012616 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:43.084569 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:49.164622 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:52.236652 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:58.316619 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:49:01.321139 1157416 start.go:364] duration metric: took 4m21.279664055s to acquireMachinesLock for "no-preload-537236"
	I0318 13:49:01.321252 1157416 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:49:01.321260 1157416 fix.go:54] fixHost starting: 
	I0318 13:49:01.321627 1157416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:49:01.321658 1157416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:49:01.337337 1157416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39431
	I0318 13:49:01.337793 1157416 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:49:01.338235 1157416 main.go:141] libmachine: Using API Version  1
	I0318 13:49:01.338262 1157416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:49:01.338703 1157416 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:49:01.338892 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:49:01.339025 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetState
	I0318 13:49:01.340630 1157416 fix.go:112] recreateIfNeeded on no-preload-537236: state=Stopped err=<nil>
	I0318 13:49:01.340653 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	W0318 13:49:01.340785 1157416 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:49:01.342565 1157416 out.go:177] * Restarting existing kvm2 VM for "no-preload-537236" ...
	I0318 13:49:01.318340 1157263 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:49:01.318378 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetMachineName
	I0318 13:49:01.318795 1157263 buildroot.go:166] provisioning hostname "embed-certs-173036"
	I0318 13:49:01.318829 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetMachineName
	I0318 13:49:01.319041 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:49:01.321007 1157263 machine.go:97] duration metric: took 4m37.382603693s to provisionDockerMachine
	I0318 13:49:01.321051 1157263 fix.go:56] duration metric: took 4m37.403420427s for fixHost
	I0318 13:49:01.321064 1157263 start.go:83] releasing machines lock for "embed-certs-173036", held for 4m37.403446357s
	W0318 13:49:01.321088 1157263 start.go:713] error starting host: provision: host is not running
	W0318 13:49:01.321225 1157263 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0318 13:49:01.321242 1157263 start.go:728] Will try again in 5 seconds ...
	I0318 13:49:01.343844 1157416 main.go:141] libmachine: (no-preload-537236) Calling .Start
	I0318 13:49:01.344003 1157416 main.go:141] libmachine: (no-preload-537236) Ensuring networks are active...
	I0318 13:49:01.344698 1157416 main.go:141] libmachine: (no-preload-537236) Ensuring network default is active
	I0318 13:49:01.345062 1157416 main.go:141] libmachine: (no-preload-537236) Ensuring network mk-no-preload-537236 is active
	I0318 13:49:01.345378 1157416 main.go:141] libmachine: (no-preload-537236) Getting domain xml...
	I0318 13:49:01.346073 1157416 main.go:141] libmachine: (no-preload-537236) Creating domain...
	I0318 13:49:02.522163 1157416 main.go:141] libmachine: (no-preload-537236) Waiting to get IP...
	I0318 13:49:02.522935 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:02.523347 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:02.523420 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:02.523327 1158392 retry.go:31] will retry after 276.248352ms: waiting for machine to come up
	I0318 13:49:02.800962 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:02.801439 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:02.801472 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:02.801381 1158392 retry.go:31] will retry after 318.94167ms: waiting for machine to come up
	I0318 13:49:03.121895 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:03.122276 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:03.122298 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:03.122254 1158392 retry.go:31] will retry after 353.742872ms: waiting for machine to come up
	I0318 13:49:03.477885 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:03.478401 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:03.478439 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:03.478360 1158392 retry.go:31] will retry after 481.537084ms: waiting for machine to come up
	I0318 13:49:03.960991 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:03.961432 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:03.961505 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:03.961416 1158392 retry.go:31] will retry after 647.244695ms: waiting for machine to come up
	I0318 13:49:04.610150 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:04.610563 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:04.610604 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:04.610512 1158392 retry.go:31] will retry after 577.22264ms: waiting for machine to come up
	I0318 13:49:06.321404 1157263 start.go:360] acquireMachinesLock for embed-certs-173036: {Name:mk0b1a2e71faf079d0c16c4e1393bdff17be3dfd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:49:05.189300 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:05.189688 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:05.189722 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:05.189635 1158392 retry.go:31] will retry after 1.064347528s: waiting for machine to come up
	I0318 13:49:06.255734 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:06.256071 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:06.256103 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:06.256016 1158392 retry.go:31] will retry after 1.359025709s: waiting for machine to come up
	I0318 13:49:07.616847 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:07.617313 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:07.617338 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:07.617265 1158392 retry.go:31] will retry after 1.844112s: waiting for machine to come up
	I0318 13:49:09.464239 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:09.464761 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:09.464788 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:09.464703 1158392 retry.go:31] will retry after 1.984375986s: waiting for machine to come up
	I0318 13:49:11.450609 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:11.451100 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:11.451153 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:11.451037 1158392 retry.go:31] will retry after 1.944733714s: waiting for machine to come up
	I0318 13:49:13.397815 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:13.398238 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:13.398265 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:13.398190 1158392 retry.go:31] will retry after 2.44494826s: waiting for machine to come up
	I0318 13:49:15.845711 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:15.846169 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:15.846212 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:15.846128 1158392 retry.go:31] will retry after 2.760857339s: waiting for machine to come up
	I0318 13:49:18.609516 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:18.609917 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:18.609942 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:18.609872 1158392 retry.go:31] will retry after 3.501792324s: waiting for machine to come up
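	The repeated "will retry after ...: waiting for machine to come up" entries above come from a polling loop that probes the restarted VM for an IP address and sleeps for growing, jittered intervals between probes. A minimal, self-contained Go sketch of that general pattern (illustrative only; minikube's own retry helper may use different constants and jitter):
	// backoff.go: illustrative jittered-backoff polling loop, in the spirit of the
	// "will retry after ..." log lines above (not minikube's actual retry package).
	package main
	
	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)
	
	func waitFor(check func() error, maxAttempts int) error {
		delay := 250 * time.Millisecond
		for attempt := 1; attempt <= maxAttempts; attempt++ {
			if err := check(); err == nil {
				return nil
			}
			// Add jitter and grow the base delay so repeated probes back off gradually.
			jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", jittered)
			time.Sleep(jittered)
			delay = delay * 3 / 2
		}
		return errors.New("machine did not come up in time")
	}
	
	func main() {
		probes := 0
		_ = waitFor(func() error {
			probes++
			if probes < 4 {
				return errors.New("no IP address yet")
			}
			return nil
		}, 10)
	}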
	I0318 13:49:23.501689 1157708 start.go:364] duration metric: took 4m10.403284517s to acquireMachinesLock for "old-k8s-version-909137"
	I0318 13:49:23.501769 1157708 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:49:23.501783 1157708 fix.go:54] fixHost starting: 
	I0318 13:49:23.502238 1157708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:49:23.502279 1157708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:49:23.520223 1157708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41799
	I0318 13:49:23.520696 1157708 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:49:23.521273 1157708 main.go:141] libmachine: Using API Version  1
	I0318 13:49:23.521304 1157708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:49:23.521693 1157708 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:49:23.521934 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:49:23.522089 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetState
	I0318 13:49:23.523696 1157708 fix.go:112] recreateIfNeeded on old-k8s-version-909137: state=Stopped err=<nil>
	I0318 13:49:23.523738 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	W0318 13:49:23.523894 1157708 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:49:23.526253 1157708 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-909137" ...
	I0318 13:49:22.113291 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.113733 1157416 main.go:141] libmachine: (no-preload-537236) Found IP for machine: 192.168.39.7
	I0318 13:49:22.113753 1157416 main.go:141] libmachine: (no-preload-537236) Reserving static IP address...
	I0318 13:49:22.113787 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has current primary IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.114159 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "no-preload-537236", mac: "52:54:00:21:a8:12", ip: "192.168.39.7"} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.114179 1157416 main.go:141] libmachine: (no-preload-537236) DBG | skip adding static IP to network mk-no-preload-537236 - found existing host DHCP lease matching {name: "no-preload-537236", mac: "52:54:00:21:a8:12", ip: "192.168.39.7"}
	I0318 13:49:22.114192 1157416 main.go:141] libmachine: (no-preload-537236) Reserved static IP address: 192.168.39.7
	I0318 13:49:22.114201 1157416 main.go:141] libmachine: (no-preload-537236) Waiting for SSH to be available...
	I0318 13:49:22.114208 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Getting to WaitForSSH function...
	I0318 13:49:22.116603 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.116944 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.116971 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.117082 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Using SSH client type: external
	I0318 13:49:22.117153 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Using SSH private key: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa (-rw-------)
	I0318 13:49:22.117192 1157416 main.go:141] libmachine: (no-preload-537236) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.7 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 13:49:22.117212 1157416 main.go:141] libmachine: (no-preload-537236) DBG | About to run SSH command:
	I0318 13:49:22.117236 1157416 main.go:141] libmachine: (no-preload-537236) DBG | exit 0
	I0318 13:49:22.240543 1157416 main.go:141] libmachine: (no-preload-537236) DBG | SSH cmd err, output: <nil>: 
	I0318 13:49:22.240913 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetConfigRaw
	I0318 13:49:22.241611 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetIP
	I0318 13:49:22.244016 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.244273 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.244302 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.244506 1157416 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236/config.json ...
	I0318 13:49:22.244729 1157416 machine.go:94] provisionDockerMachine start ...
	I0318 13:49:22.244750 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:49:22.244947 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:22.246869 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.247160 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.247198 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.247246 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:22.247401 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.247546 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.247722 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:22.247893 1157416 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:22.248160 1157416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0318 13:49:22.248174 1157416 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 13:49:22.353134 1157416 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 13:49:22.353164 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetMachineName
	I0318 13:49:22.353435 1157416 buildroot.go:166] provisioning hostname "no-preload-537236"
	I0318 13:49:22.353463 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetMachineName
	I0318 13:49:22.353636 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:22.356058 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.356463 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.356491 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.356645 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:22.356846 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.356965 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.357068 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:22.357201 1157416 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:22.357415 1157416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0318 13:49:22.357434 1157416 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-537236 && echo "no-preload-537236" | sudo tee /etc/hostname
	I0318 13:49:22.477651 1157416 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-537236
	
	I0318 13:49:22.477692 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:22.480537 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.480876 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.480905 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.481135 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:22.481342 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.481520 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.481676 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:22.481887 1157416 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:22.482066 1157416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0318 13:49:22.482082 1157416 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-537236' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-537236/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-537236' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 13:49:22.599489 1157416 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:49:22.599566 1157416 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18429-1106816/.minikube CaCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18429-1106816/.minikube}
	I0318 13:49:22.599596 1157416 buildroot.go:174] setting up certificates
	I0318 13:49:22.599609 1157416 provision.go:84] configureAuth start
	I0318 13:49:22.599624 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetMachineName
	I0318 13:49:22.599981 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetIP
	I0318 13:49:22.602425 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.602800 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.602831 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.602986 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:22.605036 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.605331 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.605356 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.605500 1157416 provision.go:143] copyHostCerts
	I0318 13:49:22.605589 1157416 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem, removing ...
	I0318 13:49:22.605600 1157416 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 13:49:22.605665 1157416 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem (1078 bytes)
	I0318 13:49:22.605786 1157416 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem, removing ...
	I0318 13:49:22.605795 1157416 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 13:49:22.605820 1157416 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem (1123 bytes)
	I0318 13:49:22.605895 1157416 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem, removing ...
	I0318 13:49:22.605904 1157416 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 13:49:22.605927 1157416 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem (1679 bytes)
	I0318 13:49:22.606003 1157416 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem org=jenkins.no-preload-537236 san=[127.0.0.1 192.168.39.7 localhost minikube no-preload-537236]
	I0318 13:49:22.810156 1157416 provision.go:177] copyRemoteCerts
	I0318 13:49:22.810249 1157416 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 13:49:22.810283 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:22.813018 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.813343 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.813376 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.813557 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:22.813743 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.813890 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:22.814080 1157416 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa Username:docker}
	I0318 13:49:22.898886 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 13:49:22.926296 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0318 13:49:22.953260 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 13:49:22.981248 1157416 provision.go:87] duration metric: took 381.624842ms to configureAuth
	I0318 13:49:22.981281 1157416 buildroot.go:189] setting minikube options for container-runtime
	I0318 13:49:22.981459 1157416 config.go:182] Loaded profile config "no-preload-537236": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 13:49:22.981573 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:22.984446 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.984848 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.984885 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.985061 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:22.985269 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.985405 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.985595 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:22.985728 1157416 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:22.985911 1157416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0318 13:49:22.985925 1157416 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 13:49:23.259439 1157416 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 13:49:23.259470 1157416 machine.go:97] duration metric: took 1.014725867s to provisionDockerMachine
	I0318 13:49:23.259483 1157416 start.go:293] postStartSetup for "no-preload-537236" (driver="kvm2")
	I0318 13:49:23.259518 1157416 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 13:49:23.259553 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:49:23.259937 1157416 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 13:49:23.259976 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:23.262875 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.263196 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:23.263228 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.263403 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:23.263684 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:23.263861 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:23.264029 1157416 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa Username:docker}
	I0318 13:49:23.348815 1157416 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 13:49:23.353550 1157416 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 13:49:23.353582 1157416 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/addons for local assets ...
	I0318 13:49:23.353659 1157416 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/files for local assets ...
	I0318 13:49:23.353759 1157416 filesync.go:149] local asset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> 11141362.pem in /etc/ssl/certs
	I0318 13:49:23.353885 1157416 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 13:49:23.364831 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:49:23.391345 1157416 start.go:296] duration metric: took 131.846395ms for postStartSetup
	I0318 13:49:23.391396 1157416 fix.go:56] duration metric: took 22.070135111s for fixHost
	I0318 13:49:23.391423 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:23.394229 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.394543 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:23.394583 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.394685 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:23.394937 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:23.395111 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:23.395266 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:23.395433 1157416 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:23.395619 1157416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0318 13:49:23.395631 1157416 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 13:49:23.501504 1157416 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710769763.449975975
	
	I0318 13:49:23.501532 1157416 fix.go:216] guest clock: 1710769763.449975975
	I0318 13:49:23.501542 1157416 fix.go:229] Guest: 2024-03-18 13:49:23.449975975 +0000 UTC Remote: 2024-03-18 13:49:23.39140181 +0000 UTC m=+283.498114537 (delta=58.574165ms)
	I0318 13:49:23.501564 1157416 fix.go:200] guest clock delta is within tolerance: 58.574165ms
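
fix.go here reads the guest's clock with date +%s.%N and accepts it only if the difference from the host clock is within tolerance (58.574165ms on this run). A small sketch of that comparison, using the values from this run and an assumed 1s tolerance:

    // Sketch: decide whether the guest clock (output of date +%s.%N) is close
    // enough to the host clock, as fix.go logs above. Values are from this run.
    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    func clockDelta(guestOutput string, host time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(guestOutput, 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta, nil
    }

    func main() {
        host := time.Unix(1710769763, 391401810) // the "Remote" timestamp from the log
        delta, _ := clockDelta("1710769763.449975975", host)
        fmt.Printf("delta=%v within tolerance=%v\n", delta, delta <= time.Second) // tolerance assumed: 1s
    }
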
	I0318 13:49:23.501584 1157416 start.go:83] releasing machines lock for "no-preload-537236", held for 22.180386627s
	I0318 13:49:23.501612 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:49:23.501900 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetIP
	I0318 13:49:23.504693 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.505130 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:23.505159 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.505331 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:49:23.505889 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:49:23.506092 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:49:23.506198 1157416 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 13:49:23.506252 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:23.506317 1157416 ssh_runner.go:195] Run: cat /version.json
	I0318 13:49:23.506351 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:23.509104 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.509414 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.509446 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:23.509465 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.509625 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:23.509819 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:23.509839 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.509853 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:23.510043 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:23.510103 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:23.510207 1157416 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa Username:docker}
	I0318 13:49:23.510261 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:23.510394 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:23.510541 1157416 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa Username:docker}
	I0318 13:49:23.616831 1157416 ssh_runner.go:195] Run: systemctl --version
	I0318 13:49:23.624184 1157416 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 13:49:23.779709 1157416 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 13:49:23.786535 1157416 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 13:49:23.786594 1157416 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 13:49:23.805716 1157416 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 13:49:23.805743 1157416 start.go:494] detecting cgroup driver to use...
	I0318 13:49:23.805850 1157416 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 13:49:23.825572 1157416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:49:23.842762 1157416 docker.go:217] disabling cri-docker service (if available) ...
	I0318 13:49:23.842817 1157416 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 13:49:23.859385 1157416 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 13:49:23.876416 1157416 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 13:49:24.005995 1157416 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 13:49:24.193107 1157416 docker.go:233] disabling docker service ...
	I0318 13:49:24.193173 1157416 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 13:49:24.212825 1157416 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 13:49:24.230448 1157416 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 13:49:24.385445 1157416 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 13:49:24.548640 1157416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 13:49:24.564678 1157416 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:49:24.592528 1157416 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 13:49:24.592601 1157416 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:49:24.604303 1157416 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 13:49:24.604394 1157416 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:49:24.616123 1157416 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:49:24.627956 1157416 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:49:24.639194 1157416 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 13:49:24.650789 1157416 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 13:49:24.661390 1157416 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 13:49:24.661443 1157416 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 13:49:24.677180 1157416 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
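
The sequence above is a fallback: the bridge-nf-call-iptables sysctl key only exists once the br_netfilter module is loaded, so the failed sysctl probe is treated as non-fatal, the module is loaded with modprobe, and IPv4 forwarding is enabled. A rough sketch of that probe-then-load pattern (illustrative; it runs the commands locally rather than over SSH):

    // Sketch: the bridge-nf-call-iptables key only appears after br_netfilter is
    // loaded, so treat a failed sysctl probe as "module not loaded yet", load it,
    // then enable IPv4 forwarding.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func ensureBridgeNetfilter() error {
        if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
            fmt.Println("netfilter key missing, loading br_netfilter:", err)
            if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
                return fmt.Errorf("modprobe br_netfilter: %w", err)
            }
        }
        return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
    }

    func main() {
        if err := ensureBridgeNetfilter(); err != nil {
            fmt.Println(err)
        }
    }
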
	I0318 13:49:24.687973 1157416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:49:24.827386 1157416 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 13:49:24.978805 1157416 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 13:49:24.978898 1157416 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 13:49:24.985647 1157416 start.go:562] Will wait 60s for crictl version
	I0318 13:49:24.985735 1157416 ssh_runner.go:195] Run: which crictl
	I0318 13:49:24.990325 1157416 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 13:49:25.038948 1157416 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 13:49:25.039020 1157416 ssh_runner.go:195] Run: crio --version
	I0318 13:49:25.068855 1157416 ssh_runner.go:195] Run: crio --version
	I0318 13:49:25.107104 1157416 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0318 13:49:23.527811 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .Start
	I0318 13:49:23.528000 1157708 main.go:141] libmachine: (old-k8s-version-909137) Ensuring networks are active...
	I0318 13:49:23.528714 1157708 main.go:141] libmachine: (old-k8s-version-909137) Ensuring network default is active
	I0318 13:49:23.529036 1157708 main.go:141] libmachine: (old-k8s-version-909137) Ensuring network mk-old-k8s-version-909137 is active
	I0318 13:49:23.529491 1157708 main.go:141] libmachine: (old-k8s-version-909137) Getting domain xml...
	I0318 13:49:23.530324 1157708 main.go:141] libmachine: (old-k8s-version-909137) Creating domain...
	I0318 13:49:24.765648 1157708 main.go:141] libmachine: (old-k8s-version-909137) Waiting to get IP...
	I0318 13:49:24.766664 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:24.767122 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:24.767182 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:24.767081 1158507 retry.go:31] will retry after 250.785143ms: waiting for machine to come up
	I0318 13:49:25.019755 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:25.020238 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:25.020273 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:25.020185 1158507 retry.go:31] will retry after 346.894257ms: waiting for machine to come up
	I0318 13:49:25.368815 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:25.369335 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:25.369372 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:25.369268 1158507 retry.go:31] will retry after 367.316359ms: waiting for machine to come up
	I0318 13:49:25.737835 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:25.738404 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:25.738438 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:25.738337 1158507 retry.go:31] will retry after 479.291041ms: waiting for machine to come up
	I0318 13:49:26.219103 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:26.219568 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:26.219599 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:26.219523 1158507 retry.go:31] will retry after 552.309382ms: waiting for machine to come up
	I0318 13:49:26.773363 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:26.773905 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:26.773935 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:26.773857 1158507 retry.go:31] will retry after 703.087388ms: waiting for machine to come up
	I0318 13:49:27.478730 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:27.479330 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:27.479363 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:27.479270 1158507 retry.go:31] will retry after 1.136606935s: waiting for machine to come up
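
Interleaved with the no-preload logs, process 1157708 is polling libvirt for the old-k8s-version machine's DHCP lease, retrying with progressively longer delays until an IP appears. A generic sketch of that wait loop, with lookupIP standing in for the lease query (a hypothetical helper, not the retry.go implementation):

    // Sketch: poll for a VM's DHCP-assigned IP with growing delays, similar to
    // the retry.go pattern above. lookupIP is a placeholder that always fails here.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    func lookupIP() (string, error) { return "", errors.New("no lease yet") } // placeholder

    func waitForIP(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(); err == nil {
                return ip, nil
            }
            fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
            time.Sleep(delay)
            if delay < 4*time.Second { // cap the growth, roughly like the logged intervals
                delay += delay / 2
            }
        }
        return "", errors.New("timed out waiting for an IP")
    }

    func main() {
        if _, err := waitForIP(2 * time.Second); err != nil {
            fmt.Println(err)
        }
    }
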
	I0318 13:49:25.108504 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetIP
	I0318 13:49:25.111416 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:25.111795 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:25.111827 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:25.112035 1157416 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0318 13:49:25.116688 1157416 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:49:25.131526 1157416 kubeadm.go:877] updating cluster {Name:no-preload-537236 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.0-rc.2 ClusterName:no-preload-537236 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 13:49:25.131663 1157416 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0318 13:49:25.131698 1157416 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:49:25.176340 1157416 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0318 13:49:25.176378 1157416 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 13:49:25.176474 1157416 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:49:25.176487 1157416 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 13:49:25.176524 1157416 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 13:49:25.176537 1157416 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 13:49:25.176592 1157416 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0318 13:49:25.176619 1157416 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 13:49:25.176773 1157416 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0318 13:49:25.176789 1157416 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 13:49:25.178479 1157416 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 13:49:25.178485 1157416 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 13:49:25.178486 1157416 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 13:49:25.178488 1157416 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 13:49:25.178480 1157416 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0318 13:49:25.178479 1157416 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:49:25.178540 1157416 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0318 13:49:25.178911 1157416 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 13:49:25.334172 1157416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 13:49:25.334873 1157416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0318 13:49:25.338330 1157416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 13:49:25.338825 1157416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0318 13:49:25.340192 1157416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 13:49:25.350053 1157416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0318 13:49:25.356621 1157416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 13:49:25.472528 1157416 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0318 13:49:25.472571 1157416 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 13:49:25.472627 1157416 ssh_runner.go:195] Run: which crictl
	I0318 13:49:25.630923 1157416 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0318 13:49:25.630996 1157416 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 13:49:25.631001 1157416 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0318 13:49:25.631042 1157416 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 13:49:25.630933 1157416 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0318 13:49:25.631089 1157416 ssh_runner.go:195] Run: which crictl
	I0318 13:49:25.631102 1157416 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0318 13:49:25.631134 1157416 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0318 13:49:25.631107 1157416 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 13:49:25.631169 1157416 ssh_runner.go:195] Run: which crictl
	I0318 13:49:25.631183 1157416 ssh_runner.go:195] Run: which crictl
	I0318 13:49:25.631052 1157416 ssh_runner.go:195] Run: which crictl
	I0318 13:49:25.631199 1157416 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0318 13:49:25.631220 1157416 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 13:49:25.631233 1157416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 13:49:25.631264 1157416 ssh_runner.go:195] Run: which crictl
	I0318 13:49:25.642598 1157416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 13:49:25.708001 1157416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 13:49:25.708026 1157416 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0318 13:49:25.708068 1157416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 13:49:25.708003 1157416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0318 13:49:25.708129 1157416 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 13:49:25.708162 1157416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0318 13:49:25.708225 1157416 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0318 13:49:25.708286 1157416 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 13:49:25.790492 1157416 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0318 13:49:25.790623 1157416 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 13:49:25.804436 1157416 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0318 13:49:25.804465 1157416 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 13:49:25.804503 1157416 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0318 13:49:25.804532 1157416 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 13:49:25.804583 1157416 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I0318 13:49:25.804657 1157416 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0318 13:49:25.804684 1157416 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0318 13:49:25.804720 1157416 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0318 13:49:25.804768 1157416 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0318 13:49:25.804801 1157416 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0318 13:49:25.807681 1157416 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0318 13:49:26.162719 1157416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:49:27.887846 1157416 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.083277557s)
	I0318 13:49:27.887882 1157416 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0: (2.083274384s)
	I0318 13:49:27.887894 1157416 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0318 13:49:27.887916 1157416 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0318 13:49:27.887927 1157416 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 13:49:27.887944 1157416 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1: (2.083121634s)
	I0318 13:49:27.887971 1157416 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0318 13:49:27.887971 1157416 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.083181595s)
	I0318 13:49:27.887990 1157416 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0318 13:49:27.888003 1157416 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.725256044s)
	I0318 13:49:27.888008 1157416 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 13:49:27.888040 1157416 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0318 13:49:27.888080 1157416 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:49:27.888114 1157416 ssh_runner.go:195] Run: which crictl
	I0318 13:49:27.893415 1157416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:49:28.617273 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:28.617711 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:28.617740 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:28.617665 1158507 retry.go:31] will retry after 947.818334ms: waiting for machine to come up
	I0318 13:49:29.566814 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:29.567157 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:29.567177 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:29.567121 1158507 retry.go:31] will retry after 1.328243934s: waiting for machine to come up
	I0318 13:49:30.897514 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:30.898041 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:30.898068 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:30.897988 1158507 retry.go:31] will retry after 2.213855703s: waiting for machine to come up
	I0318 13:49:30.272393 1157416 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.384351202s)
	I0318 13:49:30.272442 1157416 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0318 13:49:30.272459 1157416 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.379011748s)
	I0318 13:49:30.272477 1157416 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 13:49:30.272508 1157416 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0318 13:49:30.272589 1157416 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 13:49:30.272623 1157416 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0318 13:49:32.857821 1157416 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.585192694s)
	I0318 13:49:32.857907 1157416 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.585263486s)
	I0318 13:49:32.857990 1157416 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0318 13:49:32.857918 1157416 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0318 13:49:32.858038 1157416 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0318 13:49:32.858097 1157416 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0318 13:49:33.113781 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:33.114303 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:33.114332 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:33.114245 1158507 retry.go:31] will retry after 2.075415123s: waiting for machine to come up
	I0318 13:49:35.191096 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:35.191631 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:35.191665 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:35.191582 1158507 retry.go:31] will retry after 3.520577528s: waiting for machine to come up
	I0318 13:49:36.677356 1157416 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.8192286s)
	I0318 13:49:36.677398 1157416 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0318 13:49:36.677423 1157416 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0318 13:49:36.677464 1157416 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0318 13:49:38.844843 1157416 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.167353366s)
	I0318 13:49:38.844895 1157416 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0318 13:49:38.844933 1157416 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0318 13:49:38.845020 1157416 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0318 13:49:38.713777 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:38.714129 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:38.714242 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:38.714143 1158507 retry.go:31] will retry after 3.46520277s: waiting for machine to come up
	I0318 13:49:42.181399 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.181856 1157708 main.go:141] libmachine: (old-k8s-version-909137) Found IP for machine: 192.168.72.135
	I0318 13:49:42.181888 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has current primary IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.181897 1157708 main.go:141] libmachine: (old-k8s-version-909137) Reserving static IP address...
	I0318 13:49:42.182344 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "old-k8s-version-909137", mac: "52:54:00:58:c0:cb", ip: "192.168.72.135"} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.182387 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | skip adding static IP to network mk-old-k8s-version-909137 - found existing host DHCP lease matching {name: "old-k8s-version-909137", mac: "52:54:00:58:c0:cb", ip: "192.168.72.135"}
	I0318 13:49:42.182424 1157708 main.go:141] libmachine: (old-k8s-version-909137) Reserved static IP address: 192.168.72.135
	I0318 13:49:42.182453 1157708 main.go:141] libmachine: (old-k8s-version-909137) Waiting for SSH to be available...
	I0318 13:49:42.182470 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | Getting to WaitForSSH function...
	I0318 13:49:42.184589 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.184958 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.184999 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.185061 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | Using SSH client type: external
	I0318 13:49:42.185120 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | Using SSH private key: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/id_rsa (-rw-------)
	I0318 13:49:42.185162 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.135 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 13:49:42.185189 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | About to run SSH command:
	I0318 13:49:42.185204 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | exit 0
	I0318 13:49:42.312570 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | SSH cmd err, output: <nil>: 
	I0318 13:49:42.313005 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetConfigRaw
	I0318 13:49:42.313693 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetIP
	I0318 13:49:42.316497 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.316931 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.316965 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.317239 1157708 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/config.json ...
	I0318 13:49:42.317442 1157708 machine.go:94] provisionDockerMachine start ...
	I0318 13:49:42.317462 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:49:42.317688 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:42.320076 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.320444 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.320485 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.320655 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:42.320818 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:42.320980 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:42.321093 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:42.321257 1157708 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:42.321510 1157708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.135 22 <nil> <nil>}
	I0318 13:49:42.321528 1157708 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 13:49:42.433138 1157708 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 13:49:42.433186 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetMachineName
	I0318 13:49:42.433524 1157708 buildroot.go:166] provisioning hostname "old-k8s-version-909137"
	I0318 13:49:42.433558 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetMachineName
	I0318 13:49:42.433808 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:42.436869 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.437230 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.437264 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.437506 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:42.437739 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:42.437915 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:42.438092 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:42.438285 1157708 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:42.438513 1157708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.135 22 <nil> <nil>}
	I0318 13:49:42.438534 1157708 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-909137 && echo "old-k8s-version-909137" | sudo tee /etc/hostname
	I0318 13:49:42.560410 1157708 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-909137
	
	I0318 13:49:42.560439 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:42.563304 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.563637 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.563673 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.563837 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:42.564053 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:42.564236 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:42.564377 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:42.564581 1157708 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:42.564802 1157708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.135 22 <nil> <nil>}
	I0318 13:49:42.564820 1157708 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-909137' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-909137/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-909137' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 13:49:42.687138 1157708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:49:42.687173 1157708 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18429-1106816/.minikube CaCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18429-1106816/.minikube}
	I0318 13:49:42.687199 1157708 buildroot.go:174] setting up certificates
	I0318 13:49:42.687211 1157708 provision.go:84] configureAuth start
	I0318 13:49:42.687223 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetMachineName
	I0318 13:49:42.687600 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetIP
	I0318 13:49:42.690738 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.691148 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.691179 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.691316 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:42.693730 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.694070 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.694092 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.694255 1157708 provision.go:143] copyHostCerts
	I0318 13:49:42.694336 1157708 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem, removing ...
	I0318 13:49:42.694350 1157708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 13:49:42.694422 1157708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem (1078 bytes)
	I0318 13:49:42.694597 1157708 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem, removing ...
	I0318 13:49:42.694614 1157708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 13:49:42.694652 1157708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem (1123 bytes)
	I0318 13:49:42.694747 1157708 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem, removing ...
	I0318 13:49:42.694756 1157708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 13:49:42.694775 1157708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem (1679 bytes)
	I0318 13:49:42.694823 1157708 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-909137 san=[127.0.0.1 192.168.72.135 localhost minikube old-k8s-version-909137]
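
provision.go generates a per-machine server certificate signed by minikubeCA whose subject alternative names cover 127.0.0.1, the machine IP, localhost, minikube and the machine name, as listed above. A condensed sketch of issuing such a certificate with Go's crypto/x509, using in-memory ECDSA keys rather than the PEM files under .minikube (the real keys are RSA files on disk; this only illustrates the SAN list):

    // Sketch: issue a server certificate whose SANs match the list logged by
    // provision.go above. CA and server keys are generated in memory for illustration.
    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-909137"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            DNSNames:     []string{"localhost", "minikube", "old-k8s-version-909137"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.135")},
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        fmt.Println("server cert DER bytes:", len(der), "err:", err)
    }
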
	I0318 13:49:42.920182 1157708 provision.go:177] copyRemoteCerts
	I0318 13:49:42.920255 1157708 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 13:49:42.920295 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:42.923074 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.923374 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.923408 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.923533 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:42.923755 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:42.923957 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:42.924095 1157708 sshutil.go:53] new ssh client: &{IP:192.168.72.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/id_rsa Username:docker}
	I0318 13:49:43.649771 1157887 start.go:364] duration metric: took 4m1.881584436s to acquireMachinesLock for "default-k8s-diff-port-569210"
	I0318 13:49:43.649850 1157887 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:49:43.649868 1157887 fix.go:54] fixHost starting: 
	I0318 13:49:43.650335 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:49:43.650378 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:49:43.668606 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36261
	I0318 13:49:43.669107 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:49:43.669721 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:49:43.669755 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:49:43.670092 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:49:43.670269 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:49:43.670427 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetState
	I0318 13:49:43.671973 1157887 fix.go:112] recreateIfNeeded on default-k8s-diff-port-569210: state=Stopped err=<nil>
	I0318 13:49:43.672021 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	W0318 13:49:43.672150 1157887 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:49:43.673832 1157887 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-569210" ...
	I0318 13:49:40.621208 1157416 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (1.776156882s)
	I0318 13:49:40.621252 1157416 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0318 13:49:40.621281 1157416 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0318 13:49:40.621322 1157416 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0318 13:49:41.582256 1157416 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0318 13:49:41.582316 1157416 cache_images.go:123] Successfully loaded all cached images
	I0318 13:49:41.582324 1157416 cache_images.go:92] duration metric: took 16.405930257s to LoadCachedImages
	I0318 13:49:41.582341 1157416 kubeadm.go:928] updating node { 192.168.39.7 8443 v1.29.0-rc.2 crio true true} ...
	I0318 13:49:41.582550 1157416 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-537236 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-537236 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 13:49:41.582663 1157416 ssh_runner.go:195] Run: crio config
	I0318 13:49:41.635043 1157416 cni.go:84] Creating CNI manager for ""
	I0318 13:49:41.635074 1157416 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:49:41.635093 1157416 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 13:49:41.635128 1157416 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.7 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-537236 NodeName:no-preload-537236 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 13:49:41.635322 1157416 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.7
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-537236"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.7
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.7"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 13:49:41.635446 1157416 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0318 13:49:41.647072 1157416 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 13:49:41.647148 1157416 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 13:49:41.657448 1157416 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0318 13:49:41.675819 1157416 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0318 13:49:41.693989 1157416 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I0318 13:49:41.714954 1157416 ssh_runner.go:195] Run: grep 192.168.39.7	control-plane.minikube.internal$ /etc/hosts
	I0318 13:49:41.719161 1157416 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.7	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:49:41.732228 1157416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:49:41.871286 1157416 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:49:41.892827 1157416 certs.go:68] Setting up /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236 for IP: 192.168.39.7
	I0318 13:49:41.892850 1157416 certs.go:194] generating shared ca certs ...
	I0318 13:49:41.892868 1157416 certs.go:226] acquiring lock for ca certs: {Name:mka24dbce78b8549c3bcfe7842863cce2dcda207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:49:41.893054 1157416 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key
	I0318 13:49:41.893110 1157416 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key
	I0318 13:49:41.893125 1157416 certs.go:256] generating profile certs ...
	I0318 13:49:41.893246 1157416 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236/client.key
	I0318 13:49:41.893317 1157416 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236/apiserver.key.844e83a6
	I0318 13:49:41.893366 1157416 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236/proxy-client.key
	I0318 13:49:41.893482 1157416 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem (1338 bytes)
	W0318 13:49:41.893518 1157416 certs.go:480] ignoring /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136_empty.pem, impossibly tiny 0 bytes
	I0318 13:49:41.893528 1157416 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 13:49:41.893552 1157416 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem (1078 bytes)
	I0318 13:49:41.893573 1157416 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem (1123 bytes)
	I0318 13:49:41.893594 1157416 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem (1679 bytes)
	I0318 13:49:41.893628 1157416 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:49:41.894503 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 13:49:41.942278 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 13:49:41.978436 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 13:49:42.007161 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 13:49:42.036410 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0318 13:49:42.073179 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 13:49:42.098201 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 13:49:42.131599 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 13:49:42.159159 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem --> /usr/share/ca-certificates/1114136.pem (1338 bytes)
	I0318 13:49:42.186290 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /usr/share/ca-certificates/11141362.pem (1708 bytes)
	I0318 13:49:42.214362 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 13:49:42.241240 1157416 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 13:49:42.260511 1157416 ssh_runner.go:195] Run: openssl version
	I0318 13:49:42.267047 1157416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1114136.pem && ln -fs /usr/share/ca-certificates/1114136.pem /etc/ssl/certs/1114136.pem"
	I0318 13:49:42.278582 1157416 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1114136.pem
	I0318 13:49:42.283566 1157416 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:26 /usr/share/ca-certificates/1114136.pem
	I0318 13:49:42.283609 1157416 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1114136.pem
	I0318 13:49:42.289658 1157416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1114136.pem /etc/ssl/certs/51391683.0"
	I0318 13:49:42.300954 1157416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11141362.pem && ln -fs /usr/share/ca-certificates/11141362.pem /etc/ssl/certs/11141362.pem"
	I0318 13:49:42.312828 1157416 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11141362.pem
	I0318 13:49:42.319182 1157416 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:26 /usr/share/ca-certificates/11141362.pem
	I0318 13:49:42.319251 1157416 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11141362.pem
	I0318 13:49:42.325767 1157416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11141362.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 13:49:42.337544 1157416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 13:49:42.349053 1157416 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:49:42.354197 1157416 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:49:42.354249 1157416 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:49:42.361200 1157416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 13:49:42.374825 1157416 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:49:42.380098 1157416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 13:49:42.387161 1157416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 13:49:42.393702 1157416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 13:49:42.400193 1157416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 13:49:42.406243 1157416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 13:49:42.412423 1157416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 13:49:42.418599 1157416 kubeadm.go:391] StartCluster: {Name:no-preload-537236 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-537236 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:49:42.418747 1157416 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 13:49:42.418785 1157416 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:49:42.468980 1157416 cri.go:89] found id: ""
	I0318 13:49:42.469088 1157416 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 13:49:42.481101 1157416 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 13:49:42.481130 1157416 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 13:49:42.481137 1157416 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 13:49:42.481190 1157416 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 13:49:42.493014 1157416 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:49:42.494041 1157416 kubeconfig.go:125] found "no-preload-537236" server: "https://192.168.39.7:8443"
	I0318 13:49:42.496519 1157416 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 13:49:42.507415 1157416 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.7
	I0318 13:49:42.507448 1157416 kubeadm.go:1154] stopping kube-system containers ...
	I0318 13:49:42.507460 1157416 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 13:49:42.507513 1157416 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:49:42.554791 1157416 cri.go:89] found id: ""
	I0318 13:49:42.554859 1157416 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 13:49:42.574054 1157416 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:49:42.584928 1157416 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:49:42.584955 1157416 kubeadm.go:156] found existing configuration files:
	
	I0318 13:49:42.585009 1157416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:49:42.594987 1157416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:49:42.595045 1157416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:49:42.605058 1157416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:49:42.614968 1157416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:49:42.615042 1157416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:49:42.625169 1157416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:49:42.634838 1157416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:49:42.634905 1157416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:49:42.644785 1157416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:49:42.654196 1157416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:49:42.654254 1157416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:49:42.663757 1157416 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:49:42.673956 1157416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:42.792913 1157416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:43.799012 1157416 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.006050828s)
	I0318 13:49:43.799075 1157416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:44.061808 1157416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:44.189349 1157416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:44.329800 1157416 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:49:44.329897 1157416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:44.829990 1157416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:43.007024 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 13:49:43.033952 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0318 13:49:43.060218 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 13:49:43.086087 1157708 provision.go:87] duration metric: took 398.861833ms to configureAuth
	I0318 13:49:43.086116 1157708 buildroot.go:189] setting minikube options for container-runtime
	I0318 13:49:43.086326 1157708 config.go:182] Loaded profile config "old-k8s-version-909137": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0318 13:49:43.086442 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:43.089200 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.089534 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:43.089562 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.089758 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:43.089965 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:43.090134 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:43.090286 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:43.090501 1157708 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:43.090718 1157708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.135 22 <nil> <nil>}
	I0318 13:49:43.090744 1157708 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 13:49:43.401681 1157708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 13:49:43.401715 1157708 machine.go:97] duration metric: took 1.084258164s to provisionDockerMachine
	I0318 13:49:43.401728 1157708 start.go:293] postStartSetup for "old-k8s-version-909137" (driver="kvm2")
	I0318 13:49:43.401739 1157708 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 13:49:43.401759 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:49:43.402073 1157708 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 13:49:43.402116 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:43.404775 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.405164 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:43.405192 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.405335 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:43.405525 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:43.405740 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:43.405884 1157708 sshutil.go:53] new ssh client: &{IP:192.168.72.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/id_rsa Username:docker}
	I0318 13:49:43.493000 1157708 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 13:49:43.497705 1157708 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 13:49:43.497740 1157708 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/addons for local assets ...
	I0318 13:49:43.497818 1157708 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/files for local assets ...
	I0318 13:49:43.497931 1157708 filesync.go:149] local asset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> 11141362.pem in /etc/ssl/certs
	I0318 13:49:43.498058 1157708 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 13:49:43.509185 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:49:43.535401 1157708 start.go:296] duration metric: took 133.657179ms for postStartSetup
	I0318 13:49:43.535454 1157708 fix.go:56] duration metric: took 20.033670705s for fixHost
	I0318 13:49:43.535482 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:43.538464 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.538964 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:43.538998 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.539178 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:43.539386 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:43.539528 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:43.539702 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:43.539899 1157708 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:43.540120 1157708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.135 22 <nil> <nil>}
	I0318 13:49:43.540133 1157708 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 13:49:43.649578 1157708 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710769783.596310102
	
	I0318 13:49:43.649610 1157708 fix.go:216] guest clock: 1710769783.596310102
	I0318 13:49:43.649621 1157708 fix.go:229] Guest: 2024-03-18 13:49:43.596310102 +0000 UTC Remote: 2024-03-18 13:49:43.535459129 +0000 UTC m=+270.592972067 (delta=60.850973ms)
	I0318 13:49:43.649656 1157708 fix.go:200] guest clock delta is within tolerance: 60.850973ms
	I0318 13:49:43.649663 1157708 start.go:83] releasing machines lock for "old-k8s-version-909137", held for 20.147918331s
	I0318 13:49:43.649689 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:49:43.650002 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetIP
	I0318 13:49:43.652712 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.653114 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:43.653148 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.653278 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:49:43.653873 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:49:43.654112 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:49:43.654198 1157708 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 13:49:43.654264 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:43.654333 1157708 ssh_runner.go:195] Run: cat /version.json
	I0318 13:49:43.654369 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:43.657281 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.657390 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.657741 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:43.657811 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:43.657830 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.657855 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:43.657918 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.658016 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:43.658065 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:43.658199 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:43.658245 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:43.658326 1157708 sshutil.go:53] new ssh client: &{IP:192.168.72.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/id_rsa Username:docker}
	I0318 13:49:43.658411 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:43.658574 1157708 sshutil.go:53] new ssh client: &{IP:192.168.72.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/id_rsa Username:docker}
	I0318 13:49:43.737787 1157708 ssh_runner.go:195] Run: systemctl --version
	I0318 13:49:43.769157 1157708 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 13:49:43.920376 1157708 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 13:49:43.928165 1157708 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 13:49:43.928253 1157708 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 13:49:43.946102 1157708 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 13:49:43.946133 1157708 start.go:494] detecting cgroup driver to use...
	I0318 13:49:43.946210 1157708 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 13:49:43.963482 1157708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:49:43.978540 1157708 docker.go:217] disabling cri-docker service (if available) ...
	I0318 13:49:43.978613 1157708 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 13:49:43.999525 1157708 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 13:49:44.021242 1157708 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 13:49:44.198165 1157708 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 13:49:44.363408 1157708 docker.go:233] disabling docker service ...
	I0318 13:49:44.363474 1157708 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 13:49:44.383527 1157708 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 13:49:44.398888 1157708 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 13:49:44.547711 1157708 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 13:49:44.662762 1157708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 13:49:44.678786 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:49:44.702931 1157708 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0318 13:49:44.703004 1157708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:49:44.721453 1157708 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 13:49:44.721519 1157708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:49:44.739487 1157708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:49:44.757379 1157708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:49:44.777508 1157708 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 13:49:44.798788 1157708 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 13:49:44.814280 1157708 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 13:49:44.814383 1157708 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 13:49:44.836507 1157708 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 13:49:44.852614 1157708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:49:44.994352 1157708 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 13:49:45.184815 1157708 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 13:49:45.184907 1157708 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 13:49:45.190649 1157708 start.go:562] Will wait 60s for crictl version
	I0318 13:49:45.190724 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:45.195265 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 13:49:45.242737 1157708 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 13:49:45.242850 1157708 ssh_runner.go:195] Run: crio --version
	I0318 13:49:45.288154 1157708 ssh_runner.go:195] Run: crio --version
	I0318 13:49:45.331441 1157708 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0318 13:49:43.675531 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .Start
	I0318 13:49:43.675763 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Ensuring networks are active...
	I0318 13:49:43.676642 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Ensuring network default is active
	I0318 13:49:43.677014 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Ensuring network mk-default-k8s-diff-port-569210 is active
	I0318 13:49:43.677510 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Getting domain xml...
	I0318 13:49:43.678319 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Creating domain...
	I0318 13:49:45.002977 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting to get IP...
	I0318 13:49:45.003870 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:45.004406 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:45.004499 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:45.004392 1158648 retry.go:31] will retry after 294.950888ms: waiting for machine to come up
	I0318 13:49:45.301264 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:45.301835 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:45.301863 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:45.301747 1158648 retry.go:31] will retry after 291.810051ms: waiting for machine to come up
	I0318 13:49:45.595571 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:45.596720 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:45.596832 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:45.596786 1158648 retry.go:31] will retry after 390.232445ms: waiting for machine to come up
	I0318 13:49:45.988661 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:45.989506 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:45.989534 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:45.989393 1158648 retry.go:31] will retry after 487.148784ms: waiting for machine to come up
	I0318 13:49:46.477982 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:46.478667 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:46.478701 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:46.478600 1158648 retry.go:31] will retry after 474.795485ms: waiting for machine to come up
	I0318 13:49:45.332975 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetIP
	I0318 13:49:45.336274 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:45.336701 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:45.336753 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:45.336985 1157708 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0318 13:49:45.343147 1157708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:49:45.361840 1157708 kubeadm.go:877] updating cluster {Name:old-k8s-version-909137 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-909137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.135 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 13:49:45.361982 1157708 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0318 13:49:45.362040 1157708 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:49:45.419490 1157708 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 13:49:45.419587 1157708 ssh_runner.go:195] Run: which lz4
	I0318 13:49:45.424689 1157708 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0318 13:49:45.431110 1157708 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 13:49:45.431155 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0318 13:49:47.510385 1157708 crio.go:444] duration metric: took 2.085724633s to copy over tarball
	I0318 13:49:47.510483 1157708 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 13:49:45.330925 1157416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:45.364854 1157416 api_server.go:72] duration metric: took 1.035057096s to wait for apiserver process to appear ...
	I0318 13:49:45.364883 1157416 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:49:45.364927 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:49:45.365577 1157416 api_server.go:269] stopped: https://192.168.39.7:8443/healthz: Get "https://192.168.39.7:8443/healthz": dial tcp 192.168.39.7:8443: connect: connection refused
	I0318 13:49:45.865126 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:49:49.135799 1157416 api_server.go:279] https://192.168.39.7:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 13:49:49.135840 1157416 api_server.go:103] status: https://192.168.39.7:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 13:49:49.135862 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:49:49.154112 1157416 api_server.go:279] https://192.168.39.7:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 13:49:49.154142 1157416 api_server.go:103] status: https://192.168.39.7:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 13:49:49.365566 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:49:49.375812 1157416 api_server.go:279] https://192.168.39.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:49:49.375862 1157416 api_server.go:103] status: https://192.168.39.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:49:49.865027 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:49:49.873132 1157416 api_server.go:279] https://192.168.39.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:49:49.873176 1157416 api_server.go:103] status: https://192.168.39.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:49:50.365178 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:49:50.371461 1157416 api_server.go:279] https://192.168.39.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:49:50.371506 1157416 api_server.go:103] status: https://192.168.39.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:49:50.865038 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:49:50.870329 1157416 api_server.go:279] https://192.168.39.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:49:50.870383 1157416 api_server.go:103] status: https://192.168.39.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:49:51.365030 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:49:51.370284 1157416 api_server.go:279] https://192.168.39.7:8443/healthz returned 200:
	ok
	I0318 13:49:51.379599 1157416 api_server.go:141] control plane version: v1.29.0-rc.2
	I0318 13:49:51.379633 1157416 api_server.go:131] duration metric: took 6.014741397s to wait for apiserver health ...
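The run above polls https://192.168.39.7:8443/healthz roughly every half second, treating the 403 and 500 answers as "not ready yet" until the endpoint finally returns 200. As a rough illustration only (not minikube's actual api_server.go code), a Go polling loop of that shape could look like the sketch below; it assumes anonymous access and skips TLS verification because the probe runs before any client credentials are wired up.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls an apiserver /healthz URL until it answers 200 OK or
    // the deadline passes. 403/500 answers, like the ones in the log above, are
    // treated as "not ready yet" and retried after a short pause.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // The bootstrapping apiserver presents a cluster-local CA, so this
            // anonymous probe skips verification (illustration only).
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s did not become healthy within %s", url, timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.39.7:8443/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }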
	I0318 13:49:51.379645 1157416 cni.go:84] Creating CNI manager for ""
	I0318 13:49:51.379654 1157416 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:49:51.582399 1157416 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 13:49:46.955128 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:46.955620 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:46.955649 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:46.955579 1158648 retry.go:31] will retry after 817.278037ms: waiting for machine to come up
	I0318 13:49:47.774954 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:47.775449 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:47.775480 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:47.775391 1158648 retry.go:31] will retry after 1.032655883s: waiting for machine to come up
	I0318 13:49:48.810156 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:48.810699 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:48.810730 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:48.810644 1158648 retry.go:31] will retry after 1.1441145s: waiting for machine to come up
	I0318 13:49:49.956702 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:49.957179 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:49.957214 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:49.957105 1158648 retry.go:31] will retry after 1.428592019s: waiting for machine to come up
	I0318 13:49:51.387025 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:51.387627 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:51.387660 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:51.387555 1158648 retry.go:31] will retry after 2.266795202s: waiting for machine to come up
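In parallel, the default-k8s-diff-port-569210 machine is not up yet, so libmachine keeps retrying the IP lookup with steadily longer delays ("will retry after 817ms ... 1.03s ... 2.27s"). A hypothetical Go helper for that retry-with-growing-backoff pattern (not libmachine's API) might be:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff calls fn until it succeeds or the attempts run out,
    // sleeping a little longer (plus jitter) after every failure, in the spirit
    // of the "will retry after ..." lines above.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            delay := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("attempt %d failed (%v), will retry after %s\n", i+1, err, delay)
            time.Sleep(delay)
        }
        return err
    }

    func main() {
        tries := 0
        err := retryWithBackoff(10, 300*time.Millisecond, func() error {
            // Stand-in for asking the hypervisor for the domain's current IP;
            // it fails a few times before "the machine comes up".
            tries++
            if tries < 4 {
                return errors.New("unable to find current IP address of domain")
            }
            return nil
        })
        fmt.Println("done:", err)
    }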
	I0318 13:49:50.947045 1157708 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.436514023s)
	I0318 13:49:50.947084 1157708 crio.go:451] duration metric: took 3.436661543s to extract the tarball
	I0318 13:49:50.947095 1157708 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 13:49:51.007406 1157708 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:49:51.048060 1157708 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 13:49:51.048091 1157708 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 13:49:51.048181 1157708 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:49:51.048228 1157708 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 13:49:51.048287 1157708 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0318 13:49:51.048346 1157708 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0318 13:49:51.048398 1157708 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 13:49:51.048432 1157708 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0318 13:49:51.048232 1157708 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 13:49:51.048183 1157708 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 13:49:51.049960 1157708 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0318 13:49:51.050268 1157708 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:49:51.050288 1157708 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0318 13:49:51.050355 1157708 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 13:49:51.050594 1157708 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 13:49:51.050627 1157708 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0318 13:49:51.050584 1157708 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 13:49:51.051230 1157708 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 13:49:51.219906 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0318 13:49:51.220734 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 13:49:51.235283 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0318 13:49:51.236445 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0318 13:49:51.246700 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0318 13:49:51.251299 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0318 13:49:51.311054 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0318 13:49:51.311292 1157708 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0318 13:49:51.311336 1157708 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0318 13:49:51.311389 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:51.343594 1157708 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0318 13:49:51.343649 1157708 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 13:49:51.343739 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:51.391608 1157708 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0318 13:49:51.391657 1157708 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 13:49:51.391706 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:51.448987 1157708 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0318 13:49:51.449029 1157708 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0318 13:49:51.449058 1157708 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 13:49:51.449061 1157708 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0318 13:49:51.449088 1157708 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0318 13:49:51.449103 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:51.449035 1157708 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0318 13:49:51.449135 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0318 13:49:51.449178 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:51.449207 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 13:49:51.449245 1157708 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0318 13:49:51.449267 1157708 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 13:49:51.449317 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:51.449210 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:51.449223 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0318 13:49:51.469614 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0318 13:49:51.469613 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0318 13:49:51.562455 1157708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0318 13:49:51.562506 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0318 13:49:51.564170 1157708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0318 13:49:51.564269 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0318 13:49:51.578471 1157708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0318 13:49:51.615689 1157708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0318 13:49:51.615708 1157708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0318 13:49:51.657287 1157708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0318 13:49:51.657361 1157708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0318 13:49:51.956746 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:49:52.106933 1157708 cache_images.go:92] duration metric: took 1.058823514s to LoadCachedImages
	W0318 13:49:52.107046 1157708 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
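Here the old-k8s-version profile finds no preloaded v1.20.0 images in CRI-O, so each required image has to be checked and, if absent, loaded from the local cache directory. A minimal sketch of the presence check, shelling out to crictl the same way the log does and assuming crictl's JSON output keeps image entries under an "images" array with "repoTags" fields:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // imageList mirrors the relevant part of `crictl images --output json`.
    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    // missingImages reports which required tags are absent from the container
    // runtime, roughly the decision behind "couldn't find preloaded image ...
    // assuming images are not preloaded" in the log above.
    func missingImages(required []string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            return nil, err
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            return nil, err
        }
        present := map[string]bool{}
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                present[tag] = true
            }
        }
        var missing []string
        for _, tag := range required {
            if !present[tag] {
                missing = append(missing, tag)
            }
        }
        return missing, nil
    }

    func main() {
        missing, err := missingImages([]string{
            "registry.k8s.io/kube-apiserver:v1.20.0",
            "registry.k8s.io/pause:3.2",
        })
        fmt.Println(missing, err)
    }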
	I0318 13:49:52.107064 1157708 kubeadm.go:928] updating node { 192.168.72.135 8443 v1.20.0 crio true true} ...
	I0318 13:49:52.107259 1157708 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-909137 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.135
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-909137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 13:49:52.107348 1157708 ssh_runner.go:195] Run: crio config
	I0318 13:49:52.163493 1157708 cni.go:84] Creating CNI manager for ""
	I0318 13:49:52.163526 1157708 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:49:52.163546 1157708 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 13:49:52.163572 1157708 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.135 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-909137 NodeName:old-k8s-version-909137 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.135"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.135 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0318 13:49:52.163740 1157708 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.135
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-909137"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.135
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.135"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 13:49:52.163818 1157708 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0318 13:49:52.175668 1157708 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 13:49:52.175740 1157708 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 13:49:52.186745 1157708 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0318 13:49:52.209877 1157708 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 13:49:52.232921 1157708 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0318 13:49:52.256571 1157708 ssh_runner.go:195] Run: grep 192.168.72.135	control-plane.minikube.internal$ /etc/hosts
	I0318 13:49:52.262776 1157708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.135	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
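The shell pipeline above strips any stale control-plane.minikube.internal entry from /etc/hosts and appends the current mapping. The same idea in Go, as an illustration only (it writes to a scratch path and takes no locks on the real /etc/hosts):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry rewrites a hosts file so it contains exactly one line
    // mapping hostname to ip, which is what the grep/echo/cp pipeline above
    // achieves for control-plane.minikube.internal.
    func ensureHostsEntry(path, ip, hostname string) error {
        data, err := os.ReadFile(path)
        if err != nil && !os.IsNotExist(err) {
            return err
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            // Drop blank lines and any previous mapping for this hostname.
            if line == "" || strings.HasSuffix(line, "\t"+hostname) {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+hostname)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        // Written to a scratch file rather than /etc/hosts for the example.
        fmt.Println(ensureHostsEntry("/tmp/hosts.example", "192.168.72.135", "control-plane.minikube.internal"))
    }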
	I0318 13:49:52.278435 1157708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:49:52.422705 1157708 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:49:52.443710 1157708 certs.go:68] Setting up /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137 for IP: 192.168.72.135
	I0318 13:49:52.443740 1157708 certs.go:194] generating shared ca certs ...
	I0318 13:49:52.443760 1157708 certs.go:226] acquiring lock for ca certs: {Name:mka24dbce78b8549c3bcfe7842863cce2dcda207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:49:52.443951 1157708 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key
	I0318 13:49:52.444009 1157708 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key
	I0318 13:49:52.444023 1157708 certs.go:256] generating profile certs ...
	I0318 13:49:52.444155 1157708 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/client.key
	I0318 13:49:52.444239 1157708 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/apiserver.key.e9806bd6
	I0318 13:49:52.444303 1157708 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/proxy-client.key
	I0318 13:49:52.444492 1157708 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem (1338 bytes)
	W0318 13:49:52.444532 1157708 certs.go:480] ignoring /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136_empty.pem, impossibly tiny 0 bytes
	I0318 13:49:52.444548 1157708 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 13:49:52.444585 1157708 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem (1078 bytes)
	I0318 13:49:52.444633 1157708 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem (1123 bytes)
	I0318 13:49:52.444672 1157708 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem (1679 bytes)
	I0318 13:49:52.444729 1157708 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:49:52.445363 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 13:49:52.506720 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 13:49:52.550057 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 13:49:52.586845 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 13:49:52.627933 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0318 13:49:52.681479 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 13:49:52.722052 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 13:49:52.755021 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 13:49:52.782181 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 13:49:52.808269 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem --> /usr/share/ca-certificates/1114136.pem (1338 bytes)
	I0318 13:49:52.835041 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /usr/share/ca-certificates/11141362.pem (1708 bytes)
	I0318 13:49:52.863776 1157708 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 13:49:52.883579 1157708 ssh_runner.go:195] Run: openssl version
	I0318 13:49:52.889846 1157708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 13:49:52.902288 1157708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:49:52.908241 1157708 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:49:52.908302 1157708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:49:52.915392 1157708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 13:49:52.928374 1157708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1114136.pem && ln -fs /usr/share/ca-certificates/1114136.pem /etc/ssl/certs/1114136.pem"
	I0318 13:49:52.941444 1157708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1114136.pem
	I0318 13:49:52.946463 1157708 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:26 /usr/share/ca-certificates/1114136.pem
	I0318 13:49:52.946514 1157708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1114136.pem
	I0318 13:49:52.953447 1157708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1114136.pem /etc/ssl/certs/51391683.0"
	I0318 13:49:52.966231 1157708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11141362.pem && ln -fs /usr/share/ca-certificates/11141362.pem /etc/ssl/certs/11141362.pem"
	I0318 13:49:52.977986 1157708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11141362.pem
	I0318 13:49:52.982748 1157708 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:26 /usr/share/ca-certificates/11141362.pem
	I0318 13:49:52.982809 1157708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11141362.pem
	I0318 13:49:52.988715 1157708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11141362.pem /etc/ssl/certs/3ec20f2e.0"
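The openssl x509 -hash -noout calls compute each CA certificate's subject hash, and the ln -fs commands install the certificates under /etc/ssl/certs/<hash>.0 so OpenSSL-based clients can find them by hash. A small Go sketch of that convention, assuming an openssl binary on PATH:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCACert installs certPath under OpenSSL's hashed name (<subject-hash>.0)
    // inside certsDir, the effect of the "openssl x509 -hash -noout" plus
    // "ln -fs ... /etc/ssl/certs/<hash>.0" steps in the log above.
    func linkCACert(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
        link := filepath.Join(certsDir, hash+".0")
        _ = os.Remove(link) // replace a stale link, as ln -fs would
        return os.Symlink(certPath, link)
    }

    func main() {
        fmt.Println(linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
    }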
	I0318 13:49:51.626774 1157416 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 13:49:51.642685 1157416 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 13:49:51.669902 1157416 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 13:49:51.759474 1157416 system_pods.go:59] 8 kube-system pods found
	I0318 13:49:51.759519 1157416 system_pods.go:61] "coredns-76f75df574-kxzfm" [d0aad76d-f135-4d4a-a2f5-117707b4b2f4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 13:49:51.759530 1157416 system_pods.go:61] "etcd-no-preload-537236" [d02ad01c-1b16-4b97-be18-237b1cbfe3aa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 13:49:51.759539 1157416 system_pods.go:61] "kube-apiserver-no-preload-537236" [00b05050-229b-47f4-9af2-12be1711200a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 13:49:51.759548 1157416 system_pods.go:61] "kube-controller-manager-no-preload-537236" [3e7b86df-4111-4bd9-8925-a22cf12e10ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 13:49:51.759552 1157416 system_pods.go:61] "kube-proxy-5dspp" [adee19a0-eeb6-438f-a55d-30f1e1d87ef6] Running
	I0318 13:49:51.759557 1157416 system_pods.go:61] "kube-scheduler-no-preload-537236" [17628d51-80f5-4985-8ddb-151cab8f8c5d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 13:49:51.759562 1157416 system_pods.go:61] "metrics-server-57f55c9bc5-hhh5m" [282de489-beee-47a9-bd29-5da43cf70146] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:49:51.759565 1157416 system_pods.go:61] "storage-provisioner" [97d3de68-0863-4bba-9cb1-2ce98d791935] Running
	I0318 13:49:51.759578 1157416 system_pods.go:74] duration metric: took 89.654007ms to wait for pod list to return data ...
	I0318 13:49:51.759591 1157416 node_conditions.go:102] verifying NodePressure condition ...
	I0318 13:49:51.764164 1157416 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:49:51.764191 1157416 node_conditions.go:123] node cpu capacity is 2
	I0318 13:49:51.764204 1157416 node_conditions.go:105] duration metric: took 4.607295ms to run NodePressure ...
	I0318 13:49:51.764227 1157416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:52.645812 1157416 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 13:49:52.653573 1157416 kubeadm.go:733] kubelet initialised
	I0318 13:49:52.653602 1157416 kubeadm.go:734] duration metric: took 7.75557ms waiting for restarted kubelet to initialise ...
	I0318 13:49:52.653614 1157416 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:49:52.662179 1157416 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-kxzfm" in "kube-system" namespace to be "Ready" ...
	I0318 13:49:54.678656 1157416 pod_ready.go:102] pod "coredns-76f75df574-kxzfm" in "kube-system" namespace has status "Ready":"False"
	I0318 13:49:53.656476 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:53.656913 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:53.656943 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:53.656870 1158648 retry.go:31] will retry after 2.341702781s: waiting for machine to come up
	I0318 13:49:56.001662 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:56.002163 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:56.002188 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:56.002106 1158648 retry.go:31] will retry after 2.885262489s: waiting for machine to come up
	I0318 13:49:53.000141 1157708 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:49:53.005021 1157708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 13:49:53.011156 1157708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 13:49:53.018329 1157708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 13:49:53.025687 1157708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 13:49:53.032199 1157708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 13:49:53.039048 1157708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
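These openssl x509 -checkend 86400 runs confirm that each control-plane certificate is still valid for at least one more day before the restart proceeds. A pure-Go equivalent of the same check using crypto/x509 (a sketch; the file path is just the one mentioned in the log):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in a PEM file expires
    // within d. `openssl x509 -checkend 86400` asks the same question and exits
    // non-zero when the certificate will expire inside the window.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
        fmt.Println("expires within 24h:", soon, err)
    }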
	I0318 13:49:53.045789 1157708 kubeadm.go:391] StartCluster: {Name:old-k8s-version-909137 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-909137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.135 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:49:53.045882 1157708 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 13:49:53.045931 1157708 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:49:53.085682 1157708 cri.go:89] found id: ""
	I0318 13:49:53.085788 1157708 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 13:49:53.098063 1157708 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 13:49:53.098091 1157708 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 13:49:53.098098 1157708 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 13:49:53.098153 1157708 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 13:49:53.109692 1157708 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:49:53.110853 1157708 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-909137" does not appear in /home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 13:49:53.111862 1157708 kubeconfig.go:62] /home/jenkins/minikube-integration/18429-1106816/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-909137" cluster setting kubeconfig missing "old-k8s-version-909137" context setting]
	I0318 13:49:53.113334 1157708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/kubeconfig: {Name:mk9c139f2702214315ee08dd7c5d02f739047458 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:49:53.115135 1157708 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 13:49:53.125910 1157708 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.135
	I0318 13:49:53.125949 1157708 kubeadm.go:1154] stopping kube-system containers ...
	I0318 13:49:53.125965 1157708 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 13:49:53.126029 1157708 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:49:53.172181 1157708 cri.go:89] found id: ""
	I0318 13:49:53.172268 1157708 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 13:49:53.189585 1157708 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:49:53.200744 1157708 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:49:53.200768 1157708 kubeadm.go:156] found existing configuration files:
	
	I0318 13:49:53.200811 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:49:53.211176 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:49:53.211250 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:49:53.221744 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:49:53.231342 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:49:53.231404 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:49:53.242162 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:49:53.252408 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:49:53.252480 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:49:53.262690 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:49:53.272829 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:49:53.272903 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:49:53.283287 1157708 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
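With the stale kubeconfig fragments cleaned up, the freshly rendered kubeadm.yaml.new is copied over kubeadm.yaml; earlier in this run a diff -u between the two files is used to decide whether the cluster needs reconfiguration at all. One way to sketch that write-only-if-changed idea in Go (a hypothetical helper with a trimmed config fragment, not minikube's code):

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "strings"
    )

    // writeIfChanged writes content to path only when it differs from what is
    // already on disk, the same effect as diffing kubeadm.yaml against
    // kubeadm.yaml.new before copying it into place.
    func writeIfChanged(path string, content []byte) (bool, error) {
        old, err := os.ReadFile(path)
        if err == nil && bytes.Equal(old, content) {
            return false, nil // nothing to do
        }
        if err := os.WriteFile(path, content, 0o644); err != nil {
            return false, err
        }
        return true, nil
    }

    func main() {
        // A tiny fragment of the generated config above, built line by line so
        // the example stays self-contained.
        cfg := strings.Join([]string{
            "apiVersion: kubeadm.k8s.io/v1beta2",
            "kind: ClusterConfiguration",
            "kubernetesVersion: v1.20.0",
            "controlPlaneEndpoint: control-plane.minikube.internal:8443",
            "",
        }, "\n")
        changed, err := writeIfChanged("/tmp/kubeadm.yaml.example", []byte(cfg))
        fmt.Println("rewrote:", changed, err)
    }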
	I0318 13:49:53.294124 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:53.437482 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:54.297415 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:54.588919 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:54.758204 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:54.863030 1157708 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:49:54.863140 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:55.363708 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:55.863301 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:56.364064 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:56.863896 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:57.363240 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:57.863621 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:57.212652 1157416 pod_ready.go:102] pod "coredns-76f75df574-kxzfm" in "kube-system" namespace has status "Ready":"False"
	I0318 13:49:57.669562 1157416 pod_ready.go:92] pod "coredns-76f75df574-kxzfm" in "kube-system" namespace has status "Ready":"True"
	I0318 13:49:57.669584 1157416 pod_ready.go:81] duration metric: took 5.007366512s for pod "coredns-76f75df574-kxzfm" in "kube-system" namespace to be "Ready" ...
	I0318 13:49:57.669597 1157416 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:49:58.176528 1157416 pod_ready.go:92] pod "etcd-no-preload-537236" in "kube-system" namespace has status "Ready":"True"
	I0318 13:49:58.176557 1157416 pod_ready.go:81] duration metric: took 506.95201ms for pod "etcd-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:49:58.176570 1157416 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:49:58.888400 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:58.888706 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:58.888742 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:58.888681 1158648 retry.go:31] will retry after 4.094701536s: waiting for machine to come up
	I0318 13:49:58.363294 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:58.864051 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:59.363586 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:59.863802 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:00.363862 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:00.864277 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:01.363381 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:01.864307 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:02.363278 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:02.863315 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:04.309987 1157263 start.go:364] duration metric: took 57.988518292s to acquireMachinesLock for "embed-certs-173036"
	I0318 13:50:04.310046 1157263 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:50:04.310062 1157263 fix.go:54] fixHost starting: 
	I0318 13:50:04.310469 1157263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:50:04.310506 1157263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:50:04.330585 1157263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41957
	I0318 13:50:04.331049 1157263 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:50:04.331648 1157263 main.go:141] libmachine: Using API Version  1
	I0318 13:50:04.331684 1157263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:50:04.332066 1157263 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:50:04.332316 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:50:04.332513 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetState
	I0318 13:50:04.334091 1157263 fix.go:112] recreateIfNeeded on embed-certs-173036: state=Stopped err=<nil>
	I0318 13:50:04.334117 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	W0318 13:50:04.334299 1157263 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:50:04.336146 1157263 out.go:177] * Restarting existing kvm2 VM for "embed-certs-173036" ...
	I0318 13:50:00.184168 1157416 pod_ready.go:102] pod "kube-apiserver-no-preload-537236" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:01.183846 1157416 pod_ready.go:92] pod "kube-apiserver-no-preload-537236" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:01.183872 1157416 pod_ready.go:81] duration metric: took 3.007292631s for pod "kube-apiserver-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:01.183884 1157416 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:03.206725 1157416 pod_ready.go:102] pod "kube-controller-manager-no-preload-537236" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:04.691357 1157416 pod_ready.go:92] pod "kube-controller-manager-no-preload-537236" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:04.691391 1157416 pod_ready.go:81] duration metric: took 3.507497259s for pod "kube-controller-manager-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:04.691410 1157416 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5dspp" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:04.696593 1157416 pod_ready.go:92] pod "kube-proxy-5dspp" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:04.696618 1157416 pod_ready.go:81] duration metric: took 5.198628ms for pod "kube-proxy-5dspp" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:04.696627 1157416 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:04.700977 1157416 pod_ready.go:92] pod "kube-scheduler-no-preload-537236" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:04.700995 1157416 pod_ready.go:81] duration metric: took 4.36095ms for pod "kube-scheduler-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:04.701006 1157416 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:02.985340 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:02.985804 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has current primary IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:02.985818 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Found IP for machine: 192.168.61.3
	I0318 13:50:02.985828 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Reserving static IP address...
	I0318 13:50:02.986233 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-569210", mac: "52:54:00:4d:48:26", ip: "192.168.61.3"} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:02.986292 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | skip adding static IP to network mk-default-k8s-diff-port-569210 - found existing host DHCP lease matching {name: "default-k8s-diff-port-569210", mac: "52:54:00:4d:48:26", ip: "192.168.61.3"}
	I0318 13:50:02.986307 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Reserved static IP address: 192.168.61.3
	I0318 13:50:02.986321 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for SSH to be available...
	I0318 13:50:02.986337 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | Getting to WaitForSSH function...
	I0318 13:50:02.988609 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:02.988962 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:02.988995 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:02.989209 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | Using SSH client type: external
	I0318 13:50:02.989235 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | Using SSH private key: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa (-rw-------)
	I0318 13:50:02.989272 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 13:50:02.989293 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | About to run SSH command:
	I0318 13:50:02.989306 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | exit 0
	I0318 13:50:03.112557 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | SSH cmd err, output: <nil>: 
	I0318 13:50:03.112907 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetConfigRaw
	I0318 13:50:03.113605 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetIP
	I0318 13:50:03.116140 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.116569 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:03.116599 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.116858 1157887 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/config.json ...
	I0318 13:50:03.117065 1157887 machine.go:94] provisionDockerMachine start ...
	I0318 13:50:03.117091 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:50:03.117296 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:03.119506 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.119861 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:03.119891 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.120015 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:03.120212 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.120429 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.120608 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:03.120798 1157887 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:03.120995 1157887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0318 13:50:03.121010 1157887 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 13:50:03.221645 1157887 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 13:50:03.221693 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetMachineName
	I0318 13:50:03.221990 1157887 buildroot.go:166] provisioning hostname "default-k8s-diff-port-569210"
	I0318 13:50:03.222027 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetMachineName
	I0318 13:50:03.222257 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:03.225134 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.225543 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:03.225568 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.225714 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:03.226022 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.226225 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.226400 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:03.226595 1157887 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:03.226870 1157887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0318 13:50:03.226893 1157887 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-569210 && echo "default-k8s-diff-port-569210" | sudo tee /etc/hostname
	I0318 13:50:03.350362 1157887 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-569210
	
	I0318 13:50:03.350398 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:03.353307 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.353700 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:03.353737 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.353911 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:03.354111 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.354283 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.354413 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:03.354600 1157887 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:03.354805 1157887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0318 13:50:03.354824 1157887 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-569210' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-569210/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-569210' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 13:50:03.471084 1157887 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:50:03.471120 1157887 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18429-1106816/.minikube CaCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18429-1106816/.minikube}
	I0318 13:50:03.471159 1157887 buildroot.go:174] setting up certificates
	I0318 13:50:03.471229 1157887 provision.go:84] configureAuth start
	I0318 13:50:03.471247 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetMachineName
	I0318 13:50:03.471576 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetIP
	I0318 13:50:03.474528 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.474918 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:03.474957 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.475210 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:03.477624 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.477910 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:03.477936 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.478118 1157887 provision.go:143] copyHostCerts
	I0318 13:50:03.478196 1157887 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem, removing ...
	I0318 13:50:03.478213 1157887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 13:50:03.478281 1157887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem (1078 bytes)
	I0318 13:50:03.478424 1157887 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem, removing ...
	I0318 13:50:03.478437 1157887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 13:50:03.478466 1157887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem (1123 bytes)
	I0318 13:50:03.478537 1157887 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem, removing ...
	I0318 13:50:03.478548 1157887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 13:50:03.478571 1157887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem (1679 bytes)
	I0318 13:50:03.478640 1157887 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-569210 san=[127.0.0.1 192.168.61.3 default-k8s-diff-port-569210 localhost minikube]
	I0318 13:50:03.600956 1157887 provision.go:177] copyRemoteCerts
	I0318 13:50:03.601028 1157887 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 13:50:03.601058 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:03.603986 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.604437 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:03.604468 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.604659 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:03.604922 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.605086 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:03.605260 1157887 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa Username:docker}
	I0318 13:50:03.688256 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0318 13:50:03.716748 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 13:50:03.744848 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 13:50:03.771601 1157887 provision.go:87] duration metric: took 300.358039ms to configureAuth
	I0318 13:50:03.771631 1157887 buildroot.go:189] setting minikube options for container-runtime
	I0318 13:50:03.771893 1157887 config.go:182] Loaded profile config "default-k8s-diff-port-569210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:50:03.771992 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:03.774410 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.774725 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:03.774760 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.774926 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:03.775099 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.775292 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.775456 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:03.775642 1157887 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:03.775872 1157887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0318 13:50:03.775901 1157887 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 13:50:04.068202 1157887 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 13:50:04.068242 1157887 machine.go:97] duration metric: took 951.160051ms to provisionDockerMachine
	I0318 13:50:04.068259 1157887 start.go:293] postStartSetup for "default-k8s-diff-port-569210" (driver="kvm2")
	I0318 13:50:04.068277 1157887 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 13:50:04.068303 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:50:04.068677 1157887 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 13:50:04.068712 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:04.071619 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.071974 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:04.072002 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.072148 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:04.072354 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:04.072519 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:04.072639 1157887 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa Username:docker}
	I0318 13:50:04.157469 1157887 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 13:50:04.162629 1157887 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 13:50:04.162655 1157887 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/addons for local assets ...
	I0318 13:50:04.162719 1157887 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/files for local assets ...
	I0318 13:50:04.162810 1157887 filesync.go:149] local asset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> 11141362.pem in /etc/ssl/certs
	I0318 13:50:04.162911 1157887 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 13:50:04.173898 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:50:04.204771 1157887 start.go:296] duration metric: took 136.495479ms for postStartSetup
	I0318 13:50:04.204814 1157887 fix.go:56] duration metric: took 20.554947186s for fixHost
	I0318 13:50:04.204839 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:04.207619 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.207923 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:04.207951 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.208088 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:04.208296 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:04.208509 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:04.208657 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:04.208801 1157887 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:04.208975 1157887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0318 13:50:04.208988 1157887 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 13:50:04.309828 1157887 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710769804.283357411
	
	I0318 13:50:04.309861 1157887 fix.go:216] guest clock: 1710769804.283357411
	I0318 13:50:04.309871 1157887 fix.go:229] Guest: 2024-03-18 13:50:04.283357411 +0000 UTC Remote: 2024-03-18 13:50:04.204818975 +0000 UTC m=+262.583280441 (delta=78.538436ms)
	I0318 13:50:04.309898 1157887 fix.go:200] guest clock delta is within tolerance: 78.538436ms
	I0318 13:50:04.309904 1157887 start.go:83] releasing machines lock for "default-k8s-diff-port-569210", held for 20.660081187s
	I0318 13:50:04.309933 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:50:04.310247 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetIP
	I0318 13:50:04.313302 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.313747 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:04.313777 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.313956 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:50:04.314591 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:50:04.314792 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:50:04.314878 1157887 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 13:50:04.314934 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:04.315014 1157887 ssh_runner.go:195] Run: cat /version.json
	I0318 13:50:04.315059 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:04.318021 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.318056 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.318438 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:04.318474 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.318500 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:04.318518 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.318661 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:04.318763 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:04.318879 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:04.318962 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:04.319052 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:04.319110 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:04.319229 1157887 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa Username:docker}
	I0318 13:50:04.319286 1157887 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa Username:docker}
	I0318 13:50:04.426710 1157887 ssh_runner.go:195] Run: systemctl --version
	I0318 13:50:04.433482 1157887 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 13:50:04.590331 1157887 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 13:50:04.598896 1157887 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 13:50:04.598974 1157887 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 13:50:04.617060 1157887 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 13:50:04.617095 1157887 start.go:494] detecting cgroup driver to use...
	I0318 13:50:04.617190 1157887 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 13:50:04.633902 1157887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:50:04.648705 1157887 docker.go:217] disabling cri-docker service (if available) ...
	I0318 13:50:04.648759 1157887 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 13:50:04.665516 1157887 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 13:50:04.681326 1157887 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 13:50:04.798310 1157887 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 13:50:04.972066 1157887 docker.go:233] disabling docker service ...
	I0318 13:50:04.972133 1157887 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 13:50:04.995498 1157887 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 13:50:05.014901 1157887 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 13:50:05.158158 1157887 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 13:50:05.309791 1157887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 13:50:05.324965 1157887 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:50:05.346489 1157887 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 13:50:05.346595 1157887 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:50:05.358753 1157887 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 13:50:05.358823 1157887 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:50:05.374416 1157887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:50:05.394228 1157887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:50:05.406975 1157887 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 13:50:05.420201 1157887 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 13:50:05.432405 1157887 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 13:50:05.432479 1157887 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 13:50:05.449386 1157887 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 13:50:05.461081 1157887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:50:05.607102 1157887 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 13:50:05.776152 1157887 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 13:50:05.776267 1157887 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 13:50:05.782168 1157887 start.go:562] Will wait 60s for crictl version
	I0318 13:50:05.782247 1157887 ssh_runner.go:195] Run: which crictl
	I0318 13:50:05.787932 1157887 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 13:50:05.831304 1157887 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 13:50:05.831399 1157887 ssh_runner.go:195] Run: crio --version
	I0318 13:50:05.865410 1157887 ssh_runner.go:195] Run: crio --version
	I0318 13:50:05.908406 1157887 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 13:50:05.909651 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetIP
	I0318 13:50:05.912855 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:05.913213 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:05.913256 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:05.913470 1157887 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0318 13:50:05.918362 1157887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:50:05.933755 1157887 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-569210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.28.4 ClusterName:default-k8s-diff-port-569210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.3 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiratio
n:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 13:50:05.933926 1157887 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 13:50:05.934002 1157887 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:50:05.978920 1157887 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0318 13:50:05.978998 1157887 ssh_runner.go:195] Run: which lz4
	I0318 13:50:05.983751 1157887 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 13:50:05.988862 1157887 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 13:50:05.988895 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0318 13:50:03.363591 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:03.864049 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:04.363310 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:04.863306 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:05.363706 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:05.863618 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:06.364183 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:06.863776 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:07.363832 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:07.863261 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:04.337631 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .Start
	I0318 13:50:04.337838 1157263 main.go:141] libmachine: (embed-certs-173036) Ensuring networks are active...
	I0318 13:50:04.338615 1157263 main.go:141] libmachine: (embed-certs-173036) Ensuring network default is active
	I0318 13:50:04.338978 1157263 main.go:141] libmachine: (embed-certs-173036) Ensuring network mk-embed-certs-173036 is active
	I0318 13:50:04.339444 1157263 main.go:141] libmachine: (embed-certs-173036) Getting domain xml...
	I0318 13:50:04.340295 1157263 main.go:141] libmachine: (embed-certs-173036) Creating domain...
	I0318 13:50:05.616437 1157263 main.go:141] libmachine: (embed-certs-173036) Waiting to get IP...
	I0318 13:50:05.617646 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:05.618096 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:05.618168 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:05.618075 1158806 retry.go:31] will retry after 234.69885ms: waiting for machine to come up
	I0318 13:50:05.854749 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:05.855365 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:05.855401 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:05.855310 1158806 retry.go:31] will retry after 324.015594ms: waiting for machine to come up
	I0318 13:50:06.181178 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:06.182089 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:06.182123 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:06.182038 1158806 retry.go:31] will retry after 456.172304ms: waiting for machine to come up
	I0318 13:50:06.639827 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:06.640288 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:06.640349 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:06.640244 1158806 retry.go:31] will retry after 561.082549ms: waiting for machine to come up
	I0318 13:50:07.203208 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:07.203798 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:07.203825 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:07.203696 1158806 retry.go:31] will retry after 633.905437ms: waiting for machine to come up
	I0318 13:50:07.839205 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:07.839760 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:07.839792 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:07.839698 1158806 retry.go:31] will retry after 629.254629ms: waiting for machine to come up
	I0318 13:50:08.470625 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:08.471073 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:08.471105 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:08.471021 1158806 retry.go:31] will retry after 771.526268ms: waiting for machine to come up
	I0318 13:50:06.709604 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:09.208197 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:08.056220 1157887 crio.go:444] duration metric: took 2.072501191s to copy over tarball
	I0318 13:50:08.056361 1157887 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 13:50:10.763501 1157887 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.707101479s)
	I0318 13:50:10.763560 1157887 crio.go:451] duration metric: took 2.707303654s to extract the tarball
	I0318 13:50:10.763570 1157887 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 13:50:10.808643 1157887 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:50:10.860178 1157887 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 13:50:10.860218 1157887 cache_images.go:84] Images are preloaded, skipping loading
	I0318 13:50:10.860229 1157887 kubeadm.go:928] updating node { 192.168.61.3 8444 v1.28.4 crio true true} ...
	I0318 13:50:10.860381 1157887 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-569210 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-569210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 13:50:10.860455 1157887 ssh_runner.go:195] Run: crio config
	I0318 13:50:10.918077 1157887 cni.go:84] Creating CNI manager for ""
	I0318 13:50:10.918109 1157887 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:50:10.918124 1157887 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 13:50:10.918154 1157887 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.3 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-569210 NodeName:default-k8s-diff-port-569210 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 13:50:10.918362 1157887 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.3
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-569210"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 13:50:10.918457 1157887 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 13:50:10.930573 1157887 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 13:50:10.930639 1157887 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 13:50:10.941181 1157887 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I0318 13:50:10.960048 1157887 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 13:50:10.980367 1157887 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0318 13:50:11.001607 1157887 ssh_runner.go:195] Run: grep 192.168.61.3	control-plane.minikube.internal$ /etc/hosts
	I0318 13:50:11.006363 1157887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:50:11.020871 1157887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:50:11.164152 1157887 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:50:11.185025 1157887 certs.go:68] Setting up /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210 for IP: 192.168.61.3
	I0318 13:50:11.185060 1157887 certs.go:194] generating shared ca certs ...
	I0318 13:50:11.185096 1157887 certs.go:226] acquiring lock for ca certs: {Name:mka24dbce78b8549c3bcfe7842863cce2dcda207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:50:11.185277 1157887 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key
	I0318 13:50:11.185342 1157887 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key
	I0318 13:50:11.185356 1157887 certs.go:256] generating profile certs ...
	I0318 13:50:11.185464 1157887 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/client.key
	I0318 13:50:11.185541 1157887 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/apiserver.key.e15332a5
	I0318 13:50:11.185590 1157887 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/proxy-client.key
	I0318 13:50:11.185757 1157887 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem (1338 bytes)
	W0318 13:50:11.185799 1157887 certs.go:480] ignoring /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136_empty.pem, impossibly tiny 0 bytes
	I0318 13:50:11.185812 1157887 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 13:50:11.185841 1157887 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem (1078 bytes)
	I0318 13:50:11.185899 1157887 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem (1123 bytes)
	I0318 13:50:11.185945 1157887 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem (1679 bytes)
	I0318 13:50:11.185999 1157887 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:50:11.186853 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 13:50:11.221967 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 13:50:11.250180 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 13:50:11.287449 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 13:50:11.323521 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0318 13:50:11.360286 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 13:50:11.396947 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 13:50:11.426116 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 13:50:11.455183 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /usr/share/ca-certificates/11141362.pem (1708 bytes)
	I0318 13:50:11.483479 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 13:50:11.512975 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem --> /usr/share/ca-certificates/1114136.pem (1338 bytes)
	I0318 13:50:11.548393 1157887 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 13:50:11.569155 1157887 ssh_runner.go:195] Run: openssl version
	I0318 13:50:11.576084 1157887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1114136.pem && ln -fs /usr/share/ca-certificates/1114136.pem /etc/ssl/certs/1114136.pem"
	I0318 13:50:11.589110 1157887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1114136.pem
	I0318 13:50:11.594640 1157887 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:26 /usr/share/ca-certificates/1114136.pem
	I0318 13:50:11.594736 1157887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1114136.pem
	I0318 13:50:11.601473 1157887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1114136.pem /etc/ssl/certs/51391683.0"
	I0318 13:50:11.615874 1157887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11141362.pem && ln -fs /usr/share/ca-certificates/11141362.pem /etc/ssl/certs/11141362.pem"
	I0318 13:50:11.630380 1157887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11141362.pem
	I0318 13:50:11.635808 1157887 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:26 /usr/share/ca-certificates/11141362.pem
	I0318 13:50:11.635886 1157887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11141362.pem
	I0318 13:50:11.644465 1157887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11141362.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 13:50:11.661509 1157887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 13:50:08.364243 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:08.863539 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:09.364037 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:09.863621 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:10.363425 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:10.863422 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:11.363353 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:11.863485 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:12.363548 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:12.864070 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:09.243731 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:09.244146 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:09.244180 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:09.244104 1158806 retry.go:31] will retry after 1.160252016s: waiting for machine to come up
	I0318 13:50:10.405805 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:10.406270 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:10.406310 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:10.406201 1158806 retry.go:31] will retry after 1.625913099s: waiting for machine to come up
	I0318 13:50:12.033202 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:12.033674 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:12.033712 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:12.033589 1158806 retry.go:31] will retry after 1.835793865s: waiting for machine to come up
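	The kvm2 driver lines above poll libvirt for the new domain's DHCP lease and retry with a growing delay until an IP appears. A generic, hedged sketch of that retry shape (lookupIP is a placeholder, not the driver's real lease query):

// Sketch only: retry a condition with a growing, jittered delay, as in the retry.go lines above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func lookupIP() (string, error) {
	// Placeholder: a real implementation would ask libvirt for the domain's current DHCP lease.
	return "", errors.New("unable to find current IP address")
}

func main() {
	delay := time.Second
	for attempt := 1; attempt <= 10; attempt++ {
		ip, err := lookupIP()
		if err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("attempt %d: %v; will retry after %v\n", attempt, err, wait)
		time.Sleep(wait)
		delay = delay * 3 / 2 // grow the delay, roughly like the retries in the log
	}
	fmt.Println("gave up waiting for machine to come up")
}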
	I0318 13:50:11.211241 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:13.710211 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:11.675340 1157887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:50:11.938009 1157887 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:50:11.938089 1157887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:50:11.944766 1157887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
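	The commands above populate the guest's trust store in the standard OpenSSL hashed-directory layout: each CA is hashed with `openssl x509 -hash` and linked into /etc/ssl/certs as <subject-hash>.0 so OpenSSL can look it up by name. A hedged Go sketch of one such link (paths as in the log; the real run executes these steps over SSH):

// Sketch only: compute the OpenSSL subject hash of a CA and link it into /etc/ssl/certs/<hash>.0.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"

	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out))

	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // mirror `ln -fs`, which replaces an existing link
	if err := os.Symlink(cert, link); err != nil {
		log.Fatal(err)
	}
	fmt.Println("linked", cert, "->", link)
}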
	I0318 13:50:11.957959 1157887 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:50:11.963524 1157887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 13:50:11.971678 1157887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 13:50:11.978601 1157887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 13:50:11.985403 1157887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 13:50:11.992159 1157887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 13:50:11.998620 1157887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
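	`openssl x509 -checkend 86400` succeeds only if the certificate is still valid 86400 seconds (24 h) from now; each control-plane certificate is screened this way before being reused. A minimal Go equivalent (an illustration, not minikube's implementation):

// Sketch only: fail if the certificate expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 86400s")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}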
	I0318 13:50:12.005209 1157887 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-569210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-569210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.3 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:50:12.005300 1157887 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 13:50:12.005350 1157887 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:50:12.074518 1157887 cri.go:89] found id: ""
	I0318 13:50:12.074603 1157887 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 13:50:12.099031 1157887 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 13:50:12.099062 1157887 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 13:50:12.099070 1157887 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 13:50:12.099147 1157887 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 13:50:12.111133 1157887 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:50:12.112779 1157887 kubeconfig.go:125] found "default-k8s-diff-port-569210" server: "https://192.168.61.3:8444"
	I0318 13:50:12.116521 1157887 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 13:50:12.134902 1157887 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.3
	I0318 13:50:12.134964 1157887 kubeadm.go:1154] stopping kube-system containers ...
	I0318 13:50:12.135005 1157887 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 13:50:12.135086 1157887 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:50:12.190100 1157887 cri.go:89] found id: ""
	I0318 13:50:12.190182 1157887 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 13:50:12.211556 1157887 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:50:12.223095 1157887 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:50:12.223120 1157887 kubeadm.go:156] found existing configuration files:
	
	I0318 13:50:12.223173 1157887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0318 13:50:12.235709 1157887 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:50:12.235780 1157887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:50:12.248896 1157887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0318 13:50:12.260212 1157887 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:50:12.260285 1157887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:50:12.271424 1157887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0318 13:50:12.283083 1157887 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:50:12.283143 1157887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:50:12.294877 1157887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0318 13:50:12.305629 1157887 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:50:12.305692 1157887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:50:12.317395 1157887 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:50:12.328943 1157887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:12.471901 1157887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:13.400723 1157887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:13.601149 1157887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:13.677768 1157887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
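	Because existing configuration files were found, the restart path re-runs individual `kubeadm init` phases against the staged config instead of doing a full init: certs, kubeconfig, kubelet-start, control-plane, and local etcd, in that order. A hedged sketch of that sequence (sudo/SSH plumbing omitted; binary and config paths taken from the commands above):

// Sketch only: run the kubeadm init phases logged above, in order, against the staged config.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("/var/lib/minikube/binaries/v1.28.4/kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("phase %v failed: %v", p, err)
		}
	}
}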
	I0318 13:50:13.796413 1157887 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:50:13.796558 1157887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:14.297639 1157887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:14.797236 1157887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:14.885767 1157887 api_server.go:72] duration metric: took 1.089353166s to wait for apiserver process to appear ...
	I0318 13:50:14.885801 1157887 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:50:14.885827 1157887 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0318 13:50:14.886464 1157887 api_server.go:269] stopped: https://192.168.61.3:8444/healthz: Get "https://192.168.61.3:8444/healthz": dial tcp 192.168.61.3:8444: connect: connection refused
	I0318 13:50:15.386913 1157887 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0318 13:50:13.364111 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:13.863871 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:14.363958 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:14.863570 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:15.364185 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:15.863974 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:16.364010 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:16.863484 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:17.363832 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:17.864149 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:13.871003 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:13.871443 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:13.871475 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:13.871398 1158806 retry.go:31] will retry after 2.53403994s: waiting for machine to come up
	I0318 13:50:16.407271 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:16.407728 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:16.407775 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:16.407708 1158806 retry.go:31] will retry after 2.371916928s: waiting for machine to come up
	I0318 13:50:18.781468 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:18.781866 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:18.781898 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:18.781809 1158806 retry.go:31] will retry after 3.250042198s: waiting for machine to come up
	I0318 13:50:17.204788 1157887 api_server.go:279] https://192.168.61.3:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 13:50:17.204828 1157887 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 13:50:17.204848 1157887 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0318 13:50:17.235957 1157887 api_server.go:279] https://192.168.61.3:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 13:50:17.236000 1157887 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 13:50:17.386349 1157887 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0318 13:50:17.393185 1157887 api_server.go:279] https://192.168.61.3:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:50:17.393220 1157887 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:50:17.886583 1157887 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0318 13:50:17.892087 1157887 api_server.go:279] https://192.168.61.3:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:50:17.892122 1157887 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:50:18.386820 1157887 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0318 13:50:18.406609 1157887 api_server.go:279] https://192.168.61.3:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:50:18.406658 1157887 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:50:18.886458 1157887 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0318 13:50:18.896097 1157887 api_server.go:279] https://192.168.61.3:8444/healthz returned 200:
	ok
	I0318 13:50:18.905565 1157887 api_server.go:141] control plane version: v1.28.4
	I0318 13:50:18.905603 1157887 api_server.go:131] duration metric: took 4.019792975s to wait for apiserver health ...
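	The health wait above tolerates connection-refused, then 403 (the probe is anonymous, and /healthz is not open to anonymous users until the RBAC bootstrap completes), then 500 (the rbac/bootstrap-roles post-start hook still pending), and stops once /healthz answers 200 "ok". A minimal sketch of that poll (not the api_server.go code; certificate verification is skipped since the probe is anonymous):

// Sketch only: poll the apiserver /healthz endpoint until it returns 200 "ok".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.61.3:8444/healthz"
	for {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("not reachable yet:", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}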
	I0318 13:50:18.905615 1157887 cni.go:84] Creating CNI manager for ""
	I0318 13:50:18.905624 1157887 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:50:18.907258 1157887 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 13:50:15.711910 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:18.209648 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:18.909133 1157887 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 13:50:18.944457 1157887 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 13:50:18.973831 1157887 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 13:50:18.984400 1157887 system_pods.go:59] 8 kube-system pods found
	I0318 13:50:18.984436 1157887 system_pods.go:61] "coredns-5dd5756b68-hwsz5" [0a91f20c-3d3b-415c-b709-7898c606d830] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 13:50:18.984447 1157887 system_pods.go:61] "etcd-default-k8s-diff-port-569210" [64925324-9666-49ab-b849-ad9b7ce54891] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 13:50:18.984456 1157887 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-569210" [8409a63f-fbac-4bf9-b54b-5ac267a58206] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 13:50:18.984465 1157887 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-569210" [a2d7b983-c4aa-4c32-9391-babe90b0f102] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 13:50:18.984470 1157887 system_pods.go:61] "kube-proxy-v59ks" [39a4e73c-319d-4093-8781-ca7a1a48e005] Running
	I0318 13:50:18.984477 1157887 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-569210" [f24baa89-e33d-42ca-8f83-17c76a4cedcb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 13:50:18.984488 1157887 system_pods.go:61] "metrics-server-57f55c9bc5-2sb4m" [f3e533a7-9666-4b79-b9a9-26222422f242] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:50:18.984496 1157887 system_pods.go:61] "storage-provisioner" [864d0bb2-cbca-41ae-b9ec-89aced62dd08] Running
	I0318 13:50:18.984505 1157887 system_pods.go:74] duration metric: took 10.646849ms to wait for pod list to return data ...
	I0318 13:50:18.984519 1157887 node_conditions.go:102] verifying NodePressure condition ...
	I0318 13:50:18.989173 1157887 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:50:18.989201 1157887 node_conditions.go:123] node cpu capacity is 2
	I0318 13:50:18.989213 1157887 node_conditions.go:105] duration metric: took 4.685756ms to run NodePressure ...
	I0318 13:50:18.989231 1157887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:19.229166 1157887 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 13:50:19.237757 1157887 kubeadm.go:733] kubelet initialised
	I0318 13:50:19.237787 1157887 kubeadm.go:734] duration metric: took 8.591388ms waiting for restarted kubelet to initialise ...
	I0318 13:50:19.237797 1157887 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:50:19.243530 1157887 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-hwsz5" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:19.253925 1157887 pod_ready.go:97] node "default-k8s-diff-port-569210" hosting pod "coredns-5dd5756b68-hwsz5" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-569210" has status "Ready":"False"
	I0318 13:50:19.253957 1157887 pod_ready.go:81] duration metric: took 10.403116ms for pod "coredns-5dd5756b68-hwsz5" in "kube-system" namespace to be "Ready" ...
	E0318 13:50:19.253969 1157887 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-569210" hosting pod "coredns-5dd5756b68-hwsz5" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-569210" has status "Ready":"False"
	I0318 13:50:19.253978 1157887 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:19.265167 1157887 pod_ready.go:97] node "default-k8s-diff-port-569210" hosting pod "etcd-default-k8s-diff-port-569210" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-569210" has status "Ready":"False"
	I0318 13:50:19.265189 1157887 pod_ready.go:81] duration metric: took 11.202545ms for pod "etcd-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	E0318 13:50:19.265200 1157887 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-569210" hosting pod "etcd-default-k8s-diff-port-569210" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-569210" has status "Ready":"False"
	I0318 13:50:19.265206 1157887 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:19.273558 1157887 pod_ready.go:97] node "default-k8s-diff-port-569210" hosting pod "kube-apiserver-default-k8s-diff-port-569210" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-569210" has status "Ready":"False"
	I0318 13:50:19.273589 1157887 pod_ready.go:81] duration metric: took 8.37478ms for pod "kube-apiserver-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	E0318 13:50:19.273603 1157887 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-569210" hosting pod "kube-apiserver-default-k8s-diff-port-569210" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-569210" has status "Ready":"False"
	I0318 13:50:19.273615 1157887 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:21.280970 1157887 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"False"
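	The pod_ready lines poll each system-critical pod and inspect its PodReady condition; a node that is itself NotReady short-circuits the wait, as the "skipping!" entries above show. A hedged client-go sketch of the underlying check (the kubeconfig path is a placeholder; the pod name is taken from the log):

// Sketch only: fetch a pod and report whether its PodReady condition is True.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
		"kube-controller-manager-default-k8s-diff-port-569210", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	ready := false
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			ready = c.Status == corev1.ConditionTrue
		}
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, ready)
}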
	I0318 13:50:18.363366 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:18.863782 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:19.363987 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:19.863437 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:20.364050 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:20.863961 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:21.364126 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:21.863264 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:22.363519 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:22.863814 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:22.033540 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:22.034056 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:22.034084 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:22.034001 1158806 retry.go:31] will retry after 5.297432528s: waiting for machine to come up
	I0318 13:50:20.708189 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:22.708573 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:24.708632 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:23.281625 1157887 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:25.780754 1157887 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:23.364019 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:23.864134 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:24.363510 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:24.863263 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:25.364027 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:25.863203 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:26.364219 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:26.863262 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:27.363889 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:27.864113 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:27.335390 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.335875 1157263 main.go:141] libmachine: (embed-certs-173036) Found IP for machine: 192.168.50.191
	I0318 13:50:27.335908 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has current primary IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.335918 1157263 main.go:141] libmachine: (embed-certs-173036) Reserving static IP address...
	I0318 13:50:27.336311 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "embed-certs-173036", mac: "52:54:00:e1:4f:b1", ip: "192.168.50.191"} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:27.336360 1157263 main.go:141] libmachine: (embed-certs-173036) Reserved static IP address: 192.168.50.191
	I0318 13:50:27.336380 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | skip adding static IP to network mk-embed-certs-173036 - found existing host DHCP lease matching {name: "embed-certs-173036", mac: "52:54:00:e1:4f:b1", ip: "192.168.50.191"}
	I0318 13:50:27.336394 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | Getting to WaitForSSH function...
	I0318 13:50:27.336406 1157263 main.go:141] libmachine: (embed-certs-173036) Waiting for SSH to be available...
	I0318 13:50:27.338627 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.338948 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:27.338983 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.339087 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | Using SSH client type: external
	I0318 13:50:27.339177 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | Using SSH private key: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa (-rw-------)
	I0318 13:50:27.339212 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.191 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 13:50:27.339227 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | About to run SSH command:
	I0318 13:50:27.339244 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | exit 0
	I0318 13:50:27.468468 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | SSH cmd err, output: <nil>: 
	I0318 13:50:27.468936 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetConfigRaw
	I0318 13:50:27.469699 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetIP
	I0318 13:50:27.472098 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.472422 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:27.472446 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.472714 1157263 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036/config.json ...
	I0318 13:50:27.472955 1157263 machine.go:94] provisionDockerMachine start ...
	I0318 13:50:27.472982 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:50:27.473196 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:27.475516 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.475808 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:27.475831 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.476041 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:27.476252 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:27.476414 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:27.476537 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:27.476719 1157263 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:27.476899 1157263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.191 22 <nil> <nil>}
	I0318 13:50:27.476909 1157263 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 13:50:27.589501 1157263 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 13:50:27.589532 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetMachineName
	I0318 13:50:27.589828 1157263 buildroot.go:166] provisioning hostname "embed-certs-173036"
	I0318 13:50:27.589862 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetMachineName
	I0318 13:50:27.590068 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:27.592650 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.593005 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:27.593035 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.593186 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:27.593375 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:27.593546 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:27.593713 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:27.593883 1157263 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:27.594058 1157263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.191 22 <nil> <nil>}
	I0318 13:50:27.594073 1157263 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-173036 && echo "embed-certs-173036" | sudo tee /etc/hostname
	I0318 13:50:27.730406 1157263 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-173036
	
	I0318 13:50:27.730437 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:27.733420 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.733857 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:27.733890 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.734058 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:27.734271 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:27.734475 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:27.734609 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:27.734764 1157263 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:27.734943 1157263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.191 22 <nil> <nil>}
	I0318 13:50:27.734960 1157263 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-173036' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-173036/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-173036' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 13:50:27.860625 1157263 main.go:141] libmachine: SSH cmd err, output: <nil>: 
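	Provisioning runs small shell snippets over SSH: set the transient and persistent hostname, then pin 127.0.1.1 in /etc/hosts, as the commands above show. A hedged sketch of one such call using golang.org/x/crypto/ssh (host, user, key path, and command are taken from the log; host-key checking is disabled, as in the logged ssh invocation):

// Sketch only: run one provisioning command on the guest over SSH.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
	}
	client, err := ssh.Dial("tcp", "192.168.50.191:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput(`sudo hostname embed-certs-173036 && echo "embed-certs-173036" | sudo tee /etc/hostname`)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", out)
}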
	I0318 13:50:27.860679 1157263 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18429-1106816/.minikube CaCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18429-1106816/.minikube}
	I0318 13:50:27.860777 1157263 buildroot.go:174] setting up certificates
	I0318 13:50:27.860790 1157263 provision.go:84] configureAuth start
	I0318 13:50:27.860810 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetMachineName
	I0318 13:50:27.861112 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetIP
	I0318 13:50:27.864215 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.864667 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:27.864703 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.864956 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:27.867381 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.867690 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:27.867730 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.867893 1157263 provision.go:143] copyHostCerts
	I0318 13:50:27.867963 1157263 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem, removing ...
	I0318 13:50:27.867977 1157263 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 13:50:27.868048 1157263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem (1078 bytes)
	I0318 13:50:27.868183 1157263 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem, removing ...
	I0318 13:50:27.868198 1157263 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 13:50:27.868231 1157263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem (1123 bytes)
	I0318 13:50:27.868307 1157263 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem, removing ...
	I0318 13:50:27.868318 1157263 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 13:50:27.868372 1157263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem (1679 bytes)
	I0318 13:50:27.868451 1157263 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem org=jenkins.embed-certs-173036 san=[127.0.0.1 192.168.50.191 embed-certs-173036 localhost minikube]
	I0318 13:50:28.001671 1157263 provision.go:177] copyRemoteCerts
	I0318 13:50:28.001742 1157263 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 13:50:28.001773 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:28.004389 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.004746 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:28.004777 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.005021 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:28.005214 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:28.005393 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:28.005558 1157263 sshutil.go:53] new ssh client: &{IP:192.168.50.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa Username:docker}
	I0318 13:50:28.095871 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0318 13:50:28.127356 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 13:50:28.157301 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 13:50:28.186185 1157263 provision.go:87] duration metric: took 325.374328ms to configureAuth
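
	(Annotation: configureAuth regenerates the machine's server certificate with SANs covering 127.0.0.1, the VM IP, the machine name, localhost and minikube, then scps ca.pem, server.pem and server-key.pem to /etc/docker on the guest. A small stdlib sketch for inspecting the SANs of such a PEM file; the path is just an example.)

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // Example path; point this at any server.pem produced by the provisioner.
        data, err := os.ReadFile("server.pem")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        fmt.Println("DNS SANs:", cert.DNSNames)
        fmt.Println("IP SANs: ", cert.IPAddresses)
        fmt.Println("NotAfter:", cert.NotAfter)
    }
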
	I0318 13:50:28.186217 1157263 buildroot.go:189] setting minikube options for container-runtime
	I0318 13:50:28.186424 1157263 config.go:182] Loaded profile config "embed-certs-173036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:50:28.186529 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:28.189135 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.189532 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:28.189564 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.189719 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:28.189933 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:28.190127 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:28.190335 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:28.190492 1157263 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:28.190654 1157263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.191 22 <nil> <nil>}
	I0318 13:50:28.190668 1157263 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 13:50:28.473836 1157263 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 13:50:28.473875 1157263 machine.go:97] duration metric: took 1.000902962s to provisionDockerMachine
	I0318 13:50:28.473887 1157263 start.go:293] postStartSetup for "embed-certs-173036" (driver="kvm2")
	I0318 13:50:28.473898 1157263 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 13:50:28.473914 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:50:28.474270 1157263 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 13:50:28.474307 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:28.477165 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.477571 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:28.477619 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.477756 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:28.477966 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:28.478135 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:28.478296 1157263 sshutil.go:53] new ssh client: &{IP:192.168.50.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa Username:docker}
	I0318 13:50:28.568988 1157263 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 13:50:28.573759 1157263 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 13:50:28.573782 1157263 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/addons for local assets ...
	I0318 13:50:28.573839 1157263 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/files for local assets ...
	I0318 13:50:28.573909 1157263 filesync.go:149] local asset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> 11141362.pem in /etc/ssl/certs
	I0318 13:50:28.573989 1157263 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 13:50:28.584049 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:50:28.610999 1157263 start.go:296] duration metric: took 137.09711ms for postStartSetup
	I0318 13:50:28.611043 1157263 fix.go:56] duration metric: took 24.300980779s for fixHost
	I0318 13:50:28.611066 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:28.614123 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.614582 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:28.614628 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.614795 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:28.614999 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:28.615124 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:28.615255 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:28.615427 1157263 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:28.615617 1157263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.191 22 <nil> <nil>}
	I0318 13:50:28.615631 1157263 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 13:50:28.729856 1157263 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710769828.678644307
	
	I0318 13:50:28.729894 1157263 fix.go:216] guest clock: 1710769828.678644307
	I0318 13:50:28.729913 1157263 fix.go:229] Guest: 2024-03-18 13:50:28.678644307 +0000 UTC Remote: 2024-03-18 13:50:28.611048079 +0000 UTC m=+364.845703282 (delta=67.596228ms)
	I0318 13:50:28.729932 1157263 fix.go:200] guest clock delta is within tolerance: 67.596228ms
	I0318 13:50:28.729937 1157263 start.go:83] releasing machines lock for "embed-certs-173036", held for 24.419922158s
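
	(Annotation: fix.go reads the guest clock over SSH with `date +%s.%N`, compares it to the host wall clock, and only resyncs the guest if the skew exceeds a tolerance; here the 67.6ms delta is accepted. A toy version of that comparison; the 2s tolerance is an assumption for illustration, not minikube's constant.)

    package main

    import (
        "fmt"
        "time"
    )

    // withinTolerance reports whether the guest/host clock skew is acceptable.
    func withinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tolerance
    }

    func main() {
        host := time.Now()
        guest := host.Add(67596228 * time.Nanosecond) // delta observed in the log
        d, ok := withinTolerance(guest, host, 2*time.Second)
        fmt.Printf("delta=%v within tolerance: %v\n", d, ok)
    }
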
	I0318 13:50:28.729958 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:50:28.730241 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetIP
	I0318 13:50:28.732831 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.733196 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:28.733249 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.733406 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:50:28.733875 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:50:28.734066 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:50:28.734172 1157263 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 13:50:28.734248 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:28.734330 1157263 ssh_runner.go:195] Run: cat /version.json
	I0318 13:50:28.734376 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:28.737014 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.737200 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.737444 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:28.737470 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.737611 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:28.737694 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:28.737721 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.737918 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:28.737926 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:28.738117 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:28.738195 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:28.738292 1157263 sshutil.go:53] new ssh client: &{IP:192.168.50.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa Username:docker}
	I0318 13:50:28.738357 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:28.738466 1157263 sshutil.go:53] new ssh client: &{IP:192.168.50.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa Username:docker}
	I0318 13:50:26.708824 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:29.209974 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:28.818695 1157263 ssh_runner.go:195] Run: systemctl --version
	I0318 13:50:28.844173 1157263 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 13:50:28.995017 1157263 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 13:50:29.002150 1157263 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 13:50:29.002251 1157263 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 13:50:29.021165 1157263 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 13:50:29.021200 1157263 start.go:494] detecting cgroup driver to use...
	I0318 13:50:29.021286 1157263 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 13:50:29.039060 1157263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:50:29.053451 1157263 docker.go:217] disabling cri-docker service (if available) ...
	I0318 13:50:29.053521 1157263 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 13:50:29.069721 1157263 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 13:50:29.085285 1157263 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 13:50:29.201273 1157263 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 13:50:29.356314 1157263 docker.go:233] disabling docker service ...
	I0318 13:50:29.356406 1157263 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 13:50:29.374159 1157263 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 13:50:29.390280 1157263 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 13:50:29.542126 1157263 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 13:50:29.692068 1157263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 13:50:29.707760 1157263 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:50:29.735684 1157263 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 13:50:29.735753 1157263 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:50:29.751291 1157263 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 13:50:29.751365 1157263 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:50:29.763159 1157263 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:50:29.774837 1157263 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:50:29.787142 1157263 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 13:50:29.799773 1157263 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 13:50:29.810620 1157263 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 13:50:29.810691 1157263 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 13:50:29.826816 1157263 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 13:50:29.842059 1157263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:50:29.985531 1157263 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 13:50:30.147122 1157263 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 13:50:30.147191 1157263 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 13:50:30.152406 1157263 start.go:562] Will wait 60s for crictl version
	I0318 13:50:30.152468 1157263 ssh_runner.go:195] Run: which crictl
	I0318 13:50:30.157019 1157263 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 13:50:30.199810 1157263 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 13:50:30.199889 1157263 ssh_runner.go:195] Run: crio --version
	I0318 13:50:30.232028 1157263 ssh_runner.go:195] Run: crio --version
	I0318 13:50:30.270484 1157263 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
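
	(Annotation: before preparing Kubernetes, the runner disables cri-docker and docker, writes /etc/crictl.yaml, rewrites /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and switch the cgroup manager to cgroupfs, loads br_netfilter, then restarts crio and confirms the runtime with `crictl version`. The sed invocations above can be reproduced from Go roughly like this; the helper name is illustrative.)

    package main

    import "fmt"

    // crioSedCmds builds the same in-place edits the log shows being run over SSH.
    func crioSedCmds(pauseImage, cgroupManager string) []string {
        conf := "/etc/crio/crio.conf.d/02-crio.conf"
        return []string{
            fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
            fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
            fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
            fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
        }
    }

    func main() {
        for _, c := range crioSedCmds("registry.k8s.io/pause:3.9", "cgroupfs") {
            fmt.Println(c)
        }
    }
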
	I0318 13:50:27.781584 1157887 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:29.795969 1157887 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:31.282868 1157887 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:31.282899 1157887 pod_ready.go:81] duration metric: took 12.009270978s for pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:31.282910 1157887 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-v59ks" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:31.290886 1157887 pod_ready.go:92] pod "kube-proxy-v59ks" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:31.290917 1157887 pod_ready.go:81] duration metric: took 7.99936ms for pod "kube-proxy-v59ks" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:31.290931 1157887 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:31.300197 1157887 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:31.300235 1157887 pod_ready.go:81] duration metric: took 9.294232ms for pod "kube-scheduler-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:31.300254 1157887 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace to be "Ready" ...
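
	(Annotation: pod_ready polls the API server until the named pod reports condition Ready=True or the 4m timeout elapses; the repeated "Ready":"False" lines for metrics-server are that loop not converging. A rough client-go equivalent, assuming the k8s.io/client-go module and a reachable kubeconfig; the function name and the 2s poll interval are illustrative.)

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls until the named pod has condition Ready=True.
    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        if err := waitPodReady(cs, "kube-system", "metrics-server-57f55c9bc5-2sb4m", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
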
	I0318 13:50:28.364069 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:28.863405 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:29.363996 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:29.863574 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:30.363749 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:30.863564 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:31.363250 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:31.863320 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:32.363894 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:32.864166 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:30.271939 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetIP
	I0318 13:50:30.275084 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:30.275682 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:30.275728 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:30.276045 1157263 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0318 13:50:30.282421 1157263 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:50:30.299013 1157263 kubeadm.go:877] updating cluster {Name:embed-certs-173036 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.28.4 ClusterName:embed-certs-173036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.191 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 13:50:30.299280 1157263 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 13:50:30.299364 1157263 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:50:30.349617 1157263 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0318 13:50:30.349720 1157263 ssh_runner.go:195] Run: which lz4
	I0318 13:50:30.354659 1157263 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0318 13:50:30.359861 1157263 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 13:50:30.359903 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0318 13:50:32.362707 1157263 crio.go:444] duration metric: took 2.008087158s to copy over tarball
	I0318 13:50:32.362796 1157263 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 13:50:31.210766 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:33.709661 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:33.308081 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:35.309291 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:33.363425 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:33.864021 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:34.363963 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:34.864011 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:35.364122 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:35.863559 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:36.364154 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:36.863814 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:37.364232 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:37.863934 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:35.265803 1157263 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.902966349s)
	I0318 13:50:35.265827 1157263 crio.go:451] duration metric: took 2.903086385s to extract the tarball
	I0318 13:50:35.265835 1157263 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 13:50:35.313875 1157263 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:50:35.378361 1157263 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 13:50:35.378392 1157263 cache_images.go:84] Images are preloaded, skipping loading
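
	(Annotation: the preload path first lists images with `crictl images --output json`; since kube-apiserver:v1.28.4 was missing it scped the ~458MB preload tarball and untarred it into /var, after which the same listing finds everything and image loading is skipped. A sketch of that "are the images already there" check; the JSON field names follow the CRI ListImagesResponse and should be treated as an approximation.)

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
        "strings"
    )

    type crictlImages struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    // hasImage reports whether any image listed by crictl carries the given tag.
    func hasImage(tag string) (bool, error) {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            return false, err
        }
        var imgs crictlImages
        if err := json.Unmarshal(out, &imgs); err != nil {
            return false, err
        }
        for _, img := range imgs.Images {
            for _, t := range img.RepoTags {
                if strings.Contains(t, tag) {
                    return true, nil
                }
            }
        }
        return false, nil
    }

    func main() {
        ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.28.4")
        fmt.Println(ok, err)
    }
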
	I0318 13:50:35.378408 1157263 kubeadm.go:928] updating node { 192.168.50.191 8443 v1.28.4 crio true true} ...
	I0318 13:50:35.378551 1157263 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-173036 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.191
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-173036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
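
	(Annotation: the kubelet drop-in above pins the version-specific binary under /var/lib/minikube/binaries, the node IP, the hostname override and the kubeconfig paths. Rendering the same unit with text/template, with the values taken from the log; the template here is a simplification of minikube's own.)

    package main

    import (
        "os"
        "text/template"
    )

    const unit = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Name}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

    [Install]
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(unit))
        _ = t.Execute(os.Stdout, map[string]string{
            "Version": "v1.28.4",
            "Name":    "embed-certs-173036",
            "IP":      "192.168.50.191",
        })
    }
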
	I0318 13:50:35.378648 1157263 ssh_runner.go:195] Run: crio config
	I0318 13:50:35.443472 1157263 cni.go:84] Creating CNI manager for ""
	I0318 13:50:35.443501 1157263 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:50:35.443520 1157263 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 13:50:35.443551 1157263 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.191 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-173036 NodeName:embed-certs-173036 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.191"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.191 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 13:50:35.443730 1157263 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.191
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-173036"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.191
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.191"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 13:50:35.443809 1157263 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 13:50:35.455284 1157263 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 13:50:35.455352 1157263 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 13:50:35.465886 1157263 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0318 13:50:35.487345 1157263 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 13:50:35.507361 1157263 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0318 13:50:35.528055 1157263 ssh_runner.go:195] Run: grep 192.168.50.191	control-plane.minikube.internal$ /etc/hosts
	I0318 13:50:35.533287 1157263 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.191	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:50:35.548295 1157263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:50:35.684165 1157263 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:50:35.703884 1157263 certs.go:68] Setting up /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036 for IP: 192.168.50.191
	I0318 13:50:35.703910 1157263 certs.go:194] generating shared ca certs ...
	I0318 13:50:35.703927 1157263 certs.go:226] acquiring lock for ca certs: {Name:mka24dbce78b8549c3bcfe7842863cce2dcda207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:50:35.704117 1157263 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key
	I0318 13:50:35.704186 1157263 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key
	I0318 13:50:35.704200 1157263 certs.go:256] generating profile certs ...
	I0318 13:50:35.704292 1157263 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036/client.key
	I0318 13:50:35.704406 1157263 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036/apiserver.key.527b6b30
	I0318 13:50:35.704472 1157263 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036/proxy-client.key
	I0318 13:50:35.704637 1157263 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem (1338 bytes)
	W0318 13:50:35.704680 1157263 certs.go:480] ignoring /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136_empty.pem, impossibly tiny 0 bytes
	I0318 13:50:35.704694 1157263 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 13:50:35.704729 1157263 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem (1078 bytes)
	I0318 13:50:35.704763 1157263 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem (1123 bytes)
	I0318 13:50:35.704796 1157263 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem (1679 bytes)
	I0318 13:50:35.704857 1157263 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:50:35.705836 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 13:50:35.768912 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 13:50:35.830564 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 13:50:35.877813 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 13:50:35.916756 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0318 13:50:35.948397 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 13:50:35.980450 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 13:50:36.009626 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 13:50:36.040155 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 13:50:36.068885 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem --> /usr/share/ca-certificates/1114136.pem (1338 bytes)
	I0318 13:50:36.098638 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /usr/share/ca-certificates/11141362.pem (1708 bytes)
	I0318 13:50:36.128423 1157263 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 13:50:36.149584 1157263 ssh_runner.go:195] Run: openssl version
	I0318 13:50:36.156347 1157263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 13:50:36.169729 1157263 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:50:36.175367 1157263 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:50:36.175438 1157263 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:50:36.181995 1157263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 13:50:36.193987 1157263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1114136.pem && ln -fs /usr/share/ca-certificates/1114136.pem /etc/ssl/certs/1114136.pem"
	I0318 13:50:36.206444 1157263 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1114136.pem
	I0318 13:50:36.212355 1157263 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:26 /usr/share/ca-certificates/1114136.pem
	I0318 13:50:36.212442 1157263 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1114136.pem
	I0318 13:50:36.219042 1157263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1114136.pem /etc/ssl/certs/51391683.0"
	I0318 13:50:36.231882 1157263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11141362.pem && ln -fs /usr/share/ca-certificates/11141362.pem /etc/ssl/certs/11141362.pem"
	I0318 13:50:36.244590 1157263 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11141362.pem
	I0318 13:50:36.250443 1157263 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:26 /usr/share/ca-certificates/11141362.pem
	I0318 13:50:36.250511 1157263 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11141362.pem
	I0318 13:50:36.257713 1157263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11141362.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 13:50:36.271026 1157263 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:50:36.276902 1157263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 13:50:36.285465 1157263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 13:50:36.294274 1157263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 13:50:36.302415 1157263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 13:50:36.310867 1157263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 13:50:36.318931 1157263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
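
	(Annotation: the openssl runs above do two things: install hash-named symlinks under /etc/ssl/certs (`openssl x509 -hash`) so the CA bundles are trusted system-wide, and verify with `-checkend 86400` that none of the control-plane certificates expires within the next 24 hours. The expiry check in plain Go; the path is one of the files tested above and is only an example.)

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires before
    // now+d, i.e. the same condition `openssl x509 -checkend` tests.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return cert.NotAfter.Before(time.Now().Add(d)), nil
    }

    func main() {
        expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println(expiring, err)
    }
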
	I0318 13:50:36.327627 1157263 kubeadm.go:391] StartCluster: {Name:embed-certs-173036 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28
.4 ClusterName:embed-certs-173036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.191 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:50:36.327781 1157263 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 13:50:36.327843 1157263 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:50:36.376644 1157263 cri.go:89] found id: ""
	I0318 13:50:36.376741 1157263 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 13:50:36.389506 1157263 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 13:50:36.389528 1157263 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 13:50:36.389533 1157263 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 13:50:36.389640 1157263 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 13:50:36.401386 1157263 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:50:36.402631 1157263 kubeconfig.go:125] found "embed-certs-173036" server: "https://192.168.50.191:8443"
	I0318 13:50:36.404833 1157263 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 13:50:36.416975 1157263 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.191
	I0318 13:50:36.417026 1157263 kubeadm.go:1154] stopping kube-system containers ...
	I0318 13:50:36.417041 1157263 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 13:50:36.417106 1157263 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:50:36.458072 1157263 cri.go:89] found id: ""
	I0318 13:50:36.458162 1157263 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 13:50:36.476557 1157263 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:50:36.487765 1157263 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:50:36.487791 1157263 kubeadm.go:156] found existing configuration files:
	
	I0318 13:50:36.487857 1157263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:50:36.498903 1157263 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:50:36.498982 1157263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:50:36.510205 1157263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:50:36.520423 1157263 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:50:36.520476 1157263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:50:36.531864 1157263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:50:36.542058 1157263 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:50:36.542131 1157263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:50:36.552807 1157263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:50:36.562840 1157263 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:50:36.562915 1157263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:50:36.573581 1157263 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:50:36.583760 1157263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:36.719884 1157263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:37.681007 1157263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:37.914386 1157263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:37.993967 1157263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
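
	(Annotation: because the guest has the rendered kubeadm.yaml but none of the /etc/kubernetes/*.conf files, restartPrimaryControlPlane regenerates the control plane phase by phase instead of a full `kubeadm init`: certs, kubeconfigs, kubelet-start, the static control-plane manifests, and a local etcd. Roughly equivalent to the sequence below; the real code runs these over SSH with the version-specific PATH, so this local sketch is only an approximation.)

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, p := range phases {
            args := append([]string{"init", "phase"}, p...)
            args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
            out, err := exec.Command("kubeadm", args...).CombinedOutput()
            fmt.Printf("kubeadm %v: err=%v\n%s", args, err, out)
        }
    }
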
	I0318 13:50:38.101144 1157263 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:50:38.101261 1157263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:38.602138 1157263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:35.711725 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:38.207993 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:37.807508 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:39.809153 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:38.363994 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:38.863278 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:39.363665 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:39.863948 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:40.364081 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:40.864124 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:41.363964 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:41.863593 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:42.363750 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:42.864002 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:39.102040 1157263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:39.212769 1157263 api_server.go:72] duration metric: took 1.111626123s to wait for apiserver process to appear ...
	I0318 13:50:39.212807 1157263 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:50:39.212840 1157263 api_server.go:253] Checking apiserver healthz at https://192.168.50.191:8443/healthz ...
	I0318 13:50:39.213446 1157263 api_server.go:269] stopped: https://192.168.50.191:8443/healthz: Get "https://192.168.50.191:8443/healthz": dial tcp 192.168.50.191:8443: connect: connection refused
	I0318 13:50:39.713482 1157263 api_server.go:253] Checking apiserver healthz at https://192.168.50.191:8443/healthz ...
	I0318 13:50:42.646306 1157263 api_server.go:279] https://192.168.50.191:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 13:50:42.646352 1157263 api_server.go:103] status: https://192.168.50.191:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 13:50:42.646370 1157263 api_server.go:253] Checking apiserver healthz at https://192.168.50.191:8443/healthz ...
	I0318 13:50:42.691920 1157263 api_server.go:279] https://192.168.50.191:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 13:50:42.691953 1157263 api_server.go:103] status: https://192.168.50.191:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 13:50:42.713082 1157263 api_server.go:253] Checking apiserver healthz at https://192.168.50.191:8443/healthz ...
	I0318 13:50:42.770065 1157263 api_server.go:279] https://192.168.50.191:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:50:42.770101 1157263 api_server.go:103] status: https://192.168.50.191:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:50:43.213524 1157263 api_server.go:253] Checking apiserver healthz at https://192.168.50.191:8443/healthz ...
	I0318 13:50:43.224669 1157263 api_server.go:279] https://192.168.50.191:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:50:43.224710 1157263 api_server.go:103] status: https://192.168.50.191:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:50:43.712987 1157263 api_server.go:253] Checking apiserver healthz at https://192.168.50.191:8443/healthz ...
	I0318 13:50:43.718490 1157263 api_server.go:279] https://192.168.50.191:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:50:43.718533 1157263 api_server.go:103] status: https://192.168.50.191:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:50:44.213026 1157263 api_server.go:253] Checking apiserver healthz at https://192.168.50.191:8443/healthz ...
	I0318 13:50:44.217876 1157263 api_server.go:279] https://192.168.50.191:8443/healthz returned 200:
	ok
	I0318 13:50:44.225562 1157263 api_server.go:141] control plane version: v1.28.4
	I0318 13:50:44.225588 1157263 api_server.go:131] duration metric: took 5.012774227s to wait for apiserver health ...
	I0318 13:50:44.225610 1157263 cni.go:84] Creating CNI manager for ""
	I0318 13:50:44.225618 1157263 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:50:44.227565 1157263 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 13:50:40.210029 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:42.210435 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:44.710674 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:41.811414 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:43.818645 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:46.308757 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:43.364189 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:43.863868 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:44.363454 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:44.863940 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:45.363913 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:45.863288 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:46.363884 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:46.863361 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:47.363383 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:47.864064 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:44.229055 1157263 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 13:50:44.260389 1157263 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 13:50:44.310001 1157263 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 13:50:44.327281 1157263 system_pods.go:59] 8 kube-system pods found
	I0318 13:50:44.327330 1157263 system_pods.go:61] "coredns-5dd5756b68-zsfvm" [1404c3fe-6538-4aaf-80f5-599275240731] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 13:50:44.327342 1157263 system_pods.go:61] "etcd-embed-certs-173036" [254a577c-bd3b-4645-9c92-1479b0c6d0c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 13:50:44.327354 1157263 system_pods.go:61] "kube-apiserver-embed-certs-173036" [5a738280-05ba-413e-a288-4c4d07ddbd7d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 13:50:44.327362 1157263 system_pods.go:61] "kube-controller-manager-embed-certs-173036" [f48cfb7f-1efe-4941-b328-2358c7a5cced] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 13:50:44.327369 1157263 system_pods.go:61] "kube-proxy-xqf68" [969de4e5-fc60-4d46-b336-49f22a9b6c38] Running
	I0318 13:50:44.327376 1157263 system_pods.go:61] "kube-scheduler-embed-certs-173036" [e0579c16-de3e-4915-9ed2-f69b53f6f884] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 13:50:44.327385 1157263 system_pods.go:61] "metrics-server-57f55c9bc5-5cv2z" [85649bfb-f91f-4bfe-9356-d540ac3d6a68] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:50:44.327392 1157263 system_pods.go:61] "storage-provisioner" [0c1ec131-0f6c-4e01-aaec-5011f1a4fe75] Running
	I0318 13:50:44.327410 1157263 system_pods.go:74] duration metric: took 17.376754ms to wait for pod list to return data ...
	I0318 13:50:44.327423 1157263 node_conditions.go:102] verifying NodePressure condition ...
	I0318 13:50:44.332965 1157263 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:50:44.332997 1157263 node_conditions.go:123] node cpu capacity is 2
	I0318 13:50:44.333008 1157263 node_conditions.go:105] duration metric: took 5.580934ms to run NodePressure ...
	I0318 13:50:44.333027 1157263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:44.573923 1157263 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 13:50:44.578504 1157263 kubeadm.go:733] kubelet initialised
	I0318 13:50:44.578526 1157263 kubeadm.go:734] duration metric: took 4.577181ms waiting for restarted kubelet to initialise ...
	I0318 13:50:44.578534 1157263 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:50:44.584361 1157263 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-zsfvm" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:44.591714 1157263 pod_ready.go:97] node "embed-certs-173036" hosting pod "coredns-5dd5756b68-zsfvm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-173036" has status "Ready":"False"
	I0318 13:50:44.591739 1157263 pod_ready.go:81] duration metric: took 7.35191ms for pod "coredns-5dd5756b68-zsfvm" in "kube-system" namespace to be "Ready" ...
	E0318 13:50:44.591746 1157263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-173036" hosting pod "coredns-5dd5756b68-zsfvm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-173036" has status "Ready":"False"
	I0318 13:50:44.591753 1157263 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:44.597618 1157263 pod_ready.go:97] node "embed-certs-173036" hosting pod "etcd-embed-certs-173036" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-173036" has status "Ready":"False"
	I0318 13:50:44.597641 1157263 pod_ready.go:81] duration metric: took 5.880276ms for pod "etcd-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	E0318 13:50:44.597649 1157263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-173036" hosting pod "etcd-embed-certs-173036" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-173036" has status "Ready":"False"
	I0318 13:50:44.597655 1157263 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:44.604124 1157263 pod_ready.go:97] node "embed-certs-173036" hosting pod "kube-apiserver-embed-certs-173036" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-173036" has status "Ready":"False"
	I0318 13:50:44.604148 1157263 pod_ready.go:81] duration metric: took 6.484251ms for pod "kube-apiserver-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	E0318 13:50:44.604157 1157263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-173036" hosting pod "kube-apiserver-embed-certs-173036" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-173036" has status "Ready":"False"
	I0318 13:50:44.604164 1157263 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:46.611326 1157263 pod_ready.go:102] pod "kube-controller-manager-embed-certs-173036" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:47.209538 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:49.708718 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:48.309157 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:50.808340 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:48.363218 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:48.864086 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:49.363457 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:49.863292 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:50.363308 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:50.863428 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:51.363583 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:51.863562 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:52.363995 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:52.863463 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:49.111834 1157263 pod_ready.go:102] pod "kube-controller-manager-embed-certs-173036" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:50.114329 1157263 pod_ready.go:92] pod "kube-controller-manager-embed-certs-173036" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:50.114356 1157263 pod_ready.go:81] duration metric: took 5.510175425s for pod "kube-controller-manager-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:50.114369 1157263 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xqf68" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:50.133169 1157263 pod_ready.go:92] pod "kube-proxy-xqf68" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:50.133196 1157263 pod_ready.go:81] duration metric: took 18.819059ms for pod "kube-proxy-xqf68" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:50.133208 1157263 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:52.144639 1157263 pod_ready.go:102] pod "kube-scheduler-embed-certs-173036" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:51.709823 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:54.207738 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:53.311033 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:55.311439 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:53.363919 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:53.863936 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:54.363671 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:54.863567 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:50:54.863709 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:50:54.911905 1157708 cri.go:89] found id: ""
	I0318 13:50:54.911942 1157708 logs.go:276] 0 containers: []
	W0318 13:50:54.911954 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:50:54.911962 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:50:54.912031 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:50:54.962141 1157708 cri.go:89] found id: ""
	I0318 13:50:54.962170 1157708 logs.go:276] 0 containers: []
	W0318 13:50:54.962182 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:50:54.962188 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:50:54.962269 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:50:55.001597 1157708 cri.go:89] found id: ""
	I0318 13:50:55.001639 1157708 logs.go:276] 0 containers: []
	W0318 13:50:55.001652 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:50:55.001660 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:50:55.001725 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:50:55.042660 1157708 cri.go:89] found id: ""
	I0318 13:50:55.042695 1157708 logs.go:276] 0 containers: []
	W0318 13:50:55.042708 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:50:55.042716 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:50:55.042775 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:50:55.082095 1157708 cri.go:89] found id: ""
	I0318 13:50:55.082128 1157708 logs.go:276] 0 containers: []
	W0318 13:50:55.082139 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:50:55.082146 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:50:55.082211 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:50:55.120938 1157708 cri.go:89] found id: ""
	I0318 13:50:55.120969 1157708 logs.go:276] 0 containers: []
	W0318 13:50:55.121000 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:50:55.121008 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:50:55.121081 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:50:55.159247 1157708 cri.go:89] found id: ""
	I0318 13:50:55.159280 1157708 logs.go:276] 0 containers: []
	W0318 13:50:55.159292 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:50:55.159300 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:50:55.159366 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:50:55.200130 1157708 cri.go:89] found id: ""
	I0318 13:50:55.200161 1157708 logs.go:276] 0 containers: []
	W0318 13:50:55.200170 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:50:55.200180 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:50:55.200193 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:50:55.254113 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:50:55.254154 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:50:55.268984 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:50:55.269027 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:50:55.402079 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:50:55.402106 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:50:55.402123 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:50:55.468627 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:50:55.468674 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:50:54.143220 1157263 pod_ready.go:92] pod "kube-scheduler-embed-certs-173036" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:54.143247 1157263 pod_ready.go:81] duration metric: took 4.010031997s for pod "kube-scheduler-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:54.143258 1157263 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:56.151615 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:58.650293 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:56.208339 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:58.209144 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:57.810894 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:00.308972 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:58.016860 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:58.031684 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:50:58.031747 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:50:58.073389 1157708 cri.go:89] found id: ""
	I0318 13:50:58.073415 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.073427 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:50:58.073434 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:50:58.073497 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:50:58.114439 1157708 cri.go:89] found id: ""
	I0318 13:50:58.114471 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.114483 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:50:58.114490 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:50:58.114553 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:50:58.165440 1157708 cri.go:89] found id: ""
	I0318 13:50:58.165466 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.165476 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:50:58.165484 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:50:58.165569 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:50:58.207083 1157708 cri.go:89] found id: ""
	I0318 13:50:58.207117 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.207129 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:50:58.207137 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:50:58.207227 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:50:58.252945 1157708 cri.go:89] found id: ""
	I0318 13:50:58.252973 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.252985 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:50:58.252993 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:50:58.253055 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:50:58.292437 1157708 cri.go:89] found id: ""
	I0318 13:50:58.292464 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.292474 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:50:58.292480 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:50:58.292530 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:50:58.335359 1157708 cri.go:89] found id: ""
	I0318 13:50:58.335403 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.335415 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:50:58.335423 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:50:58.335511 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:50:58.381434 1157708 cri.go:89] found id: ""
	I0318 13:50:58.381473 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.381484 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:50:58.381494 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:50:58.381511 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:50:58.432270 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:50:58.432319 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:50:58.447658 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:50:58.447686 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:50:58.523163 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:50:58.523186 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:50:58.523207 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:50:58.599544 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:50:58.599586 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:01.141653 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:01.156996 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:01.157070 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:01.192720 1157708 cri.go:89] found id: ""
	I0318 13:51:01.192762 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.192775 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:01.192785 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:01.192866 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:01.232678 1157708 cri.go:89] found id: ""
	I0318 13:51:01.232705 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.232716 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:01.232723 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:01.232795 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:01.270637 1157708 cri.go:89] found id: ""
	I0318 13:51:01.270666 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.270676 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:01.270684 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:01.270746 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:01.308891 1157708 cri.go:89] found id: ""
	I0318 13:51:01.308921 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.308931 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:01.308939 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:01.309003 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:01.349301 1157708 cri.go:89] found id: ""
	I0318 13:51:01.349334 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.349346 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:01.349354 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:01.349420 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:01.394010 1157708 cri.go:89] found id: ""
	I0318 13:51:01.394039 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.394047 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:01.394053 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:01.394103 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:01.432778 1157708 cri.go:89] found id: ""
	I0318 13:51:01.432804 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.432815 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:01.432823 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:01.432886 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:01.471974 1157708 cri.go:89] found id: ""
	I0318 13:51:01.472002 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.472011 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:01.472022 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:01.472040 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:01.524855 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:01.524893 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:01.540939 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:01.540967 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:01.618318 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:01.618350 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:01.618367 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:01.695717 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:01.695755 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:00.650906 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:02.651512 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:00.211620 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:02.708336 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:02.312320 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:04.808301 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:04.241781 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:04.256276 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:04.256373 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:04.297129 1157708 cri.go:89] found id: ""
	I0318 13:51:04.297158 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.297170 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:04.297179 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:04.297247 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:04.341743 1157708 cri.go:89] found id: ""
	I0318 13:51:04.341774 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.341786 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:04.341793 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:04.341858 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:04.384400 1157708 cri.go:89] found id: ""
	I0318 13:51:04.384434 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.384445 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:04.384453 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:04.384510 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:04.425459 1157708 cri.go:89] found id: ""
	I0318 13:51:04.425487 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.425500 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:04.425510 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:04.425563 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:04.463091 1157708 cri.go:89] found id: ""
	I0318 13:51:04.463125 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.463137 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:04.463145 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:04.463210 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:04.503023 1157708 cri.go:89] found id: ""
	I0318 13:51:04.503057 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.503069 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:04.503077 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:04.503141 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:04.542083 1157708 cri.go:89] found id: ""
	I0318 13:51:04.542116 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.542127 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:04.542136 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:04.542207 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:04.583097 1157708 cri.go:89] found id: ""
	I0318 13:51:04.583128 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.583137 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:04.583146 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:04.583161 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:04.650476 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:04.650518 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:04.706073 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:04.706111 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:04.723595 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:04.723628 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:04.800278 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:04.800301 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:04.800316 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:07.388144 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:07.403636 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:07.403711 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:07.443337 1157708 cri.go:89] found id: ""
	I0318 13:51:07.443365 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.443379 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:07.443386 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:07.443442 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:07.482417 1157708 cri.go:89] found id: ""
	I0318 13:51:07.482453 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.482462 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:07.482469 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:07.482521 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:07.518445 1157708 cri.go:89] found id: ""
	I0318 13:51:07.518474 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.518485 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:07.518493 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:07.518563 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:07.555628 1157708 cri.go:89] found id: ""
	I0318 13:51:07.555661 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.555673 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:07.555681 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:07.555760 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:07.593805 1157708 cri.go:89] found id: ""
	I0318 13:51:07.593842 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.593856 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:07.593873 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:07.593936 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:07.638206 1157708 cri.go:89] found id: ""
	I0318 13:51:07.638234 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.638242 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:07.638249 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:07.638313 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:07.679526 1157708 cri.go:89] found id: ""
	I0318 13:51:07.679561 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.679573 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:07.679581 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:07.679635 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:07.724468 1157708 cri.go:89] found id: ""
	I0318 13:51:07.724494 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.724504 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:07.724516 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:07.724533 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:07.766491 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:07.766522 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:07.823782 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:07.823833 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:07.839316 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:07.839342 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:07.924790 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:07.924821 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:07.924841 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:05.151629 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:07.651485 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:05.210455 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:07.709381 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:07.310000 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:09.808337 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:10.513618 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:10.528711 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:10.528790 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:10.571217 1157708 cri.go:89] found id: ""
	I0318 13:51:10.571254 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.571267 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:10.571275 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:10.571335 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:10.608096 1157708 cri.go:89] found id: ""
	I0318 13:51:10.608129 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.608140 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:10.608149 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:10.608217 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:10.649245 1157708 cri.go:89] found id: ""
	I0318 13:51:10.649274 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.649283 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:10.649290 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:10.649365 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:10.693462 1157708 cri.go:89] found id: ""
	I0318 13:51:10.693495 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.693506 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:10.693515 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:10.693589 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:10.740434 1157708 cri.go:89] found id: ""
	I0318 13:51:10.740464 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.740474 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:10.740480 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:10.740543 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:10.781062 1157708 cri.go:89] found id: ""
	I0318 13:51:10.781099 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.781108 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:10.781114 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:10.781167 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:10.828480 1157708 cri.go:89] found id: ""
	I0318 13:51:10.828513 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.828524 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:10.828532 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:10.828605 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:10.868508 1157708 cri.go:89] found id: ""
	I0318 13:51:10.868535 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.868543 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:10.868553 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:10.868565 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:10.923925 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:10.923961 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:10.939254 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:10.939283 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:11.031307 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:11.031334 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:11.031351 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:11.121563 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:11.121618 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:10.151278 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:12.650083 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:10.209877 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:12.709070 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:12.308084 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:14.309651 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:16.312985 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:13.681147 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:13.696705 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:13.696812 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:13.740904 1157708 cri.go:89] found id: ""
	I0318 13:51:13.740937 1157708 logs.go:276] 0 containers: []
	W0318 13:51:13.740949 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:13.740957 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:13.741038 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:13.779625 1157708 cri.go:89] found id: ""
	I0318 13:51:13.779659 1157708 logs.go:276] 0 containers: []
	W0318 13:51:13.779672 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:13.779681 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:13.779762 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:13.822183 1157708 cri.go:89] found id: ""
	I0318 13:51:13.822218 1157708 logs.go:276] 0 containers: []
	W0318 13:51:13.822231 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:13.822239 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:13.822302 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:13.873686 1157708 cri.go:89] found id: ""
	I0318 13:51:13.873728 1157708 logs.go:276] 0 containers: []
	W0318 13:51:13.873741 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:13.873749 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:13.873821 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:13.919772 1157708 cri.go:89] found id: ""
	I0318 13:51:13.919802 1157708 logs.go:276] 0 containers: []
	W0318 13:51:13.919811 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:13.919817 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:13.919874 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:13.958809 1157708 cri.go:89] found id: ""
	I0318 13:51:13.958837 1157708 logs.go:276] 0 containers: []
	W0318 13:51:13.958846 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:13.958852 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:13.958928 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:14.000537 1157708 cri.go:89] found id: ""
	I0318 13:51:14.000568 1157708 logs.go:276] 0 containers: []
	W0318 13:51:14.000580 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:14.000588 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:14.000638 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:14.041234 1157708 cri.go:89] found id: ""
	I0318 13:51:14.041265 1157708 logs.go:276] 0 containers: []
	W0318 13:51:14.041275 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:14.041285 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:14.041299 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:14.085435 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:14.085462 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:14.144336 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:14.144374 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:14.159972 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:14.160000 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:14.242027 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:14.242048 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:14.242061 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:16.821805 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:16.840202 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:16.840272 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:16.898088 1157708 cri.go:89] found id: ""
	I0318 13:51:16.898120 1157708 logs.go:276] 0 containers: []
	W0318 13:51:16.898129 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:16.898135 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:16.898203 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:16.953180 1157708 cri.go:89] found id: ""
	I0318 13:51:16.953209 1157708 logs.go:276] 0 containers: []
	W0318 13:51:16.953221 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:16.953229 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:16.953288 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:17.006995 1157708 cri.go:89] found id: ""
	I0318 13:51:17.007048 1157708 logs.go:276] 0 containers: []
	W0318 13:51:17.007062 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:17.007070 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:17.007136 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:17.049756 1157708 cri.go:89] found id: ""
	I0318 13:51:17.049798 1157708 logs.go:276] 0 containers: []
	W0318 13:51:17.049809 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:17.049817 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:17.049885 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:17.092026 1157708 cri.go:89] found id: ""
	I0318 13:51:17.092055 1157708 logs.go:276] 0 containers: []
	W0318 13:51:17.092066 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:17.092074 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:17.092144 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:17.137722 1157708 cri.go:89] found id: ""
	I0318 13:51:17.137756 1157708 logs.go:276] 0 containers: []
	W0318 13:51:17.137769 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:17.137778 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:17.137875 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:17.180778 1157708 cri.go:89] found id: ""
	I0318 13:51:17.180808 1157708 logs.go:276] 0 containers: []
	W0318 13:51:17.180816 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:17.180822 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:17.180885 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:17.227629 1157708 cri.go:89] found id: ""
	I0318 13:51:17.227664 1157708 logs.go:276] 0 containers: []
	W0318 13:51:17.227675 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:17.227688 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:17.227706 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:17.272559 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:17.272588 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:17.333953 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:17.333994 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:17.349765 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:17.349793 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:17.434436 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:17.434465 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:17.434483 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:14.650201 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:17.151069 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:15.208570 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:17.210168 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:19.707753 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:18.808252 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:21.309389 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:20.014314 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:20.031106 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:20.031172 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:20.067727 1157708 cri.go:89] found id: ""
	I0318 13:51:20.067753 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.067765 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:20.067773 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:20.067844 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:20.108455 1157708 cri.go:89] found id: ""
	I0318 13:51:20.108482 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.108491 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:20.108497 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:20.108563 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:20.152257 1157708 cri.go:89] found id: ""
	I0318 13:51:20.152285 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.152310 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:20.152317 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:20.152394 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:20.191480 1157708 cri.go:89] found id: ""
	I0318 13:51:20.191509 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.191520 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:20.191529 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:20.191599 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:20.235677 1157708 cri.go:89] found id: ""
	I0318 13:51:20.235705 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.235716 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:20.235723 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:20.235796 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:20.274794 1157708 cri.go:89] found id: ""
	I0318 13:51:20.274822 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.274833 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:20.274842 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:20.274907 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:20.321987 1157708 cri.go:89] found id: ""
	I0318 13:51:20.322019 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.322031 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:20.322040 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:20.322097 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:20.361292 1157708 cri.go:89] found id: ""
	I0318 13:51:20.361319 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.361328 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:20.361338 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:20.361360 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:20.434481 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:20.434509 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:20.434527 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:20.518203 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:20.518244 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:20.560241 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:20.560271 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:20.615489 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:20.615526 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:19.151244 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:21.151320 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:23.651849 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:21.708423 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:24.207976 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:23.310491 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:25.808443 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:23.132509 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:23.146447 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:23.146559 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:23.189576 1157708 cri.go:89] found id: ""
	I0318 13:51:23.189613 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.189625 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:23.189634 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:23.189688 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:23.229700 1157708 cri.go:89] found id: ""
	I0318 13:51:23.229731 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.229740 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:23.229747 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:23.229812 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:23.272713 1157708 cri.go:89] found id: ""
	I0318 13:51:23.272747 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.272759 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:23.272768 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:23.272834 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:23.313988 1157708 cri.go:89] found id: ""
	I0318 13:51:23.314014 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.314022 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:23.314028 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:23.314087 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:23.360195 1157708 cri.go:89] found id: ""
	I0318 13:51:23.360230 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.360243 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:23.360251 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:23.360321 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:23.400657 1157708 cri.go:89] found id: ""
	I0318 13:51:23.400685 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.400694 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:23.400707 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:23.400760 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:23.442841 1157708 cri.go:89] found id: ""
	I0318 13:51:23.442873 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.442893 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:23.442900 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:23.442970 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:23.483467 1157708 cri.go:89] found id: ""
	I0318 13:51:23.483504 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.483516 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:23.483528 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:23.483545 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:23.538581 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:23.538616 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:23.555392 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:23.555421 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:23.634919 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:23.634945 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:23.634970 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:23.718098 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:23.718144 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:26.270369 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:26.287165 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:26.287232 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:26.331773 1157708 cri.go:89] found id: ""
	I0318 13:51:26.331807 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.331832 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:26.331850 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:26.331923 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:26.372067 1157708 cri.go:89] found id: ""
	I0318 13:51:26.372095 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.372102 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:26.372109 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:26.372182 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:26.411883 1157708 cri.go:89] found id: ""
	I0318 13:51:26.411910 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.411919 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:26.411924 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:26.411980 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:26.449087 1157708 cri.go:89] found id: ""
	I0318 13:51:26.449122 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.449131 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:26.449137 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:26.449188 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:26.492126 1157708 cri.go:89] found id: ""
	I0318 13:51:26.492162 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.492174 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:26.492182 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:26.492251 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:26.529621 1157708 cri.go:89] found id: ""
	I0318 13:51:26.529656 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.529668 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:26.529677 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:26.529764 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:26.568853 1157708 cri.go:89] found id: ""
	I0318 13:51:26.568888 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.568899 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:26.568907 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:26.568979 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:26.607882 1157708 cri.go:89] found id: ""
	I0318 13:51:26.607917 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.607929 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:26.607942 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:26.607959 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:26.648736 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:26.648768 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:26.704641 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:26.704684 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:26.720681 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:26.720715 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:26.799577 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:26.799608 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:26.799627 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:26.152083 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:28.651445 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:26.208160 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:28.708468 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:28.309859 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:30.806690 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:29.389391 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:29.404122 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:29.404195 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:29.446761 1157708 cri.go:89] found id: ""
	I0318 13:51:29.446787 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.446796 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:29.446803 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:29.446857 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:29.483974 1157708 cri.go:89] found id: ""
	I0318 13:51:29.484007 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.484020 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:29.484028 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:29.484099 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:29.521894 1157708 cri.go:89] found id: ""
	I0318 13:51:29.521922 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.521931 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:29.521937 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:29.521993 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:29.562918 1157708 cri.go:89] found id: ""
	I0318 13:51:29.562948 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.562957 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:29.562963 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:29.563017 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:29.600372 1157708 cri.go:89] found id: ""
	I0318 13:51:29.600412 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.600424 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:29.600432 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:29.600500 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:29.638902 1157708 cri.go:89] found id: ""
	I0318 13:51:29.638933 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.638945 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:29.638953 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:29.639019 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:29.679041 1157708 cri.go:89] found id: ""
	I0318 13:51:29.679071 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.679079 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:29.679085 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:29.679142 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:29.719168 1157708 cri.go:89] found id: ""
	I0318 13:51:29.719201 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.719213 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:29.719224 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:29.719244 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:29.764050 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:29.764077 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:29.822136 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:29.822174 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:29.839485 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:29.839515 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:29.914984 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:29.915006 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:29.915023 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:32.497388 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:32.512151 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:32.512215 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:32.549566 1157708 cri.go:89] found id: ""
	I0318 13:51:32.549602 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.549614 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:32.549623 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:32.549693 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:32.588516 1157708 cri.go:89] found id: ""
	I0318 13:51:32.588546 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.588555 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:32.588562 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:32.588615 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:32.628425 1157708 cri.go:89] found id: ""
	I0318 13:51:32.628453 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.628462 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:32.628470 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:32.628546 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:32.670851 1157708 cri.go:89] found id: ""
	I0318 13:51:32.670874 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.670888 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:32.670895 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:32.670944 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:32.709614 1157708 cri.go:89] found id: ""
	I0318 13:51:32.709642 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.709656 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:32.709666 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:32.709738 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:32.749774 1157708 cri.go:89] found id: ""
	I0318 13:51:32.749808 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.749819 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:32.749828 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:32.749896 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:32.789502 1157708 cri.go:89] found id: ""
	I0318 13:51:32.789525 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.789534 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:32.789540 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:32.789589 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:32.834926 1157708 cri.go:89] found id: ""
	I0318 13:51:32.834948 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.834956 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:32.834965 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:32.834980 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:32.887365 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:32.887404 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:32.903584 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:32.903610 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:32.978924 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:32.978958 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:32.978988 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:31.151276 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:33.651395 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:30.709136 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:32.709549 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:32.808076 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:35.308827 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:33.055386 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:33.055424 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:35.603881 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:35.618083 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:35.618167 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:35.659760 1157708 cri.go:89] found id: ""
	I0318 13:51:35.659802 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.659814 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:35.659820 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:35.659881 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:35.703521 1157708 cri.go:89] found id: ""
	I0318 13:51:35.703570 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.703582 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:35.703589 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:35.703651 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:35.744411 1157708 cri.go:89] found id: ""
	I0318 13:51:35.744444 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.744455 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:35.744463 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:35.744548 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:35.783704 1157708 cri.go:89] found id: ""
	I0318 13:51:35.783735 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.783746 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:35.783754 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:35.783819 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:35.824000 1157708 cri.go:89] found id: ""
	I0318 13:51:35.824031 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.824042 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:35.824049 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:35.824117 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:35.860260 1157708 cri.go:89] found id: ""
	I0318 13:51:35.860289 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.860299 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:35.860308 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:35.860388 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:35.895154 1157708 cri.go:89] found id: ""
	I0318 13:51:35.895189 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.895201 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:35.895209 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:35.895276 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:35.936916 1157708 cri.go:89] found id: ""
	I0318 13:51:35.936942 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.936951 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:35.936961 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:35.936977 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:35.951715 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:35.951745 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:36.027431 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:36.027457 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:36.027474 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:36.113339 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:36.113386 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:36.160132 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:36.160170 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:36.151331 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:38.650891 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:35.208500 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:37.209692 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:39.709776 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:37.807423 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:39.809226 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:38.711710 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:38.726104 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:38.726162 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:38.763251 1157708 cri.go:89] found id: ""
	I0318 13:51:38.763281 1157708 logs.go:276] 0 containers: []
	W0318 13:51:38.763291 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:38.763300 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:38.763364 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:38.802521 1157708 cri.go:89] found id: ""
	I0318 13:51:38.802548 1157708 logs.go:276] 0 containers: []
	W0318 13:51:38.802556 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:38.802562 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:38.802616 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:38.843778 1157708 cri.go:89] found id: ""
	I0318 13:51:38.843817 1157708 logs.go:276] 0 containers: []
	W0318 13:51:38.843831 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:38.843839 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:38.843909 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:38.884966 1157708 cri.go:89] found id: ""
	I0318 13:51:38.885003 1157708 logs.go:276] 0 containers: []
	W0318 13:51:38.885015 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:38.885024 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:38.885090 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:38.925653 1157708 cri.go:89] found id: ""
	I0318 13:51:38.925681 1157708 logs.go:276] 0 containers: []
	W0318 13:51:38.925690 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:38.925696 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:38.925757 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:38.964126 1157708 cri.go:89] found id: ""
	I0318 13:51:38.964156 1157708 logs.go:276] 0 containers: []
	W0318 13:51:38.964169 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:38.964177 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:38.964228 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:39.004864 1157708 cri.go:89] found id: ""
	I0318 13:51:39.004898 1157708 logs.go:276] 0 containers: []
	W0318 13:51:39.004910 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:39.004919 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:39.004991 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:39.041555 1157708 cri.go:89] found id: ""
	I0318 13:51:39.041588 1157708 logs.go:276] 0 containers: []
	W0318 13:51:39.041600 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:39.041611 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:39.041626 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:39.092984 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:39.093019 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:39.110492 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:39.110526 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:39.186785 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:39.186848 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:39.186872 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:39.272847 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:39.272891 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:41.829404 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:41.843407 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:41.843479 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:41.883129 1157708 cri.go:89] found id: ""
	I0318 13:51:41.883164 1157708 logs.go:276] 0 containers: []
	W0318 13:51:41.883175 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:41.883184 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:41.883246 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:41.924083 1157708 cri.go:89] found id: ""
	I0318 13:51:41.924123 1157708 logs.go:276] 0 containers: []
	W0318 13:51:41.924136 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:41.924144 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:41.924209 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:41.963029 1157708 cri.go:89] found id: ""
	I0318 13:51:41.963058 1157708 logs.go:276] 0 containers: []
	W0318 13:51:41.963069 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:41.963084 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:41.963155 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:42.003393 1157708 cri.go:89] found id: ""
	I0318 13:51:42.003430 1157708 logs.go:276] 0 containers: []
	W0318 13:51:42.003442 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:42.003450 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:42.003511 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:42.041938 1157708 cri.go:89] found id: ""
	I0318 13:51:42.041968 1157708 logs.go:276] 0 containers: []
	W0318 13:51:42.041977 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:42.041983 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:42.042044 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:42.079685 1157708 cri.go:89] found id: ""
	I0318 13:51:42.079718 1157708 logs.go:276] 0 containers: []
	W0318 13:51:42.079731 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:42.079740 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:42.079805 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:42.118112 1157708 cri.go:89] found id: ""
	I0318 13:51:42.118144 1157708 logs.go:276] 0 containers: []
	W0318 13:51:42.118156 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:42.118164 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:42.118230 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:42.157287 1157708 cri.go:89] found id: ""
	I0318 13:51:42.157319 1157708 logs.go:276] 0 containers: []
	W0318 13:51:42.157331 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:42.157343 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:42.157360 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:42.213006 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:42.213038 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:42.228452 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:42.228481 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:42.302523 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:42.302545 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:42.302558 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:42.387994 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:42.388062 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:40.651272 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:43.151009 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:42.208825 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:44.211676 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:42.310765 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:44.313778 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:44.934501 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:44.949163 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:44.949245 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:44.991885 1157708 cri.go:89] found id: ""
	I0318 13:51:44.991914 1157708 logs.go:276] 0 containers: []
	W0318 13:51:44.991924 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:44.991931 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:44.992008 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:45.029868 1157708 cri.go:89] found id: ""
	I0318 13:51:45.029904 1157708 logs.go:276] 0 containers: []
	W0318 13:51:45.029915 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:45.029922 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:45.030017 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:45.067755 1157708 cri.go:89] found id: ""
	I0318 13:51:45.067785 1157708 logs.go:276] 0 containers: []
	W0318 13:51:45.067794 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:45.067803 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:45.067857 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:45.106296 1157708 cri.go:89] found id: ""
	I0318 13:51:45.106323 1157708 logs.go:276] 0 containers: []
	W0318 13:51:45.106333 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:45.106339 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:45.106405 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:45.145746 1157708 cri.go:89] found id: ""
	I0318 13:51:45.145784 1157708 logs.go:276] 0 containers: []
	W0318 13:51:45.145797 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:45.145805 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:45.145868 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:45.191960 1157708 cri.go:89] found id: ""
	I0318 13:51:45.191998 1157708 logs.go:276] 0 containers: []
	W0318 13:51:45.192010 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:45.192019 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:45.192089 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:45.231436 1157708 cri.go:89] found id: ""
	I0318 13:51:45.231470 1157708 logs.go:276] 0 containers: []
	W0318 13:51:45.231483 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:45.231491 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:45.231559 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:45.274521 1157708 cri.go:89] found id: ""
	I0318 13:51:45.274554 1157708 logs.go:276] 0 containers: []
	W0318 13:51:45.274565 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:45.274577 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:45.274595 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:45.338539 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:45.338580 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:45.353917 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:45.353947 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:45.447734 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:45.447755 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:45.447768 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:45.530098 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:45.530140 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:45.653161 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:48.150841 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:46.708808 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:49.209076 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:46.808315 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:49.311406 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:48.077992 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:48.092203 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:48.092273 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:48.133136 1157708 cri.go:89] found id: ""
	I0318 13:51:48.133172 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.133183 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:48.133191 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:48.133259 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:48.177727 1157708 cri.go:89] found id: ""
	I0318 13:51:48.177756 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.177768 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:48.177775 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:48.177843 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:48.217574 1157708 cri.go:89] found id: ""
	I0318 13:51:48.217600 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.217608 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:48.217614 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:48.217676 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:48.258900 1157708 cri.go:89] found id: ""
	I0318 13:51:48.258933 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.258947 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:48.258955 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:48.259046 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:48.299527 1157708 cri.go:89] found id: ""
	I0318 13:51:48.299562 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.299573 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:48.299581 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:48.299650 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:48.339692 1157708 cri.go:89] found id: ""
	I0318 13:51:48.339723 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.339732 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:48.339740 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:48.339791 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:48.378737 1157708 cri.go:89] found id: ""
	I0318 13:51:48.378764 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.378773 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:48.378779 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:48.378841 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:48.414593 1157708 cri.go:89] found id: ""
	I0318 13:51:48.414621 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.414629 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:48.414639 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:48.414654 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:48.430232 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:48.430264 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:48.513313 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:48.513335 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:48.513353 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:48.594681 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:48.594721 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:48.638681 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:48.638720 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:51.189510 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:51.204296 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:51.204383 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:51.248285 1157708 cri.go:89] found id: ""
	I0318 13:51:51.248311 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.248331 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:51.248340 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:51.248414 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:51.289022 1157708 cri.go:89] found id: ""
	I0318 13:51:51.289055 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.289068 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:51.289077 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:51.289144 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:51.329367 1157708 cri.go:89] found id: ""
	I0318 13:51:51.329405 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.329414 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:51.329420 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:51.329477 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:51.370909 1157708 cri.go:89] found id: ""
	I0318 13:51:51.370948 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.370960 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:51.370970 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:51.371043 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:51.419447 1157708 cri.go:89] found id: ""
	I0318 13:51:51.419486 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.419498 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:51.419506 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:51.419573 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:51.466302 1157708 cri.go:89] found id: ""
	I0318 13:51:51.466336 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.466348 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:51.466356 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:51.466441 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:51.505593 1157708 cri.go:89] found id: ""
	I0318 13:51:51.505631 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.505644 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:51.505652 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:51.505724 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:51.543815 1157708 cri.go:89] found id: ""
	I0318 13:51:51.543843 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.543852 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:51.543863 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:51.543885 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:51.596271 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:51.596305 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:51.612441 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:51.612477 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:51.690591 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:51.690614 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:51.690631 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:51.771781 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:51.771821 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:50.650088 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:52.650307 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:51.710583 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:54.208629 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:51.808743 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:54.309915 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:54.319626 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:54.334041 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:54.334113 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:54.372090 1157708 cri.go:89] found id: ""
	I0318 13:51:54.372120 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.372132 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:54.372139 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:54.372196 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:54.412513 1157708 cri.go:89] found id: ""
	I0318 13:51:54.412567 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.412580 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:54.412588 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:54.412662 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:54.453143 1157708 cri.go:89] found id: ""
	I0318 13:51:54.453176 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.453188 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:54.453196 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:54.453262 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:54.497908 1157708 cri.go:89] found id: ""
	I0318 13:51:54.497940 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.497949 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:54.497957 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:54.498025 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:54.539044 1157708 cri.go:89] found id: ""
	I0318 13:51:54.539072 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.539081 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:54.539086 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:54.539151 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:54.578916 1157708 cri.go:89] found id: ""
	I0318 13:51:54.578944 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.578951 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:54.578958 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:54.579027 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:54.617339 1157708 cri.go:89] found id: ""
	I0318 13:51:54.617366 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.617375 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:54.617380 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:54.617436 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:54.661288 1157708 cri.go:89] found id: ""
	I0318 13:51:54.661309 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.661318 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:54.661328 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:54.661344 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:54.740710 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:54.740751 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:54.789136 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:54.789176 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:54.844585 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:54.844627 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:54.860304 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:54.860351 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:54.945305 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:57.445800 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:57.459294 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:57.459368 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:57.497411 1157708 cri.go:89] found id: ""
	I0318 13:51:57.497441 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.497449 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:57.497456 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:57.497521 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:57.535629 1157708 cri.go:89] found id: ""
	I0318 13:51:57.535663 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.535675 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:57.535684 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:57.535749 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:57.572980 1157708 cri.go:89] found id: ""
	I0318 13:51:57.573008 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.573017 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:57.573023 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:57.573071 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:57.622949 1157708 cri.go:89] found id: ""
	I0318 13:51:57.622984 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.622997 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:57.623005 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:57.623070 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:57.659877 1157708 cri.go:89] found id: ""
	I0318 13:51:57.659910 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.659921 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:57.659928 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:57.659991 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:57.705399 1157708 cri.go:89] found id: ""
	I0318 13:51:57.705481 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.705495 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:57.705504 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:57.705566 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:57.748035 1157708 cri.go:89] found id: ""
	I0318 13:51:57.748062 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.748073 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:57.748084 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:57.748144 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:57.801942 1157708 cri.go:89] found id: ""
	I0318 13:51:57.801976 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.801987 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:57.801999 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:57.802017 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:57.900157 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:57.900204 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:57.946179 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:57.946219 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:54.651363 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:57.151268 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:56.208925 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:58.708089 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:56.807605 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:58.808479 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:01.307740 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:58.000369 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:58.000412 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:58.016179 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:58.016211 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:58.101766 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:00.602151 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:00.617466 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:00.617531 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:00.661294 1157708 cri.go:89] found id: ""
	I0318 13:52:00.661328 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.661336 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:00.661342 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:00.661400 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:00.706227 1157708 cri.go:89] found id: ""
	I0318 13:52:00.706257 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.706267 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:00.706275 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:00.706342 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:00.746482 1157708 cri.go:89] found id: ""
	I0318 13:52:00.746515 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.746528 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:00.746536 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:00.746600 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:00.789242 1157708 cri.go:89] found id: ""
	I0318 13:52:00.789272 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.789281 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:00.789287 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:00.789348 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:00.832463 1157708 cri.go:89] found id: ""
	I0318 13:52:00.832503 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.832514 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:00.832522 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:00.832581 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:00.869790 1157708 cri.go:89] found id: ""
	I0318 13:52:00.869819 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.869830 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:00.869839 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:00.869904 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:00.909656 1157708 cri.go:89] found id: ""
	I0318 13:52:00.909685 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.909693 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:00.909700 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:00.909754 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:00.953818 1157708 cri.go:89] found id: ""
	I0318 13:52:00.953856 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.953868 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:00.953882 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:00.953898 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:01.032822 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:01.032848 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:01.032865 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:01.111701 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:01.111747 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:01.168270 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:01.168300 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:01.220376 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:01.220408 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:59.650359 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:01.650627 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:03.651830 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:00.709561 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:03.207829 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:03.808915 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:06.307915 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:03.737354 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:03.756282 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:03.756382 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:03.804716 1157708 cri.go:89] found id: ""
	I0318 13:52:03.804757 1157708 logs.go:276] 0 containers: []
	W0318 13:52:03.804768 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:03.804777 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:03.804838 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:03.864559 1157708 cri.go:89] found id: ""
	I0318 13:52:03.864596 1157708 logs.go:276] 0 containers: []
	W0318 13:52:03.864609 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:03.864617 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:03.864687 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:03.918397 1157708 cri.go:89] found id: ""
	I0318 13:52:03.918425 1157708 logs.go:276] 0 containers: []
	W0318 13:52:03.918433 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:03.918439 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:03.918504 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:03.961729 1157708 cri.go:89] found id: ""
	I0318 13:52:03.961762 1157708 logs.go:276] 0 containers: []
	W0318 13:52:03.961773 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:03.961780 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:03.961856 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:04.006261 1157708 cri.go:89] found id: ""
	I0318 13:52:04.006299 1157708 logs.go:276] 0 containers: []
	W0318 13:52:04.006311 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:04.006319 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:04.006404 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:04.050284 1157708 cri.go:89] found id: ""
	I0318 13:52:04.050313 1157708 logs.go:276] 0 containers: []
	W0318 13:52:04.050321 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:04.050327 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:04.050384 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:04.093789 1157708 cri.go:89] found id: ""
	I0318 13:52:04.093827 1157708 logs.go:276] 0 containers: []
	W0318 13:52:04.093839 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:04.093847 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:04.093916 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:04.135047 1157708 cri.go:89] found id: ""
	I0318 13:52:04.135091 1157708 logs.go:276] 0 containers: []
	W0318 13:52:04.135110 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:04.135124 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:04.135142 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:04.192899 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:04.192937 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:04.209080 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:04.209130 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:04.286388 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:04.286413 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:04.286428 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:04.371836 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:04.371877 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:06.923039 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:06.938743 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:06.938826 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:06.984600 1157708 cri.go:89] found id: ""
	I0318 13:52:06.984634 1157708 logs.go:276] 0 containers: []
	W0318 13:52:06.984646 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:06.984655 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:06.984721 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:07.023849 1157708 cri.go:89] found id: ""
	I0318 13:52:07.023891 1157708 logs.go:276] 0 containers: []
	W0318 13:52:07.023914 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:07.023922 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:07.023984 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:07.071972 1157708 cri.go:89] found id: ""
	I0318 13:52:07.072002 1157708 logs.go:276] 0 containers: []
	W0318 13:52:07.072015 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:07.072022 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:07.072087 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:07.109070 1157708 cri.go:89] found id: ""
	I0318 13:52:07.109105 1157708 logs.go:276] 0 containers: []
	W0318 13:52:07.109118 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:07.109126 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:07.109183 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:07.149879 1157708 cri.go:89] found id: ""
	I0318 13:52:07.149910 1157708 logs.go:276] 0 containers: []
	W0318 13:52:07.149918 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:07.149925 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:07.149990 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:07.195946 1157708 cri.go:89] found id: ""
	I0318 13:52:07.195976 1157708 logs.go:276] 0 containers: []
	W0318 13:52:07.195987 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:07.195995 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:07.196062 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:07.238126 1157708 cri.go:89] found id: ""
	I0318 13:52:07.238152 1157708 logs.go:276] 0 containers: []
	W0318 13:52:07.238162 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:07.238168 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:07.238233 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:07.278218 1157708 cri.go:89] found id: ""
	I0318 13:52:07.278255 1157708 logs.go:276] 0 containers: []
	W0318 13:52:07.278268 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:07.278282 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:07.278300 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:07.294926 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:07.294955 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:07.383431 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:07.383455 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:07.383468 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:07.467306 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:07.467348 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:07.515996 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:07.516028 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:06.151546 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:08.162392 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:05.208765 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:07.210243 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:09.708076 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:08.309045 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:10.807773 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:10.071945 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:10.088587 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:10.088654 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:10.130528 1157708 cri.go:89] found id: ""
	I0318 13:52:10.130566 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.130579 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:10.130588 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:10.130663 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:10.173113 1157708 cri.go:89] found id: ""
	I0318 13:52:10.173150 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.173168 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:10.173178 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:10.173243 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:10.218941 1157708 cri.go:89] found id: ""
	I0318 13:52:10.218976 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.218987 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:10.218996 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:10.219068 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:10.262331 1157708 cri.go:89] found id: ""
	I0318 13:52:10.262368 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.262381 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:10.262389 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:10.262460 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:10.303329 1157708 cri.go:89] found id: ""
	I0318 13:52:10.303363 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.303378 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:10.303386 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:10.303457 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:10.344458 1157708 cri.go:89] found id: ""
	I0318 13:52:10.344486 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.344497 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:10.344505 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:10.344567 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:10.386753 1157708 cri.go:89] found id: ""
	I0318 13:52:10.386786 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.386797 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:10.386806 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:10.386876 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:10.425922 1157708 cri.go:89] found id: ""
	I0318 13:52:10.425954 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.425965 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:10.425978 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:10.426000 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:10.441134 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:10.441168 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:10.514865 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:10.514899 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:10.514916 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:10.592061 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:10.592105 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:10.642900 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:10.642935 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:10.651432 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:13.150537 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:12.208498 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:14.209684 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:12.808250 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:15.308639 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:13.199176 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:13.215155 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:13.215232 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:13.256107 1157708 cri.go:89] found id: ""
	I0318 13:52:13.256139 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.256151 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:13.256160 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:13.256231 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:13.296562 1157708 cri.go:89] found id: ""
	I0318 13:52:13.296597 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.296608 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:13.296615 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:13.296667 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:13.336633 1157708 cri.go:89] found id: ""
	I0318 13:52:13.336662 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.336672 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:13.336678 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:13.336737 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:13.382597 1157708 cri.go:89] found id: ""
	I0318 13:52:13.382639 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.382654 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:13.382663 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:13.382733 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:13.430257 1157708 cri.go:89] found id: ""
	I0318 13:52:13.430292 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.430304 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:13.430312 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:13.430373 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:13.466854 1157708 cri.go:89] found id: ""
	I0318 13:52:13.466881 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.466889 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:13.466896 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:13.466945 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:13.510297 1157708 cri.go:89] found id: ""
	I0318 13:52:13.510333 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.510344 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:13.510352 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:13.510420 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:13.551476 1157708 cri.go:89] found id: ""
	I0318 13:52:13.551508 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.551517 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:13.551528 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:13.551542 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:13.634561 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:13.634585 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:13.634598 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:13.720088 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:13.720129 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:13.760621 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:13.760659 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:13.817311 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:13.817350 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
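	[editor's note] The 1157708 cycle above repeats a fixed probe: pgrep for an apiserver process, then list CRI containers for each expected component. A minimal standalone sketch of that probe, using only the commands already visible in the log (the test runs them on the node over SSH):

	    # look for a running kube-apiserver process, as in the log
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	    # list containers for each component; empty output corresponds to 'found id: ""' above
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      sudo crictl ps -a --quiet --name="$name"
	    done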
	I0318 13:52:16.334094 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:16.349779 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:16.349866 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:16.394131 1157708 cri.go:89] found id: ""
	I0318 13:52:16.394157 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.394167 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:16.394175 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:16.394239 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:16.438185 1157708 cri.go:89] found id: ""
	I0318 13:52:16.438232 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.438245 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:16.438264 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:16.438335 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:16.476872 1157708 cri.go:89] found id: ""
	I0318 13:52:16.476920 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.476932 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:16.476939 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:16.477007 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:16.518226 1157708 cri.go:89] found id: ""
	I0318 13:52:16.518253 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.518262 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:16.518269 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:16.518327 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:16.559119 1157708 cri.go:89] found id: ""
	I0318 13:52:16.559160 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.559174 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:16.559182 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:16.559260 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:16.600050 1157708 cri.go:89] found id: ""
	I0318 13:52:16.600079 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.600088 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:16.600094 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:16.600160 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:16.640621 1157708 cri.go:89] found id: ""
	I0318 13:52:16.640649 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.640660 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:16.640668 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:16.640733 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:16.680541 1157708 cri.go:89] found id: ""
	I0318 13:52:16.680571 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.680580 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:16.680590 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:16.680602 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:16.766378 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:16.766415 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:16.811846 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:16.811883 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:16.871940 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:16.871981 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:16.887494 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:16.887521 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:16.961924 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
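	[editor's note] Every "describe nodes" attempt in this run fails the same way because nothing is answering on the apiserver port. A minimal sketch of the failing command as logged, plus a hypothetical reachability probe (the curl check is an illustration only; the test does not execute it):

	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig
	    # => "The connection to the server localhost:8443 was refused - did you specify the right host or port?"
	    curl -sk https://localhost:8443/healthz || echo "apiserver not reachable"   # hypothetical probe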
	I0318 13:52:15.650599 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:17.650902 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:16.710336 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:19.207426 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:17.807338 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:19.809418 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:19.462316 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:19.478819 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:19.478885 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:19.523280 1157708 cri.go:89] found id: ""
	I0318 13:52:19.523314 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.523334 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:19.523342 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:19.523417 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:19.560675 1157708 cri.go:89] found id: ""
	I0318 13:52:19.560708 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.560717 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:19.560725 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:19.560790 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:19.598739 1157708 cri.go:89] found id: ""
	I0318 13:52:19.598766 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.598773 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:19.598781 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:19.598846 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:19.639928 1157708 cri.go:89] found id: ""
	I0318 13:52:19.639960 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.639969 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:19.639975 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:19.640030 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:19.686084 1157708 cri.go:89] found id: ""
	I0318 13:52:19.686134 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.686153 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:19.686160 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:19.686231 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:19.725449 1157708 cri.go:89] found id: ""
	I0318 13:52:19.725481 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.725491 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:19.725497 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:19.725559 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:19.763855 1157708 cri.go:89] found id: ""
	I0318 13:52:19.763886 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.763897 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:19.763905 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:19.763976 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:19.805783 1157708 cri.go:89] found id: ""
	I0318 13:52:19.805813 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.805824 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:19.805836 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:19.805852 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:19.883873 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:19.883914 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:19.926368 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:19.926406 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:19.981137 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:19.981181 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:19.996242 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:19.996269 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:20.077880 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:22.578045 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:22.594170 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:22.594247 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:22.637241 1157708 cri.go:89] found id: ""
	I0318 13:52:22.637276 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.637289 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:22.637298 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:22.637363 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:22.679877 1157708 cri.go:89] found id: ""
	I0318 13:52:22.679904 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.679912 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:22.679918 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:22.679981 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:22.721865 1157708 cri.go:89] found id: ""
	I0318 13:52:22.721890 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.721903 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:22.721912 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:22.721982 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:22.763208 1157708 cri.go:89] found id: ""
	I0318 13:52:22.763242 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.763255 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:22.763264 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:22.763329 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:22.802038 1157708 cri.go:89] found id: ""
	I0318 13:52:22.802071 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.802081 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:22.802089 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:22.802170 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:22.841206 1157708 cri.go:89] found id: ""
	I0318 13:52:22.841242 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.841254 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:22.841263 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:22.841328 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:22.885159 1157708 cri.go:89] found id: ""
	I0318 13:52:22.885197 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.885209 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:22.885218 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:22.885289 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:22.925346 1157708 cri.go:89] found id: ""
	I0318 13:52:22.925373 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.925382 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:22.925391 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:22.925407 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:19.654611 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:22.152365 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:21.208979 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:23.210660 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:22.308290 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:24.310006 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:23.006158 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:23.006193 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:23.053932 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:23.053961 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:23.107728 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:23.107768 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:23.125708 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:23.125740 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:23.202609 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:25.703096 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:25.718617 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:25.718689 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:25.756504 1157708 cri.go:89] found id: ""
	I0318 13:52:25.756530 1157708 logs.go:276] 0 containers: []
	W0318 13:52:25.756538 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:25.756544 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:25.756608 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:25.795103 1157708 cri.go:89] found id: ""
	I0318 13:52:25.795140 1157708 logs.go:276] 0 containers: []
	W0318 13:52:25.795152 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:25.795160 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:25.795240 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:25.839908 1157708 cri.go:89] found id: ""
	I0318 13:52:25.839945 1157708 logs.go:276] 0 containers: []
	W0318 13:52:25.839957 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:25.839971 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:25.840038 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:25.881677 1157708 cri.go:89] found id: ""
	I0318 13:52:25.881711 1157708 logs.go:276] 0 containers: []
	W0318 13:52:25.881723 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:25.881732 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:25.881802 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:25.923356 1157708 cri.go:89] found id: ""
	I0318 13:52:25.923386 1157708 logs.go:276] 0 containers: []
	W0318 13:52:25.923397 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:25.923410 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:25.923469 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:25.961661 1157708 cri.go:89] found id: ""
	I0318 13:52:25.961693 1157708 logs.go:276] 0 containers: []
	W0318 13:52:25.961705 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:25.961713 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:25.961785 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:26.003198 1157708 cri.go:89] found id: ""
	I0318 13:52:26.003236 1157708 logs.go:276] 0 containers: []
	W0318 13:52:26.003248 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:26.003256 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:26.003319 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:26.041436 1157708 cri.go:89] found id: ""
	I0318 13:52:26.041471 1157708 logs.go:276] 0 containers: []
	W0318 13:52:26.041483 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:26.041496 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:26.041515 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:26.056679 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:26.056716 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:26.143900 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:26.143926 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:26.143946 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:26.226929 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:26.226964 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:26.288519 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:26.288560 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
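	[editor's note] When the container probe comes up empty, the harness falls back to gathering host-level logs. The same collection can be reproduced with the commands shown in the log; a minimal sketch:

	    sudo journalctl -u kubelet -n 400                                            # kubelet logs
	    sudo journalctl -u crio -n 400                                               # CRI-O logs
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a                # container status
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400      # kernel warnings/errors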
	I0318 13:52:24.652661 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:27.152317 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:25.708488 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:27.708931 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:26.807624 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:28.809030 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:31.308980 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:28.846205 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:28.861117 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:28.861190 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:28.906990 1157708 cri.go:89] found id: ""
	I0318 13:52:28.907022 1157708 logs.go:276] 0 containers: []
	W0318 13:52:28.907030 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:28.907036 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:28.907099 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:28.946271 1157708 cri.go:89] found id: ""
	I0318 13:52:28.946309 1157708 logs.go:276] 0 containers: []
	W0318 13:52:28.946322 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:28.946332 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:28.946403 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:28.990158 1157708 cri.go:89] found id: ""
	I0318 13:52:28.990185 1157708 logs.go:276] 0 containers: []
	W0318 13:52:28.990193 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:28.990199 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:28.990251 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:29.035089 1157708 cri.go:89] found id: ""
	I0318 13:52:29.035123 1157708 logs.go:276] 0 containers: []
	W0318 13:52:29.035134 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:29.035143 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:29.035209 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:29.076991 1157708 cri.go:89] found id: ""
	I0318 13:52:29.077022 1157708 logs.go:276] 0 containers: []
	W0318 13:52:29.077033 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:29.077041 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:29.077104 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:29.117106 1157708 cri.go:89] found id: ""
	I0318 13:52:29.117134 1157708 logs.go:276] 0 containers: []
	W0318 13:52:29.117150 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:29.117157 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:29.117209 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:29.159675 1157708 cri.go:89] found id: ""
	I0318 13:52:29.159704 1157708 logs.go:276] 0 containers: []
	W0318 13:52:29.159714 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:29.159722 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:29.159787 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:29.202130 1157708 cri.go:89] found id: ""
	I0318 13:52:29.202157 1157708 logs.go:276] 0 containers: []
	W0318 13:52:29.202166 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:29.202176 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:29.202189 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:29.258343 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:29.258390 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:29.275314 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:29.275360 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:29.359842 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:29.359989 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:29.360036 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:29.446021 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:29.446072 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:31.990431 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:32.007443 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:32.007508 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:32.051028 1157708 cri.go:89] found id: ""
	I0318 13:52:32.051061 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.051070 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:32.051076 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:32.051144 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:32.092914 1157708 cri.go:89] found id: ""
	I0318 13:52:32.092950 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.092962 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:32.092972 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:32.093045 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:32.154257 1157708 cri.go:89] found id: ""
	I0318 13:52:32.154291 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.154302 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:32.154309 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:32.154375 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:32.200185 1157708 cri.go:89] found id: ""
	I0318 13:52:32.200224 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.200236 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:32.200244 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:32.200309 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:32.248927 1157708 cri.go:89] found id: ""
	I0318 13:52:32.248961 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.248974 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:32.248982 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:32.249051 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:32.289829 1157708 cri.go:89] found id: ""
	I0318 13:52:32.289861 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.289870 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:32.289876 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:32.289934 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:32.334346 1157708 cri.go:89] found id: ""
	I0318 13:52:32.334379 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.334387 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:32.334393 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:32.334457 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:32.378718 1157708 cri.go:89] found id: ""
	I0318 13:52:32.378761 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.378770 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:32.378780 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:32.378795 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:32.434626 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:32.434667 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:32.451366 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:32.451402 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:32.532868 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:32.532907 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:32.532924 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:32.617556 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:32.617597 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:29.650409 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:31.651019 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:30.207993 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:32.214101 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:34.710602 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:33.807499 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:35.807738 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:35.165067 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:35.181325 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:35.181404 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:35.220570 1157708 cri.go:89] found id: ""
	I0318 13:52:35.220601 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.220612 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:35.220619 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:35.220684 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:35.263798 1157708 cri.go:89] found id: ""
	I0318 13:52:35.263830 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.263841 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:35.263848 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:35.263915 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:35.309447 1157708 cri.go:89] found id: ""
	I0318 13:52:35.309477 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.309489 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:35.309497 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:35.309567 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:35.353444 1157708 cri.go:89] found id: ""
	I0318 13:52:35.353472 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.353484 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:35.353493 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:35.353556 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:35.394563 1157708 cri.go:89] found id: ""
	I0318 13:52:35.394591 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.394599 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:35.394604 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:35.394662 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:35.433866 1157708 cri.go:89] found id: ""
	I0318 13:52:35.433899 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.433908 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:35.433915 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:35.433970 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:35.482769 1157708 cri.go:89] found id: ""
	I0318 13:52:35.482808 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.482820 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:35.482829 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:35.482899 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:35.521465 1157708 cri.go:89] found id: ""
	I0318 13:52:35.521498 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.521509 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:35.521520 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:35.521534 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:35.577759 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:35.577799 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:35.593052 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:35.593084 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:35.672751 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:35.672773 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:35.672787 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:35.752118 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:35.752171 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:34.157429 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:36.650725 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:38.652096 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:37.209435 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:39.710020 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:38.312679 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:40.807379 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:38.296677 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:38.312261 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:38.312365 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:38.350328 1157708 cri.go:89] found id: ""
	I0318 13:52:38.350362 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.350374 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:38.350382 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:38.350457 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:38.389891 1157708 cri.go:89] found id: ""
	I0318 13:52:38.389927 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.389939 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:38.389947 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:38.390005 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:38.430268 1157708 cri.go:89] found id: ""
	I0318 13:52:38.430296 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.430305 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:38.430311 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:38.430365 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:38.470830 1157708 cri.go:89] found id: ""
	I0318 13:52:38.470859 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.470873 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:38.470880 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:38.470945 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:38.510501 1157708 cri.go:89] found id: ""
	I0318 13:52:38.510538 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.510552 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:38.510560 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:38.510618 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:38.594899 1157708 cri.go:89] found id: ""
	I0318 13:52:38.594926 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.594935 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:38.594942 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:38.595021 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:38.649095 1157708 cri.go:89] found id: ""
	I0318 13:52:38.649121 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.649129 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:38.649136 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:38.649192 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:38.695263 1157708 cri.go:89] found id: ""
	I0318 13:52:38.695295 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.695307 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:38.695320 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:38.695336 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:38.780624 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:38.780666 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:38.825294 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:38.825335 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:38.877548 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:38.877596 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:38.893289 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:38.893319 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:38.971752 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:41.472865 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:41.487371 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:41.487484 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:41.524691 1157708 cri.go:89] found id: ""
	I0318 13:52:41.524724 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.524737 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:41.524746 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:41.524812 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:41.564094 1157708 cri.go:89] found id: ""
	I0318 13:52:41.564125 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.564137 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:41.564145 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:41.564210 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:41.600019 1157708 cri.go:89] found id: ""
	I0318 13:52:41.600047 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.600058 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:41.600064 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:41.600142 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:41.638320 1157708 cri.go:89] found id: ""
	I0318 13:52:41.638350 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.638363 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:41.638372 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:41.638438 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:41.680763 1157708 cri.go:89] found id: ""
	I0318 13:52:41.680798 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.680810 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:41.680818 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:41.680894 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:41.720645 1157708 cri.go:89] found id: ""
	I0318 13:52:41.720674 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.720683 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:41.720690 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:41.720741 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:41.759121 1157708 cri.go:89] found id: ""
	I0318 13:52:41.759151 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.759185 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:41.759195 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:41.759264 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:41.797006 1157708 cri.go:89] found id: ""
	I0318 13:52:41.797034 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.797043 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:41.797053 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:41.797070 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:41.853315 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:41.853353 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:41.869920 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:41.869952 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:41.947187 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:41.947219 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:41.947235 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:42.025475 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:42.025515 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:41.151466 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:43.153616 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:42.207999 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:44.709760 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:43.310812 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:45.808394 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:44.574724 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:44.598990 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:44.599068 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:44.649051 1157708 cri.go:89] found id: ""
	I0318 13:52:44.649137 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.649168 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:44.649180 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:44.649254 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:44.686423 1157708 cri.go:89] found id: ""
	I0318 13:52:44.686459 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.686468 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:44.686473 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:44.686536 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:44.726534 1157708 cri.go:89] found id: ""
	I0318 13:52:44.726564 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.726575 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:44.726583 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:44.726653 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:44.771190 1157708 cri.go:89] found id: ""
	I0318 13:52:44.771220 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.771232 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:44.771240 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:44.771311 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:44.811577 1157708 cri.go:89] found id: ""
	I0318 13:52:44.811602 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.811611 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:44.811618 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:44.811677 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:44.850717 1157708 cri.go:89] found id: ""
	I0318 13:52:44.850744 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.850756 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:44.850765 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:44.850824 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:44.890294 1157708 cri.go:89] found id: ""
	I0318 13:52:44.890321 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.890330 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:44.890344 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:44.890401 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:44.930690 1157708 cri.go:89] found id: ""
	I0318 13:52:44.930720 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.930730 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:44.930741 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:44.930757 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:44.946509 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:44.946544 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:45.029748 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:45.029777 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:45.029795 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:45.111348 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:45.111392 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:45.165156 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:45.165193 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:47.720701 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:47.734457 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:47.734520 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:47.771273 1157708 cri.go:89] found id: ""
	I0318 13:52:47.771304 1157708 logs.go:276] 0 containers: []
	W0318 13:52:47.771313 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:47.771319 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:47.771370 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:47.813779 1157708 cri.go:89] found id: ""
	I0318 13:52:47.813806 1157708 logs.go:276] 0 containers: []
	W0318 13:52:47.813816 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:47.813824 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:47.813892 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:47.855547 1157708 cri.go:89] found id: ""
	I0318 13:52:47.855576 1157708 logs.go:276] 0 containers: []
	W0318 13:52:47.855584 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:47.855590 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:47.855640 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:47.892651 1157708 cri.go:89] found id: ""
	I0318 13:52:47.892684 1157708 logs.go:276] 0 containers: []
	W0318 13:52:47.892692 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:47.892697 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:47.892752 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:47.935457 1157708 cri.go:89] found id: ""
	I0318 13:52:47.935488 1157708 logs.go:276] 0 containers: []
	W0318 13:52:47.935498 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:47.935505 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:47.935567 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:47.969335 1157708 cri.go:89] found id: ""
	I0318 13:52:47.969361 1157708 logs.go:276] 0 containers: []
	W0318 13:52:47.969370 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:47.969377 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:47.969441 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:45.651171 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:48.151833 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:47.209014 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:49.710231 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:48.310467 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:50.807495 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:48.007305 1157708 cri.go:89] found id: ""
	I0318 13:52:48.007339 1157708 logs.go:276] 0 containers: []
	W0318 13:52:48.007349 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:48.007355 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:48.007416 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:48.050230 1157708 cri.go:89] found id: ""
	I0318 13:52:48.050264 1157708 logs.go:276] 0 containers: []
	W0318 13:52:48.050276 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:48.050289 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:48.050304 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:48.106946 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:48.106993 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:48.123805 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:48.123837 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:48.201881 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:48.201907 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:48.201920 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:48.281533 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:48.281577 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:50.829561 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:50.847462 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:50.847555 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:50.889731 1157708 cri.go:89] found id: ""
	I0318 13:52:50.889759 1157708 logs.go:276] 0 containers: []
	W0318 13:52:50.889768 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:50.889774 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:50.889831 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:50.928176 1157708 cri.go:89] found id: ""
	I0318 13:52:50.928210 1157708 logs.go:276] 0 containers: []
	W0318 13:52:50.928222 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:50.928231 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:50.928294 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:50.965737 1157708 cri.go:89] found id: ""
	I0318 13:52:50.965772 1157708 logs.go:276] 0 containers: []
	W0318 13:52:50.965786 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:50.965794 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:50.965866 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:51.008038 1157708 cri.go:89] found id: ""
	I0318 13:52:51.008072 1157708 logs.go:276] 0 containers: []
	W0318 13:52:51.008081 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:51.008087 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:51.008159 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:51.050310 1157708 cri.go:89] found id: ""
	I0318 13:52:51.050340 1157708 logs.go:276] 0 containers: []
	W0318 13:52:51.050355 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:51.050363 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:51.050431 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:51.090514 1157708 cri.go:89] found id: ""
	I0318 13:52:51.090541 1157708 logs.go:276] 0 containers: []
	W0318 13:52:51.090550 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:51.090556 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:51.090608 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:51.131278 1157708 cri.go:89] found id: ""
	I0318 13:52:51.131305 1157708 logs.go:276] 0 containers: []
	W0318 13:52:51.131313 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:51.131320 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:51.131381 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:51.173370 1157708 cri.go:89] found id: ""
	I0318 13:52:51.173400 1157708 logs.go:276] 0 containers: []
	W0318 13:52:51.173411 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:51.173437 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:51.173464 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:51.260155 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:51.260204 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:51.309963 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:51.309998 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:51.367838 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:51.367889 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:51.382542 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:51.382570 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:51.459258 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:50.650524 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:52.651804 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:52.208655 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:54.209701 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:52.808292 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:55.309417 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:53.960212 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:53.978939 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:53.979004 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:54.030003 1157708 cri.go:89] found id: ""
	I0318 13:52:54.030038 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.030052 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:54.030060 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:54.030134 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:54.073487 1157708 cri.go:89] found id: ""
	I0318 13:52:54.073523 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.073535 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:54.073543 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:54.073611 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:54.115982 1157708 cri.go:89] found id: ""
	I0318 13:52:54.116010 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.116022 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:54.116029 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:54.116099 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:54.158320 1157708 cri.go:89] found id: ""
	I0318 13:52:54.158348 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.158359 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:54.158366 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:54.158433 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:54.198911 1157708 cri.go:89] found id: ""
	I0318 13:52:54.198939 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.198948 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:54.198955 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:54.199010 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:54.240628 1157708 cri.go:89] found id: ""
	I0318 13:52:54.240659 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.240671 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:54.240679 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:54.240750 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:54.279377 1157708 cri.go:89] found id: ""
	I0318 13:52:54.279409 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.279418 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:54.279424 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:54.279493 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:54.324160 1157708 cri.go:89] found id: ""
	I0318 13:52:54.324192 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.324205 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:54.324218 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:54.324237 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:54.371487 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:54.371527 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:54.423487 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:54.423526 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:54.438773 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:54.438800 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:54.518788 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:54.518810 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:54.518825 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:57.103590 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:57.118866 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:57.118932 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:57.159354 1157708 cri.go:89] found id: ""
	I0318 13:52:57.159383 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.159393 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:57.159399 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:57.159458 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:57.201114 1157708 cri.go:89] found id: ""
	I0318 13:52:57.201148 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.201159 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:57.201167 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:57.201233 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:57.242172 1157708 cri.go:89] found id: ""
	I0318 13:52:57.242207 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.242217 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:57.242224 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:57.242287 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:57.282578 1157708 cri.go:89] found id: ""
	I0318 13:52:57.282617 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.282629 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:57.282637 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:57.282706 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:57.323682 1157708 cri.go:89] found id: ""
	I0318 13:52:57.323707 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.323715 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:57.323721 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:57.323771 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:57.364946 1157708 cri.go:89] found id: ""
	I0318 13:52:57.364980 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.364991 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:57.365003 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:57.365076 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:57.407466 1157708 cri.go:89] found id: ""
	I0318 13:52:57.407495 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.407505 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:57.407511 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:57.407568 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:57.454663 1157708 cri.go:89] found id: ""
	I0318 13:52:57.454692 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.454701 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:57.454710 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:57.454722 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:57.509591 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:57.509633 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:57.525125 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:57.525155 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:57.602819 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:57.602845 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:57.602863 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:57.689001 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:57.689045 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:55.150589 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:57.152149 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:56.708493 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:59.208099 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:57.311780 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:59.312048 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:00.234252 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:00.249526 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:00.249615 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:00.290131 1157708 cri.go:89] found id: ""
	I0318 13:53:00.290160 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.290171 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:00.290178 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:00.290230 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:00.337794 1157708 cri.go:89] found id: ""
	I0318 13:53:00.337828 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.337840 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:00.337848 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:00.337907 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:00.378188 1157708 cri.go:89] found id: ""
	I0318 13:53:00.378224 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.378236 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:00.378244 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:00.378313 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:00.418940 1157708 cri.go:89] found id: ""
	I0318 13:53:00.418972 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.418981 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:00.418987 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:00.419039 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:00.461471 1157708 cri.go:89] found id: ""
	I0318 13:53:00.461502 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.461511 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:00.461518 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:00.461572 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:00.498781 1157708 cri.go:89] found id: ""
	I0318 13:53:00.498812 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.498821 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:00.498827 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:00.498885 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:00.540359 1157708 cri.go:89] found id: ""
	I0318 13:53:00.540395 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.540407 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:00.540414 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:00.540480 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:00.583597 1157708 cri.go:89] found id: ""
	I0318 13:53:00.583628 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.583636 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:00.583648 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:00.583666 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:00.639498 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:00.639534 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:00.655764 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:00.655792 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:00.742351 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:00.742386 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:00.742400 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:00.825250 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:00.825298 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:59.651495 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:01.651843 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:01.709438 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:04.208439 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:01.810519 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:04.308525 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:03.373938 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:03.389723 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:03.389796 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:03.429675 1157708 cri.go:89] found id: ""
	I0318 13:53:03.429710 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.429723 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:03.429732 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:03.429803 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:03.468732 1157708 cri.go:89] found id: ""
	I0318 13:53:03.468768 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.468780 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:03.468788 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:03.468841 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:03.510562 1157708 cri.go:89] found id: ""
	I0318 13:53:03.510589 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.510598 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:03.510604 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:03.510667 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:03.549842 1157708 cri.go:89] found id: ""
	I0318 13:53:03.549896 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.549909 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:03.549918 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:03.549984 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:03.590036 1157708 cri.go:89] found id: ""
	I0318 13:53:03.590076 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.590086 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:03.590093 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:03.590146 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:03.635546 1157708 cri.go:89] found id: ""
	I0318 13:53:03.635573 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.635585 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:03.635593 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:03.635660 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:03.678634 1157708 cri.go:89] found id: ""
	I0318 13:53:03.678663 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.678671 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:03.678677 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:03.678735 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:03.719666 1157708 cri.go:89] found id: ""
	I0318 13:53:03.719698 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.719709 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:03.719721 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:03.719736 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:03.762353 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:03.762388 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:03.817484 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:03.817521 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:03.832820 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:03.832850 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:03.913094 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:03.913115 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:03.913130 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:06.502556 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:06.517682 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:06.517745 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:06.562167 1157708 cri.go:89] found id: ""
	I0318 13:53:06.562202 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.562215 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:06.562223 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:06.562294 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:06.601910 1157708 cri.go:89] found id: ""
	I0318 13:53:06.601945 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.601954 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:06.601962 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:06.602022 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:06.640652 1157708 cri.go:89] found id: ""
	I0318 13:53:06.640683 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.640694 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:06.640702 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:06.640778 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:06.686781 1157708 cri.go:89] found id: ""
	I0318 13:53:06.686809 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.686818 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:06.686824 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:06.686893 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:06.727080 1157708 cri.go:89] found id: ""
	I0318 13:53:06.727107 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.727115 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:06.727121 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:06.727173 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:06.764550 1157708 cri.go:89] found id: ""
	I0318 13:53:06.764575 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.764583 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:06.764589 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:06.764641 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:06.803978 1157708 cri.go:89] found id: ""
	I0318 13:53:06.804009 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.804019 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:06.804027 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:06.804091 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:06.843983 1157708 cri.go:89] found id: ""
	I0318 13:53:06.844016 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.844027 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:06.844040 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:06.844058 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:06.905389 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:06.905424 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:06.956888 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:06.956924 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:06.973551 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:06.973594 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:07.045945 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:07.045973 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:07.045991 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:04.150852 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:06.151454 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:08.656073 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:06.211223 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:08.707939 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:06.808218 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:09.309991 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:11.310190 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:09.635227 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:09.650166 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:09.650246 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:09.695126 1157708 cri.go:89] found id: ""
	I0318 13:53:09.695153 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.695162 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:09.695168 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:09.695221 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:09.740475 1157708 cri.go:89] found id: ""
	I0318 13:53:09.740507 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.740516 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:09.740522 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:09.740591 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:09.779078 1157708 cri.go:89] found id: ""
	I0318 13:53:09.779108 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.779119 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:09.779128 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:09.779186 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:09.821252 1157708 cri.go:89] found id: ""
	I0318 13:53:09.821285 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.821297 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:09.821306 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:09.821376 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:09.860500 1157708 cri.go:89] found id: ""
	I0318 13:53:09.860537 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.860550 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:09.860558 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:09.860622 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:09.903447 1157708 cri.go:89] found id: ""
	I0318 13:53:09.903475 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.903486 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:09.903494 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:09.903550 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:09.941620 1157708 cri.go:89] found id: ""
	I0318 13:53:09.941648 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.941661 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:09.941679 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:09.941731 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:09.980066 1157708 cri.go:89] found id: ""
	I0318 13:53:09.980101 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.980113 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:09.980125 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:09.980142 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:10.036960 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:10.037000 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:10.051329 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:10.051361 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:10.130896 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:10.130925 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:10.130942 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:10.212205 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:10.212236 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:12.754623 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:12.769956 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:12.770034 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:12.809006 1157708 cri.go:89] found id: ""
	I0318 13:53:12.809032 1157708 logs.go:276] 0 containers: []
	W0318 13:53:12.809043 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:12.809051 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:12.809113 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:12.852354 1157708 cri.go:89] found id: ""
	I0318 13:53:12.852390 1157708 logs.go:276] 0 containers: []
	W0318 13:53:12.852400 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:12.852407 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:12.852476 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:12.891891 1157708 cri.go:89] found id: ""
	I0318 13:53:12.891923 1157708 logs.go:276] 0 containers: []
	W0318 13:53:12.891933 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:12.891940 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:12.891991 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:12.931753 1157708 cri.go:89] found id: ""
	I0318 13:53:12.931785 1157708 logs.go:276] 0 containers: []
	W0318 13:53:12.931795 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:12.931803 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:12.931872 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:12.971622 1157708 cri.go:89] found id: ""
	I0318 13:53:12.971653 1157708 logs.go:276] 0 containers: []
	W0318 13:53:12.971662 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:12.971669 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:12.971731 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:11.151234 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:13.157081 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:10.708177 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:13.209203 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:13.315183 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:15.808738 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:13.009893 1157708 cri.go:89] found id: ""
	I0318 13:53:13.009930 1157708 logs.go:276] 0 containers: []
	W0318 13:53:13.009943 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:13.009952 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:13.010021 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:13.045361 1157708 cri.go:89] found id: ""
	I0318 13:53:13.045396 1157708 logs.go:276] 0 containers: []
	W0318 13:53:13.045404 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:13.045411 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:13.045474 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:13.087659 1157708 cri.go:89] found id: ""
	I0318 13:53:13.087686 1157708 logs.go:276] 0 containers: []
	W0318 13:53:13.087696 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:13.087706 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:13.087721 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:13.129979 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:13.130014 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:13.183802 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:13.183836 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:13.198808 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:13.198840 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:13.272736 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:13.272764 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:13.272783 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:15.870196 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:15.887480 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:15.887551 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:15.923871 1157708 cri.go:89] found id: ""
	I0318 13:53:15.923899 1157708 logs.go:276] 0 containers: []
	W0318 13:53:15.923907 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:15.923913 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:15.923976 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:15.963870 1157708 cri.go:89] found id: ""
	I0318 13:53:15.963906 1157708 logs.go:276] 0 containers: []
	W0318 13:53:15.963917 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:15.963925 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:15.963997 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:16.009781 1157708 cri.go:89] found id: ""
	I0318 13:53:16.009815 1157708 logs.go:276] 0 containers: []
	W0318 13:53:16.009828 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:16.009837 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:16.009905 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:16.047673 1157708 cri.go:89] found id: ""
	I0318 13:53:16.047708 1157708 logs.go:276] 0 containers: []
	W0318 13:53:16.047718 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:16.047727 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:16.047793 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:16.089419 1157708 cri.go:89] found id: ""
	I0318 13:53:16.089447 1157708 logs.go:276] 0 containers: []
	W0318 13:53:16.089455 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:16.089461 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:16.089511 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:16.133563 1157708 cri.go:89] found id: ""
	I0318 13:53:16.133594 1157708 logs.go:276] 0 containers: []
	W0318 13:53:16.133604 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:16.133611 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:16.133685 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:16.174369 1157708 cri.go:89] found id: ""
	I0318 13:53:16.174404 1157708 logs.go:276] 0 containers: []
	W0318 13:53:16.174415 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:16.174423 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:16.174491 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:16.219334 1157708 cri.go:89] found id: ""
	I0318 13:53:16.219360 1157708 logs.go:276] 0 containers: []
	W0318 13:53:16.219367 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:16.219376 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:16.219389 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:16.273468 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:16.273507 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:16.288584 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:16.288612 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:16.366575 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:16.366602 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:16.366620 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:16.451031 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:16.451071 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
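	[editor's note] Every "describe nodes" attempt in this stretch fails with "The connection to the server localhost:8443 was refused", which is consistent with the empty crictl listings: no kube-apiserver container is running, so nothing is serving port 8443. A quick way to confirm that from inside the node is sketched below; the crictl command is taken from the log, while the availability of ss and curl on the node is an assumption.

	    # Confirm the apiserver is genuinely absent rather than merely unreachable.
	    sudo crictl ps -a --quiet --name=kube-apiserver      # empty output = no container
	    sudo ss -lntp | grep ':8443' || echo "nothing listening on 8443"
	    curl -sk https://localhost:8443/healthz || echo "apiserver not responding"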
	I0318 13:53:15.650907 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:18.151434 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:15.708015 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:17.710036 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:18.311437 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:20.807854 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:18.997536 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:19.014995 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:19.015065 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:19.064686 1157708 cri.go:89] found id: ""
	I0318 13:53:19.064719 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.064731 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:19.064739 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:19.064793 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:19.110598 1157708 cri.go:89] found id: ""
	I0318 13:53:19.110629 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.110640 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:19.110648 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:19.110739 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:19.156628 1157708 cri.go:89] found id: ""
	I0318 13:53:19.156652 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.156660 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:19.156668 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:19.156730 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:19.205993 1157708 cri.go:89] found id: ""
	I0318 13:53:19.206029 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.206042 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:19.206049 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:19.206118 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:19.253902 1157708 cri.go:89] found id: ""
	I0318 13:53:19.253935 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.253952 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:19.253960 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:19.254036 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:19.296550 1157708 cri.go:89] found id: ""
	I0318 13:53:19.296583 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.296594 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:19.296602 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:19.296667 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:19.337316 1157708 cri.go:89] found id: ""
	I0318 13:53:19.337349 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.337360 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:19.337369 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:19.337446 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:19.381503 1157708 cri.go:89] found id: ""
	I0318 13:53:19.381546 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.381565 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:19.381579 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:19.381603 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:19.461665 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:19.461691 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:19.461707 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:19.548291 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:19.548348 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:19.591296 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:19.591335 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:19.648740 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:19.648776 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:22.164970 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:22.180740 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:22.180806 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:22.223787 1157708 cri.go:89] found id: ""
	I0318 13:53:22.223820 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.223833 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:22.223840 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:22.223908 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:22.266751 1157708 cri.go:89] found id: ""
	I0318 13:53:22.266785 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.266797 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:22.266805 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:22.266876 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:22.311669 1157708 cri.go:89] found id: ""
	I0318 13:53:22.311701 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.311712 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:22.311721 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:22.311816 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:22.354687 1157708 cri.go:89] found id: ""
	I0318 13:53:22.354722 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.354733 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:22.354742 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:22.354807 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:22.395741 1157708 cri.go:89] found id: ""
	I0318 13:53:22.395767 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.395776 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:22.395782 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:22.395832 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:22.434506 1157708 cri.go:89] found id: ""
	I0318 13:53:22.434539 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.434550 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:22.434559 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:22.434612 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:22.474583 1157708 cri.go:89] found id: ""
	I0318 13:53:22.474612 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.474621 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:22.474627 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:22.474690 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:22.521898 1157708 cri.go:89] found id: ""
	I0318 13:53:22.521943 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.521955 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:22.521968 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:22.521989 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:22.537679 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:22.537711 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:22.619575 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:22.619605 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:22.619621 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:22.704206 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:22.704265 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:22.753470 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:22.753502 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:20.650340 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:22.653036 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:20.213398 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:22.709150 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:22.808837 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:25.308831 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:25.311578 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:25.329917 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:25.329979 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:25.373784 1157708 cri.go:89] found id: ""
	I0318 13:53:25.373818 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.373826 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:25.373833 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:25.373901 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:25.422490 1157708 cri.go:89] found id: ""
	I0318 13:53:25.422516 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.422526 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:25.422532 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:25.422597 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:25.459523 1157708 cri.go:89] found id: ""
	I0318 13:53:25.459552 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.459560 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:25.459567 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:25.459627 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:25.495647 1157708 cri.go:89] found id: ""
	I0318 13:53:25.495683 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.495695 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:25.495702 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:25.495772 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:25.534582 1157708 cri.go:89] found id: ""
	I0318 13:53:25.534617 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.534626 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:25.534632 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:25.534704 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:25.577526 1157708 cri.go:89] found id: ""
	I0318 13:53:25.577558 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.577566 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:25.577573 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:25.577687 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:25.616403 1157708 cri.go:89] found id: ""
	I0318 13:53:25.616433 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.616445 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:25.616453 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:25.616527 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:25.660444 1157708 cri.go:89] found id: ""
	I0318 13:53:25.660474 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.660482 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:25.660492 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:25.660506 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:25.715595 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:25.715641 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:25.730358 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:25.730390 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:25.803153 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:25.803239 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:25.803261 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:25.885339 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:25.885388 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:25.150276 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:27.151389 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:25.214042 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:27.710185 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:27.807095 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:29.807177 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:28.433506 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:28.449402 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:28.449481 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:28.490972 1157708 cri.go:89] found id: ""
	I0318 13:53:28.491007 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.491019 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:28.491028 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:28.491094 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:28.531406 1157708 cri.go:89] found id: ""
	I0318 13:53:28.531439 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.531451 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:28.531460 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:28.531513 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:28.570299 1157708 cri.go:89] found id: ""
	I0318 13:53:28.570334 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.570345 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:28.570352 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:28.570408 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:28.607950 1157708 cri.go:89] found id: ""
	I0318 13:53:28.607979 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.607987 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:28.607994 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:28.608066 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:28.648710 1157708 cri.go:89] found id: ""
	I0318 13:53:28.648744 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.648755 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:28.648762 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:28.648830 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:28.691071 1157708 cri.go:89] found id: ""
	I0318 13:53:28.691102 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.691114 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:28.691122 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:28.691183 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:28.734399 1157708 cri.go:89] found id: ""
	I0318 13:53:28.734438 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.734452 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:28.734461 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:28.734548 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:28.774859 1157708 cri.go:89] found id: ""
	I0318 13:53:28.774891 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.774902 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:28.774912 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:28.774927 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:28.831420 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:28.831459 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:28.847970 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:28.848008 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:28.926007 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:28.926034 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:28.926051 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:29.007525 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:29.007577 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:31.555401 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:31.570964 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:31.571046 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:31.611400 1157708 cri.go:89] found id: ""
	I0318 13:53:31.611427 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.611438 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:31.611445 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:31.611510 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:31.654572 1157708 cri.go:89] found id: ""
	I0318 13:53:31.654602 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.654614 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:31.654622 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:31.654725 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:31.692649 1157708 cri.go:89] found id: ""
	I0318 13:53:31.692673 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.692681 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:31.692686 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:31.692748 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:31.732208 1157708 cri.go:89] found id: ""
	I0318 13:53:31.732233 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.732244 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:31.732253 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:31.732320 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:31.774132 1157708 cri.go:89] found id: ""
	I0318 13:53:31.774163 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.774172 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:31.774178 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:31.774234 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:31.813558 1157708 cri.go:89] found id: ""
	I0318 13:53:31.813582 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.813590 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:31.813597 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:31.813651 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:31.862024 1157708 cri.go:89] found id: ""
	I0318 13:53:31.862057 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.862070 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:31.862077 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:31.862146 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:31.903941 1157708 cri.go:89] found id: ""
	I0318 13:53:31.903972 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.903982 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:31.903992 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:31.904006 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:31.957327 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:31.957366 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:31.973337 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:31.973380 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:32.053702 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:32.053730 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:32.053744 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:32.134859 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:32.134911 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:29.649648 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:31.651426 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:33.651936 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:30.208512 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:32.709020 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:31.808276 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:33.811370 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:36.314374 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:34.683335 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:34.700383 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:34.700490 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:34.744387 1157708 cri.go:89] found id: ""
	I0318 13:53:34.744420 1157708 logs.go:276] 0 containers: []
	W0318 13:53:34.744432 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:34.744441 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:34.744509 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:34.788122 1157708 cri.go:89] found id: ""
	I0318 13:53:34.788150 1157708 logs.go:276] 0 containers: []
	W0318 13:53:34.788160 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:34.788166 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:34.788221 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:34.834760 1157708 cri.go:89] found id: ""
	I0318 13:53:34.834795 1157708 logs.go:276] 0 containers: []
	W0318 13:53:34.834808 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:34.834817 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:34.834894 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:34.882028 1157708 cri.go:89] found id: ""
	I0318 13:53:34.882062 1157708 logs.go:276] 0 containers: []
	W0318 13:53:34.882073 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:34.882081 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:34.882150 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:34.933339 1157708 cri.go:89] found id: ""
	I0318 13:53:34.933364 1157708 logs.go:276] 0 containers: []
	W0318 13:53:34.933374 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:34.933384 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:34.933451 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:34.972362 1157708 cri.go:89] found id: ""
	I0318 13:53:34.972395 1157708 logs.go:276] 0 containers: []
	W0318 13:53:34.972407 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:34.972416 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:34.972486 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:35.008949 1157708 cri.go:89] found id: ""
	I0318 13:53:35.008986 1157708 logs.go:276] 0 containers: []
	W0318 13:53:35.008999 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:35.009007 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:35.009080 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:35.054698 1157708 cri.go:89] found id: ""
	I0318 13:53:35.054733 1157708 logs.go:276] 0 containers: []
	W0318 13:53:35.054742 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:35.054756 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:35.054770 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:35.109391 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:35.109450 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:35.126785 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:35.126818 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:35.214303 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:35.214329 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:35.214342 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:35.298705 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:35.298750 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:37.843701 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:37.859330 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:37.859415 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:37.903428 1157708 cri.go:89] found id: ""
	I0318 13:53:37.903466 1157708 logs.go:276] 0 containers: []
	W0318 13:53:37.903479 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:37.903497 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:37.903560 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:37.943687 1157708 cri.go:89] found id: ""
	I0318 13:53:37.943716 1157708 logs.go:276] 0 containers: []
	W0318 13:53:37.943727 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:37.943735 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:37.943804 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:37.986201 1157708 cri.go:89] found id: ""
	I0318 13:53:37.986233 1157708 logs.go:276] 0 containers: []
	W0318 13:53:37.986244 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:37.986252 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:37.986322 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:36.151976 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:38.152281 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:35.209205 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:37.709122 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:38.806794 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:40.807552 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:38.026776 1157708 cri.go:89] found id: ""
	I0318 13:53:38.026813 1157708 logs.go:276] 0 containers: []
	W0318 13:53:38.026825 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:38.026832 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:38.026907 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:38.073057 1157708 cri.go:89] found id: ""
	I0318 13:53:38.073088 1157708 logs.go:276] 0 containers: []
	W0318 13:53:38.073098 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:38.073105 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:38.073172 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:38.110576 1157708 cri.go:89] found id: ""
	I0318 13:53:38.110611 1157708 logs.go:276] 0 containers: []
	W0318 13:53:38.110624 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:38.110632 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:38.110702 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:38.154293 1157708 cri.go:89] found id: ""
	I0318 13:53:38.154319 1157708 logs.go:276] 0 containers: []
	W0318 13:53:38.154327 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:38.154338 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:38.154414 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:38.195407 1157708 cri.go:89] found id: ""
	I0318 13:53:38.195434 1157708 logs.go:276] 0 containers: []
	W0318 13:53:38.195444 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:38.195454 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:38.195469 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:38.254159 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:38.254210 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:38.269143 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:38.269175 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:38.349819 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:38.349845 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:38.349864 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:38.435121 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:38.435164 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:40.982438 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:40.998483 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:40.998559 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:41.037470 1157708 cri.go:89] found id: ""
	I0318 13:53:41.037497 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.037506 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:41.037512 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:41.037583 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:41.078428 1157708 cri.go:89] found id: ""
	I0318 13:53:41.078463 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.078473 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:41.078482 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:41.078548 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:41.121342 1157708 cri.go:89] found id: ""
	I0318 13:53:41.121371 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.121382 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:41.121391 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:41.121482 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:41.164124 1157708 cri.go:89] found id: ""
	I0318 13:53:41.164149 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.164159 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:41.164167 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:41.164229 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:41.210294 1157708 cri.go:89] found id: ""
	I0318 13:53:41.210321 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.210329 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:41.210336 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:41.210407 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:41.253934 1157708 cri.go:89] found id: ""
	I0318 13:53:41.253957 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.253967 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:41.253973 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:41.254039 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:41.298817 1157708 cri.go:89] found id: ""
	I0318 13:53:41.298849 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.298861 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:41.298870 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:41.298936 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:41.344109 1157708 cri.go:89] found id: ""
	I0318 13:53:41.344137 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.344146 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:41.344156 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:41.344170 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:41.401026 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:41.401061 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:41.416197 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:41.416229 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:41.495349 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:41.495375 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:41.495393 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:41.578201 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:41.578253 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:40.651687 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:43.152619 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:40.208445 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:42.208613 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:44.210573 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:42.808665 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:45.309099 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:44.126601 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:44.140971 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:44.141048 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:44.184758 1157708 cri.go:89] found id: ""
	I0318 13:53:44.184786 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.184794 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:44.184801 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:44.184851 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:44.230793 1157708 cri.go:89] found id: ""
	I0318 13:53:44.230824 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.230836 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:44.230842 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:44.230916 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:44.269561 1157708 cri.go:89] found id: ""
	I0318 13:53:44.269594 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.269606 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:44.269614 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:44.269680 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:44.310847 1157708 cri.go:89] found id: ""
	I0318 13:53:44.310878 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.310889 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:44.310898 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:44.310970 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:44.350827 1157708 cri.go:89] found id: ""
	I0318 13:53:44.350860 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.350878 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:44.350887 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:44.350956 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:44.389693 1157708 cri.go:89] found id: ""
	I0318 13:53:44.389721 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.389730 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:44.389735 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:44.389804 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:44.429254 1157708 cri.go:89] found id: ""
	I0318 13:53:44.429280 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.429289 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:44.429303 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:44.429354 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:44.468484 1157708 cri.go:89] found id: ""
	I0318 13:53:44.468513 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.468525 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:44.468538 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:44.468555 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:44.525012 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:44.525058 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:44.541638 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:44.541668 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:44.621779 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:44.621801 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:44.621814 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:44.706797 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:44.706884 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:47.253569 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:47.268808 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:47.268888 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:47.313191 1157708 cri.go:89] found id: ""
	I0318 13:53:47.313220 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.313232 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:47.313240 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:47.313307 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:47.357567 1157708 cri.go:89] found id: ""
	I0318 13:53:47.357600 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.357611 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:47.357619 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:47.357688 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:47.392300 1157708 cri.go:89] found id: ""
	I0318 13:53:47.392341 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.392352 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:47.392366 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:47.392437 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:47.432800 1157708 cri.go:89] found id: ""
	I0318 13:53:47.432830 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.432842 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:47.432857 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:47.432921 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:47.469563 1157708 cri.go:89] found id: ""
	I0318 13:53:47.469591 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.469599 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:47.469605 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:47.469668 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:47.508770 1157708 cri.go:89] found id: ""
	I0318 13:53:47.508799 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.508810 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:47.508820 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:47.508880 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:47.549876 1157708 cri.go:89] found id: ""
	I0318 13:53:47.549909 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.549921 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:47.549930 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:47.549997 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:47.591385 1157708 cri.go:89] found id: ""
	I0318 13:53:47.591413 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.591421 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:47.591431 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:47.591446 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:47.646284 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:47.646313 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:47.662609 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:47.662639 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:47.737371 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:47.737398 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:47.737415 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:47.817311 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:47.817342 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:45.652845 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:48.150199 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:46.707734 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:48.709977 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:47.807238 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:50.308767 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:50.363832 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:50.380029 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:50.380109 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:50.427452 1157708 cri.go:89] found id: ""
	I0318 13:53:50.427484 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.427496 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:50.427505 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:50.427579 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:50.466766 1157708 cri.go:89] found id: ""
	I0318 13:53:50.466793 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.466801 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:50.466808 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:50.466894 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:50.506768 1157708 cri.go:89] found id: ""
	I0318 13:53:50.506799 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.506811 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:50.506819 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:50.506882 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:50.545554 1157708 cri.go:89] found id: ""
	I0318 13:53:50.545592 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.545605 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:50.545613 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:50.545685 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:50.583949 1157708 cri.go:89] found id: ""
	I0318 13:53:50.583984 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.583995 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:50.584004 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:50.584083 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:50.624730 1157708 cri.go:89] found id: ""
	I0318 13:53:50.624763 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.624774 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:50.624783 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:50.624853 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:50.664300 1157708 cri.go:89] found id: ""
	I0318 13:53:50.664346 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.664358 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:50.664366 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:50.664420 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:50.702760 1157708 cri.go:89] found id: ""
	I0318 13:53:50.702793 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.702805 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:50.702817 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:50.702833 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:50.757188 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:50.757237 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:50.772151 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:50.772195 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:50.856872 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:50.856898 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:50.856917 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:50.937706 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:50.937749 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:50.654814 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:53.151970 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:50.710233 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:53.209443 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:52.309529 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:54.809399 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:53.481836 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:53.497792 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:53.497856 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:53.535376 1157708 cri.go:89] found id: ""
	I0318 13:53:53.535411 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.535420 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:53.535427 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:53.535486 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:53.575002 1157708 cri.go:89] found id: ""
	I0318 13:53:53.575030 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.575042 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:53.575050 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:53.575119 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:53.615880 1157708 cri.go:89] found id: ""
	I0318 13:53:53.615919 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.615931 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:53.615940 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:53.616007 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:53.681746 1157708 cri.go:89] found id: ""
	I0318 13:53:53.681786 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.681799 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:53.681810 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:53.681887 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:53.725219 1157708 cri.go:89] found id: ""
	I0318 13:53:53.725241 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.725250 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:53.725256 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:53.725317 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:53.766969 1157708 cri.go:89] found id: ""
	I0318 13:53:53.767006 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.767018 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:53.767026 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:53.767091 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:53.802103 1157708 cri.go:89] found id: ""
	I0318 13:53:53.802134 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.802145 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:53.802157 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:53.802210 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:53.843054 1157708 cri.go:89] found id: ""
	I0318 13:53:53.843085 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.843093 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:53.843103 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:53.843117 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:53.899794 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:53.899836 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:53.915559 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:53.915592 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:53.996410 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:53.996438 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:53.996456 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:54.085588 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:54.085628 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:56.632201 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:56.648183 1157708 kubeadm.go:591] duration metric: took 4m3.550073086s to restartPrimaryControlPlane
	W0318 13:53:56.648381 1157708 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 13:53:56.648422 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 13:53:55.152626 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:57.650951 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:55.209511 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:57.709324 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:59.710029 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:59.666187 1157708 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.017736279s)
	I0318 13:53:59.666270 1157708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:53:59.682887 1157708 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:53:59.694626 1157708 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:53:59.706577 1157708 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:53:59.706599 1157708 kubeadm.go:156] found existing configuration files:
	
	I0318 13:53:59.706648 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:53:59.718311 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:53:59.718371 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:53:59.729298 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:53:59.741351 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:53:59.741401 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:53:59.753652 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:53:59.765642 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:53:59.765695 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:53:59.778055 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:53:59.789994 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:53:59.790042 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:53:59.801292 1157708 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 13:53:59.879414 1157708 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 13:53:59.879516 1157708 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 13:54:00.046477 1157708 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 13:54:00.046660 1157708 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 13:54:00.046819 1157708 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 13:54:00.257070 1157708 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 13:54:00.259191 1157708 out.go:204]   - Generating certificates and keys ...
	I0318 13:54:00.259333 1157708 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 13:54:00.259434 1157708 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 13:54:00.259549 1157708 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 13:54:00.259658 1157708 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 13:54:00.259782 1157708 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 13:54:00.259857 1157708 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 13:54:00.259949 1157708 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 13:54:00.260033 1157708 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 13:54:00.260136 1157708 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 13:54:00.260244 1157708 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 13:54:00.260299 1157708 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 13:54:00.260394 1157708 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 13:54:00.423400 1157708 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 13:54:00.543983 1157708 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 13:54:00.796108 1157708 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 13:54:00.901121 1157708 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 13:54:00.918891 1157708 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 13:54:00.920502 1157708 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 13:54:00.920642 1157708 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 13:54:01.094176 1157708 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 13:53:57.306878 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:59.308670 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:01.096397 1157708 out.go:204]   - Booting up control plane ...
	I0318 13:54:01.096539 1157708 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 13:54:01.107816 1157708 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 13:54:01.108753 1157708 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 13:54:01.109641 1157708 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 13:54:01.111913 1157708 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 13:54:00.150985 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:02.151139 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:02.208577 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:04.209527 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:04.701940 1157416 pod_ready.go:81] duration metric: took 4m0.000915275s for pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace to be "Ready" ...
	E0318 13:54:04.701995 1157416 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace to be "Ready" (will not retry!)
	I0318 13:54:04.702022 1157416 pod_ready.go:38] duration metric: took 4m12.048388069s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:54:04.702063 1157416 kubeadm.go:591] duration metric: took 4m22.220919415s to restartPrimaryControlPlane
	W0318 13:54:04.702133 1157416 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 13:54:04.702168 1157416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 13:54:01.807445 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:04.308435 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:04.151252 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:06.152296 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:08.162574 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:06.809148 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:08.811335 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:11.306999 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:10.650696 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:12.651741 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:13.308835 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:15.807754 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:15.150875 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:17.653698 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:18.308137 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:20.308720 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:20.152545 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:22.650685 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:22.807655 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:24.807765 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:25.150664 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:27.650092 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:26.808311 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:29.311683 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:31.301320 1157887 pod_ready.go:81] duration metric: took 4m0.001048401s for pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace to be "Ready" ...
	E0318 13:54:31.301351 1157887 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace to be "Ready" (will not retry!)
	I0318 13:54:31.301372 1157887 pod_ready.go:38] duration metric: took 4m12.063560637s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:54:31.301397 1157887 kubeadm.go:591] duration metric: took 4m19.202321881s to restartPrimaryControlPlane
	W0318 13:54:31.301478 1157887 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 13:54:31.301505 1157887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 13:54:29.651334 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:32.152059 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:34.651230 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:37.151130 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:37.018723 1157416 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.31652367s)
	I0318 13:54:37.018822 1157416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:54:37.036348 1157416 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:54:37.047932 1157416 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:54:37.058846 1157416 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:54:37.058875 1157416 kubeadm.go:156] found existing configuration files:
	
	I0318 13:54:37.058920 1157416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:54:37.069333 1157416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:54:37.069396 1157416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:54:37.080053 1157416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:54:37.090110 1157416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:54:37.090170 1157416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:54:37.101032 1157416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:54:37.111052 1157416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:54:37.111124 1157416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:54:37.121867 1157416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:54:37.132057 1157416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:54:37.132104 1157416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:54:37.143057 1157416 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 13:54:37.368813 1157416 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 13:54:41.111826 1157708 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 13:54:41.111977 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:54:41.112236 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:54:39.151250 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:41.652026 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:43.652929 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:46.082340 1157416 kubeadm.go:309] [init] Using Kubernetes version: v1.29.0-rc.2
	I0318 13:54:46.082410 1157416 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 13:54:46.082482 1157416 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 13:54:46.082561 1157416 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 13:54:46.082639 1157416 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 13:54:46.082692 1157416 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 13:54:46.084374 1157416 out.go:204]   - Generating certificates and keys ...
	I0318 13:54:46.084495 1157416 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 13:54:46.084584 1157416 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 13:54:46.084681 1157416 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 13:54:46.084767 1157416 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 13:54:46.084844 1157416 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 13:54:46.084933 1157416 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 13:54:46.085039 1157416 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 13:54:46.085131 1157416 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 13:54:46.085255 1157416 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 13:54:46.085344 1157416 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 13:54:46.085415 1157416 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 13:54:46.085491 1157416 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 13:54:46.085569 1157416 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 13:54:46.085637 1157416 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0318 13:54:46.085704 1157416 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 13:54:46.085791 1157416 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 13:54:46.085894 1157416 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 13:54:46.086010 1157416 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 13:54:46.086104 1157416 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 13:54:46.087481 1157416 out.go:204]   - Booting up control plane ...
	I0318 13:54:46.087576 1157416 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 13:54:46.087642 1157416 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 13:54:46.087698 1157416 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 13:54:46.087782 1157416 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 13:54:46.087865 1157416 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 13:54:46.087917 1157416 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 13:54:46.088051 1157416 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 13:54:46.088146 1157416 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003020 seconds
	I0318 13:54:46.088306 1157416 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 13:54:46.088501 1157416 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 13:54:46.088585 1157416 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 13:54:46.088770 1157416 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-537236 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 13:54:46.088826 1157416 kubeadm.go:309] [bootstrap-token] Using token: fk6yfh.vd0dmh72kd97vm2h
	I0318 13:54:46.091265 1157416 out.go:204]   - Configuring RBAC rules ...
	I0318 13:54:46.091375 1157416 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 13:54:46.091449 1157416 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 13:54:46.091656 1157416 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 13:54:46.091839 1157416 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 13:54:46.092014 1157416 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 13:54:46.092136 1157416 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 13:54:46.092289 1157416 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 13:54:46.092370 1157416 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 13:54:46.092436 1157416 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 13:54:46.092445 1157416 kubeadm.go:309] 
	I0318 13:54:46.092513 1157416 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 13:54:46.092522 1157416 kubeadm.go:309] 
	I0318 13:54:46.092588 1157416 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 13:54:46.092594 1157416 kubeadm.go:309] 
	I0318 13:54:46.092614 1157416 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 13:54:46.092704 1157416 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 13:54:46.092749 1157416 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 13:54:46.092755 1157416 kubeadm.go:309] 
	I0318 13:54:46.092805 1157416 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 13:54:46.092818 1157416 kubeadm.go:309] 
	I0318 13:54:46.092892 1157416 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 13:54:46.092906 1157416 kubeadm.go:309] 
	I0318 13:54:46.092982 1157416 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 13:54:46.093100 1157416 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 13:54:46.093212 1157416 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 13:54:46.093225 1157416 kubeadm.go:309] 
	I0318 13:54:46.093335 1157416 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 13:54:46.093448 1157416 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 13:54:46.093457 1157416 kubeadm.go:309] 
	I0318 13:54:46.093539 1157416 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token fk6yfh.vd0dmh72kd97vm2h \
	I0318 13:54:46.093684 1157416 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a1344f0f3711f58fc1cd7b9626e9cee7b8515c38b8838e5f1a0d7979c7202ddf \
	I0318 13:54:46.093717 1157416 kubeadm.go:309] 	--control-plane 
	I0318 13:54:46.093723 1157416 kubeadm.go:309] 
	I0318 13:54:46.093848 1157416 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 13:54:46.093860 1157416 kubeadm.go:309] 
	I0318 13:54:46.093946 1157416 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token fk6yfh.vd0dmh72kd97vm2h \
	I0318 13:54:46.094071 1157416 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a1344f0f3711f58fc1cd7b9626e9cee7b8515c38b8838e5f1a0d7979c7202ddf 
	I0318 13:54:46.094105 1157416 cni.go:84] Creating CNI manager for ""
	I0318 13:54:46.094119 1157416 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:54:46.095717 1157416 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 13:54:46.112502 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:54:46.112797 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:54:46.152713 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:48.651676 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:46.096953 1157416 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 13:54:46.127007 1157416 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 13:54:46.178588 1157416 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 13:54:46.178768 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:46.178785 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-537236 minikube.k8s.io/updated_at=2024_03_18T13_54_46_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a minikube.k8s.io/name=no-preload-537236 minikube.k8s.io/primary=true
	I0318 13:54:46.231974 1157416 ops.go:34] apiserver oom_adj: -16
	I0318 13:54:46.582048 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:47.082295 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:47.582447 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:48.082146 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:48.583155 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:49.082463 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:49.583104 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:51.153753 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:53.654740 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:50.082163 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:50.582159 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:51.082921 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:51.582616 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:52.082686 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:52.582520 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:53.082920 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:53.582281 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:54.082711 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:54.582110 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:56.112956 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:54:56.113210 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:54:55.082805 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:55.583034 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:56.082777 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:56.582491 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:57.082739 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:57.582854 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:58.082715 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:58.189802 1157416 kubeadm.go:1107] duration metric: took 12.011111335s to wait for elevateKubeSystemPrivileges
	W0318 13:54:58.189865 1157416 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 13:54:58.189878 1157416 kubeadm.go:393] duration metric: took 5m15.77131157s to StartCluster
	I0318 13:54:58.189991 1157416 settings.go:142] acquiring lock: {Name:mk2d6b94ee5fa5f1dbbb15ba1d5560c3c0f78110 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:54:58.190130 1157416 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 13:54:58.191965 1157416 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/kubeconfig: {Name:mk9c139f2702214315ee08dd7c5d02f739047458 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:54:58.192315 1157416 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 13:54:58.194158 1157416 out.go:177] * Verifying Kubernetes components...
	I0318 13:54:58.192460 1157416 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 13:54:58.192549 1157416 config.go:182] Loaded profile config "no-preload-537236": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 13:54:58.194270 1157416 addons.go:69] Setting storage-provisioner=true in profile "no-preload-537236"
	I0318 13:54:58.195604 1157416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:54:58.195628 1157416 addons.go:234] Setting addon storage-provisioner=true in "no-preload-537236"
	W0318 13:54:58.195646 1157416 addons.go:243] addon storage-provisioner should already be in state true
	I0318 13:54:58.194275 1157416 addons.go:69] Setting default-storageclass=true in profile "no-preload-537236"
	I0318 13:54:58.195741 1157416 host.go:66] Checking if "no-preload-537236" exists ...
	I0318 13:54:58.195748 1157416 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-537236"
	I0318 13:54:58.194278 1157416 addons.go:69] Setting metrics-server=true in profile "no-preload-537236"
	I0318 13:54:58.195816 1157416 addons.go:234] Setting addon metrics-server=true in "no-preload-537236"
	W0318 13:54:58.195835 1157416 addons.go:243] addon metrics-server should already be in state true
	I0318 13:54:58.195864 1157416 host.go:66] Checking if "no-preload-537236" exists ...
	I0318 13:54:58.196133 1157416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:54:58.196177 1157416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:54:58.196187 1157416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:54:58.196224 1157416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:54:58.196236 1157416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:54:58.196256 1157416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:54:58.218212 1157416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36677
	I0318 13:54:58.218703 1157416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34827
	I0318 13:54:58.218934 1157416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35455
	I0318 13:54:58.219717 1157416 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:54:58.219858 1157416 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:54:58.220143 1157416 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:54:58.220417 1157416 main.go:141] libmachine: Using API Version  1
	I0318 13:54:58.220443 1157416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:54:58.220478 1157416 main.go:141] libmachine: Using API Version  1
	I0318 13:54:58.220497 1157416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:54:58.220628 1157416 main.go:141] libmachine: Using API Version  1
	I0318 13:54:58.220650 1157416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:54:58.220882 1157416 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:54:58.220950 1157416 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:54:58.220973 1157416 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:54:58.221491 1157416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:54:58.221527 1157416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:54:58.221736 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetState
	I0318 13:54:58.222116 1157416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:54:58.222138 1157416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:54:58.226247 1157416 addons.go:234] Setting addon default-storageclass=true in "no-preload-537236"
	W0318 13:54:58.226271 1157416 addons.go:243] addon default-storageclass should already be in state true
	I0318 13:54:58.226303 1157416 host.go:66] Checking if "no-preload-537236" exists ...
	I0318 13:54:58.226691 1157416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:54:58.226719 1157416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:54:58.238772 1157416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40275
	I0318 13:54:58.239288 1157416 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:54:58.239925 1157416 main.go:141] libmachine: Using API Version  1
	I0318 13:54:58.239954 1157416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:54:58.240375 1157416 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:54:58.240581 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetState
	I0318 13:54:58.241297 1157416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44327
	I0318 13:54:58.241774 1157416 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:54:58.242300 1157416 main.go:141] libmachine: Using API Version  1
	I0318 13:54:58.242321 1157416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:54:58.242787 1157416 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:54:58.243001 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetState
	I0318 13:54:58.243033 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:54:58.245371 1157416 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 13:54:58.245038 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:54:58.246964 1157416 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 13:54:58.246981 1157416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 13:54:58.246429 1157416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34901
	I0318 13:54:58.247010 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:54:58.248738 1157416 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:54:54.143902 1157263 pod_ready.go:81] duration metric: took 4m0.000627482s for pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace to be "Ready" ...
	E0318 13:54:54.143947 1157263 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace to be "Ready" (will not retry!)
	I0318 13:54:54.143967 1157263 pod_ready.go:38] duration metric: took 4m9.565422592s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:54:54.143994 1157263 kubeadm.go:591] duration metric: took 4m17.754456341s to restartPrimaryControlPlane
	W0318 13:54:54.144061 1157263 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 13:54:54.144092 1157263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 13:54:58.247424 1157416 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:54:58.250418 1157416 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 13:54:58.250441 1157416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 13:54:58.250459 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:54:58.250666 1157416 main.go:141] libmachine: Using API Version  1
	I0318 13:54:58.250683 1157416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:54:58.250733 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:54:58.251012 1157416 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:54:58.251354 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:54:58.251384 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:54:58.251730 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:54:58.252053 1157416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:54:58.252082 1157416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:54:58.252627 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:54:58.252823 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:54:58.252974 1157416 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa Username:docker}
	I0318 13:54:58.253647 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:54:58.254073 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:54:58.254102 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:54:58.254393 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:54:58.254599 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:54:58.254720 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:54:58.254858 1157416 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa Username:docker}
	I0318 13:54:58.275785 1157416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35695
	I0318 13:54:58.276467 1157416 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:54:58.277007 1157416 main.go:141] libmachine: Using API Version  1
	I0318 13:54:58.277037 1157416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:54:58.277396 1157416 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:54:58.277594 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetState
	I0318 13:54:58.279419 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:54:58.279699 1157416 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 13:54:58.279719 1157416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 13:54:58.279740 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:54:58.282813 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:54:58.283168 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:54:58.283198 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:54:58.283319 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:54:58.283505 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:54:58.283643 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:54:58.283826 1157416 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa Username:docker}
	I0318 13:54:58.433881 1157416 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:54:58.466338 1157416 node_ready.go:35] waiting up to 6m0s for node "no-preload-537236" to be "Ready" ...
	I0318 13:54:58.485186 1157416 node_ready.go:49] node "no-preload-537236" has status "Ready":"True"
	I0318 13:54:58.485217 1157416 node_ready.go:38] duration metric: took 18.833477ms for node "no-preload-537236" to be "Ready" ...
	I0318 13:54:58.485230 1157416 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:54:58.527030 1157416 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:54:58.545133 1157416 pod_ready.go:92] pod "etcd-no-preload-537236" in "kube-system" namespace has status "Ready":"True"
	I0318 13:54:58.545175 1157416 pod_ready.go:81] duration metric: took 18.11215ms for pod "etcd-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:54:58.545191 1157416 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:54:58.560108 1157416 pod_ready.go:92] pod "kube-apiserver-no-preload-537236" in "kube-system" namespace has status "Ready":"True"
	I0318 13:54:58.560144 1157416 pod_ready.go:81] duration metric: took 14.943161ms for pod "kube-apiserver-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:54:58.560159 1157416 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:54:58.562894 1157416 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 13:54:58.562924 1157416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 13:54:58.572477 1157416 pod_ready.go:92] pod "kube-controller-manager-no-preload-537236" in "kube-system" namespace has status "Ready":"True"
	I0318 13:54:58.572510 1157416 pod_ready.go:81] duration metric: took 12.342242ms for pod "kube-controller-manager-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:54:58.572523 1157416 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6c4c5" in "kube-system" namespace to be "Ready" ...
	I0318 13:54:58.594618 1157416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 13:54:58.597140 1157416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 13:54:58.644132 1157416 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 13:54:58.644166 1157416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 13:54:58.734467 1157416 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 13:54:58.734499 1157416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 13:54:58.760623 1157416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 13:54:59.005259 1157416 main.go:141] libmachine: Making call to close driver server
	I0318 13:54:59.005305 1157416 main.go:141] libmachine: (no-preload-537236) Calling .Close
	I0318 13:54:59.005668 1157416 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:54:59.005692 1157416 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:54:59.005704 1157416 main.go:141] libmachine: Making call to close driver server
	I0318 13:54:59.005713 1157416 main.go:141] libmachine: (no-preload-537236) Calling .Close
	I0318 13:54:59.005981 1157416 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:54:59.005996 1157416 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:54:59.006028 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Closing plugin on server side
	I0318 13:54:59.020654 1157416 main.go:141] libmachine: Making call to close driver server
	I0318 13:54:59.020682 1157416 main.go:141] libmachine: (no-preload-537236) Calling .Close
	I0318 13:54:59.022812 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Closing plugin on server side
	I0318 13:54:59.022814 1157416 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:54:59.022850 1157416 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:54:59.979647 1157416 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.382455448s)
	I0318 13:54:59.979723 1157416 main.go:141] libmachine: Making call to close driver server
	I0318 13:54:59.979743 1157416 main.go:141] libmachine: (no-preload-537236) Calling .Close
	I0318 13:54:59.980124 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Closing plugin on server side
	I0318 13:54:59.980223 1157416 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:54:59.980258 1157416 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:54:59.980281 1157416 main.go:141] libmachine: Making call to close driver server
	I0318 13:54:59.980354 1157416 main.go:141] libmachine: (no-preload-537236) Calling .Close
	I0318 13:54:59.980675 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Closing plugin on server side
	I0318 13:54:59.980756 1157416 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:54:59.982424 1157416 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:00.270401 1157416 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.509719085s)
	I0318 13:55:00.270464 1157416 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:00.270481 1157416 main.go:141] libmachine: (no-preload-537236) Calling .Close
	I0318 13:55:00.272779 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Closing plugin on server side
	I0318 13:55:00.272794 1157416 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:00.272817 1157416 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:00.272828 1157416 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:00.272837 1157416 main.go:141] libmachine: (no-preload-537236) Calling .Close
	I0318 13:55:00.274705 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Closing plugin on server side
	I0318 13:55:00.274734 1157416 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:00.274759 1157416 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:00.274789 1157416 addons.go:470] Verifying addon metrics-server=true in "no-preload-537236"
	I0318 13:55:00.276931 1157416 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0318 13:55:00.278586 1157416 addons.go:505] duration metric: took 2.086117916s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0318 13:55:00.607578 1157416 pod_ready.go:92] pod "kube-proxy-6c4c5" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:00.607607 1157416 pod_ready.go:81] duration metric: took 2.035076209s for pod "kube-proxy-6c4c5" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:00.607620 1157416 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:00.626505 1157416 pod_ready.go:92] pod "kube-scheduler-no-preload-537236" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:00.626531 1157416 pod_ready.go:81] duration metric: took 18.904572ms for pod "kube-scheduler-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:00.626540 1157416 pod_ready.go:38] duration metric: took 2.141296876s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:55:00.626556 1157416 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:55:00.626612 1157416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:55:00.677379 1157416 api_server.go:72] duration metric: took 2.484994048s to wait for apiserver process to appear ...
	I0318 13:55:00.677406 1157416 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:55:00.677426 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:55:00.694161 1157416 api_server.go:279] https://192.168.39.7:8443/healthz returned 200:
	ok
	I0318 13:55:00.696445 1157416 api_server.go:141] control plane version: v1.29.0-rc.2
	I0318 13:55:00.696479 1157416 api_server.go:131] duration metric: took 19.065082ms to wait for apiserver health ...
	I0318 13:55:00.696492 1157416 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 13:55:00.707383 1157416 system_pods.go:59] 9 kube-system pods found
	I0318 13:55:00.707417 1157416 system_pods.go:61] "coredns-76f75df574-bhh4k" [6d6f9b9a-2f7e-46bc-9224-57dc077e444d] Running
	I0318 13:55:00.707421 1157416 system_pods.go:61] "coredns-76f75df574-grqdt" [f4ce5620-c97b-4ecd-baba-c5fc840b8127] Running
	I0318 13:55:00.707425 1157416 system_pods.go:61] "etcd-no-preload-537236" [ed8a1ea0-0ec7-4604-b9c9-3738a4569e02] Running
	I0318 13:55:00.707429 1157416 system_pods.go:61] "kube-apiserver-no-preload-537236" [5718ec63-58e7-463b-812b-a806e9fbbdd8] Running
	I0318 13:55:00.707432 1157416 system_pods.go:61] "kube-controller-manager-no-preload-537236" [4ff64d2e-9e89-44d6-9e8f-fa1440fc416a] Running
	I0318 13:55:00.707435 1157416 system_pods.go:61] "kube-proxy-6c4c5" [2dd6fcfc-7510-418d-baab-a0ec364391c1] Running
	I0318 13:55:00.707438 1157416 system_pods.go:61] "kube-scheduler-no-preload-537236" [b8c3f8b7-fc27-4647-880a-f82457de3a27] Running
	I0318 13:55:00.707445 1157416 system_pods.go:61] "metrics-server-57f55c9bc5-tkq6h" [14e262de-fd94-4888-96ab-75823109c8c2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:55:00.707450 1157416 system_pods.go:61] "storage-provisioner" [f02049f6-a08f-45ac-b285-cbdbb260ab59] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 13:55:00.707459 1157416 system_pods.go:74] duration metric: took 10.96036ms to wait for pod list to return data ...
	I0318 13:55:00.707467 1157416 default_sa.go:34] waiting for default service account to be created ...
	I0318 13:55:00.870267 1157416 default_sa.go:45] found service account: "default"
	I0318 13:55:00.870299 1157416 default_sa.go:55] duration metric: took 162.825175ms for default service account to be created ...
	I0318 13:55:00.870310 1157416 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 13:55:01.073950 1157416 system_pods.go:86] 9 kube-system pods found
	I0318 13:55:01.073985 1157416 system_pods.go:89] "coredns-76f75df574-bhh4k" [6d6f9b9a-2f7e-46bc-9224-57dc077e444d] Running
	I0318 13:55:01.073992 1157416 system_pods.go:89] "coredns-76f75df574-grqdt" [f4ce5620-c97b-4ecd-baba-c5fc840b8127] Running
	I0318 13:55:01.073998 1157416 system_pods.go:89] "etcd-no-preload-537236" [ed8a1ea0-0ec7-4604-b9c9-3738a4569e02] Running
	I0318 13:55:01.074004 1157416 system_pods.go:89] "kube-apiserver-no-preload-537236" [5718ec63-58e7-463b-812b-a806e9fbbdd8] Running
	I0318 13:55:01.074010 1157416 system_pods.go:89] "kube-controller-manager-no-preload-537236" [4ff64d2e-9e89-44d6-9e8f-fa1440fc416a] Running
	I0318 13:55:01.074017 1157416 system_pods.go:89] "kube-proxy-6c4c5" [2dd6fcfc-7510-418d-baab-a0ec364391c1] Running
	I0318 13:55:01.074035 1157416 system_pods.go:89] "kube-scheduler-no-preload-537236" [b8c3f8b7-fc27-4647-880a-f82457de3a27] Running
	I0318 13:55:01.074055 1157416 system_pods.go:89] "metrics-server-57f55c9bc5-tkq6h" [14e262de-fd94-4888-96ab-75823109c8c2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:55:01.074069 1157416 system_pods.go:89] "storage-provisioner" [f02049f6-a08f-45ac-b285-cbdbb260ab59] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 13:55:01.074085 1157416 system_pods.go:126] duration metric: took 203.766894ms to wait for k8s-apps to be running ...
	I0318 13:55:01.074100 1157416 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 13:55:01.074152 1157416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:55:01.091165 1157416 system_svc.go:56] duration metric: took 17.056217ms WaitForService to wait for kubelet
	I0318 13:55:01.091195 1157416 kubeadm.go:576] duration metric: took 2.898817514s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:55:01.091224 1157416 node_conditions.go:102] verifying NodePressure condition ...
	I0318 13:55:01.270664 1157416 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:55:01.270724 1157416 node_conditions.go:123] node cpu capacity is 2
	I0318 13:55:01.270737 1157416 node_conditions.go:105] duration metric: took 179.506857ms to run NodePressure ...
	I0318 13:55:01.270750 1157416 start.go:240] waiting for startup goroutines ...
	I0318 13:55:01.270758 1157416 start.go:245] waiting for cluster config update ...
	I0318 13:55:01.270769 1157416 start.go:254] writing updated cluster config ...
	I0318 13:55:01.271069 1157416 ssh_runner.go:195] Run: rm -f paused
	I0318 13:55:01.325353 1157416 start.go:600] kubectl: 1.29.3, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0318 13:55:01.327367 1157416 out.go:177] * Done! kubectl is now configured to use "no-preload-537236" cluster and "default" namespace by default
	I0318 13:55:03.715412 1157887 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.413874479s)
	I0318 13:55:03.715519 1157887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:55:03.732767 1157887 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:55:03.743375 1157887 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:55:03.753393 1157887 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:55:03.753414 1157887 kubeadm.go:156] found existing configuration files:
	
	I0318 13:55:03.753457 1157887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0318 13:55:03.763226 1157887 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:55:03.763289 1157887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:55:03.774001 1157887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0318 13:55:03.783943 1157887 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:55:03.783991 1157887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:55:03.794580 1157887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0318 13:55:03.803881 1157887 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:55:03.803921 1157887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:55:03.813709 1157887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0318 13:55:03.823096 1157887 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:55:03.823138 1157887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:55:03.832790 1157887 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 13:55:03.891459 1157887 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 13:55:03.891672 1157887 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 13:55:04.056923 1157887 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 13:55:04.057055 1157887 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 13:55:04.057197 1157887 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 13:55:04.312932 1157887 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 13:55:04.314955 1157887 out.go:204]   - Generating certificates and keys ...
	I0318 13:55:04.315063 1157887 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 13:55:04.315156 1157887 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 13:55:04.315286 1157887 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 13:55:04.315388 1157887 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 13:55:04.315490 1157887 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 13:55:04.315568 1157887 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 13:55:04.315668 1157887 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 13:55:04.315743 1157887 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 13:55:04.315844 1157887 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 13:55:04.315969 1157887 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 13:55:04.316034 1157887 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 13:55:04.316108 1157887 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 13:55:04.643155 1157887 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 13:55:04.927731 1157887 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 13:55:05.058875 1157887 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 13:55:05.221520 1157887 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 13:55:05.221985 1157887 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 13:55:05.224297 1157887 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 13:55:05.226200 1157887 out.go:204]   - Booting up control plane ...
	I0318 13:55:05.226326 1157887 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 13:55:05.226425 1157887 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 13:55:05.226520 1157887 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 13:55:05.244878 1157887 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 13:55:05.245461 1157887 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 13:55:05.245531 1157887 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 13:55:05.388215 1157887 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 13:55:11.393083 1157887 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.004356 seconds
	I0318 13:55:11.393511 1157887 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 13:55:11.412586 1157887 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 13:55:11.939563 1157887 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 13:55:11.939844 1157887 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-569210 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 13:55:12.457349 1157887 kubeadm.go:309] [bootstrap-token] Using token: z44dyw.tsw47dmn862zavdi
	I0318 13:55:12.458855 1157887 out.go:204]   - Configuring RBAC rules ...
	I0318 13:55:12.459037 1157887 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 13:55:12.466850 1157887 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 13:55:12.482822 1157887 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 13:55:12.488920 1157887 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 13:55:12.496947 1157887 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 13:55:12.507954 1157887 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 13:55:12.535337 1157887 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 13:55:12.763814 1157887 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 13:55:12.877248 1157887 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 13:55:12.878047 1157887 kubeadm.go:309] 
	I0318 13:55:12.878159 1157887 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 13:55:12.878183 1157887 kubeadm.go:309] 
	I0318 13:55:12.878291 1157887 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 13:55:12.878301 1157887 kubeadm.go:309] 
	I0318 13:55:12.878334 1157887 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 13:55:12.878432 1157887 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 13:55:12.878519 1157887 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 13:55:12.878531 1157887 kubeadm.go:309] 
	I0318 13:55:12.878603 1157887 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 13:55:12.878615 1157887 kubeadm.go:309] 
	I0318 13:55:12.878690 1157887 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 13:55:12.878703 1157887 kubeadm.go:309] 
	I0318 13:55:12.878762 1157887 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 13:55:12.878858 1157887 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 13:55:12.878974 1157887 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 13:55:12.878985 1157887 kubeadm.go:309] 
	I0318 13:55:12.879087 1157887 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 13:55:12.879164 1157887 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 13:55:12.879171 1157887 kubeadm.go:309] 
	I0318 13:55:12.879275 1157887 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token z44dyw.tsw47dmn862zavdi \
	I0318 13:55:12.879410 1157887 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a1344f0f3711f58fc1cd7b9626e9cee7b8515c38b8838e5f1a0d7979c7202ddf \
	I0318 13:55:12.879464 1157887 kubeadm.go:309] 	--control-plane 
	I0318 13:55:12.879484 1157887 kubeadm.go:309] 
	I0318 13:55:12.879576 1157887 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 13:55:12.879586 1157887 kubeadm.go:309] 
	I0318 13:55:12.879719 1157887 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token z44dyw.tsw47dmn862zavdi \
	I0318 13:55:12.879871 1157887 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a1344f0f3711f58fc1cd7b9626e9cee7b8515c38b8838e5f1a0d7979c7202ddf 
	I0318 13:55:12.883383 1157887 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 13:55:12.883432 1157887 cni.go:84] Creating CNI manager for ""
	I0318 13:55:12.883447 1157887 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:55:12.885248 1157887 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 13:55:12.886708 1157887 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 13:55:12.929444 1157887 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 13:55:13.043416 1157887 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 13:55:13.043541 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:13.043567 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-569210 minikube.k8s.io/updated_at=2024_03_18T13_55_13_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a minikube.k8s.io/name=default-k8s-diff-port-569210 minikube.k8s.io/primary=true
	I0318 13:55:13.064927 1157887 ops.go:34] apiserver oom_adj: -16
	I0318 13:55:13.286093 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:13.786780 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:14.286728 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:14.786442 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:15.287103 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:15.786443 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:16.287138 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:16.113672 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:55:16.113963 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:55:16.787069 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:17.286490 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:17.786317 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:18.286840 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:18.786872 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:19.286911 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:19.786554 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:20.286216 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:20.786282 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:21.286590 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:21.787103 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:22.286966 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:22.786928 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:23.286275 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:23.786464 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:24.286791 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:24.787028 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:24.938400 1157887 kubeadm.go:1107] duration metric: took 11.894943444s to wait for elevateKubeSystemPrivileges
	W0318 13:55:24.938440 1157887 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 13:55:24.938448 1157887 kubeadm.go:393] duration metric: took 5m12.933246555s to StartCluster
	I0318 13:55:24.938470 1157887 settings.go:142] acquiring lock: {Name:mk2d6b94ee5fa5f1dbbb15ba1d5560c3c0f78110 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:55:24.938621 1157887 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 13:55:24.940984 1157887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/kubeconfig: {Name:mk9c139f2702214315ee08dd7c5d02f739047458 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:55:24.941286 1157887 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.3 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 13:55:24.943151 1157887 out.go:177] * Verifying Kubernetes components...
	I0318 13:55:24.941329 1157887 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 13:55:24.941469 1157887 config.go:182] Loaded profile config "default-k8s-diff-port-569210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:55:24.944770 1157887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:55:24.944780 1157887 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-569210"
	I0318 13:55:24.944830 1157887 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-569210"
	W0318 13:55:24.944845 1157887 addons.go:243] addon storage-provisioner should already be in state true
	I0318 13:55:24.944846 1157887 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-569210"
	I0318 13:55:24.944851 1157887 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-569210"
	I0318 13:55:24.944880 1157887 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-569210"
	I0318 13:55:24.944888 1157887 host.go:66] Checking if "default-k8s-diff-port-569210" exists ...
	W0318 13:55:24.944897 1157887 addons.go:243] addon metrics-server should already be in state true
	I0318 13:55:24.944927 1157887 host.go:66] Checking if "default-k8s-diff-port-569210" exists ...
	I0318 13:55:24.944881 1157887 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-569210"
	I0318 13:55:24.945311 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:24.945350 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:24.945375 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:24.945400 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:24.945311 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:24.945460 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:24.963173 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42139
	I0318 13:55:24.963820 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:24.964695 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:55:24.964725 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:24.965120 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:24.965696 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:24.965735 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:24.965976 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43645
	I0318 13:55:24.966207 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43495
	I0318 13:55:24.966502 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:24.966598 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:24.967058 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:55:24.967062 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:55:24.967083 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:24.967100 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:24.967467 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:24.967603 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:24.967671 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetState
	I0318 13:55:24.968107 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:24.968146 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:24.971673 1157887 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-569210"
	W0318 13:55:24.971696 1157887 addons.go:243] addon default-storageclass should already be in state true
	I0318 13:55:24.971729 1157887 host.go:66] Checking if "default-k8s-diff-port-569210" exists ...
	I0318 13:55:24.972091 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:24.972129 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:24.986041 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42211
	I0318 13:55:24.986481 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:24.986989 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:55:24.987009 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:24.987352 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:24.987605 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44555
	I0318 13:55:24.987613 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetState
	I0318 13:55:24.988061 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:24.988481 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:55:24.988499 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:24.988904 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:24.989082 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetState
	I0318 13:55:24.989785 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:55:24.992033 1157887 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 13:55:24.990673 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:55:24.991225 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36687
	I0318 13:55:24.993532 1157887 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 13:55:24.993557 1157887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 13:55:24.993587 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:55:24.995449 1157887 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:55:24.994077 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:24.996749 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:55:24.997153 1157887 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 13:55:24.997171 1157887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 13:55:24.997191 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:55:24.997431 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:55:24.997463 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:55:24.997466 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:55:24.997665 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:55:24.997684 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:24.997746 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:55:24.998183 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:24.998273 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:55:24.998497 1157887 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa Username:docker}
	I0318 13:55:24.998701 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:24.998735 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:24.999951 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:55:25.000431 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:55:25.000454 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:55:25.000676 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:55:25.000865 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:55:25.001021 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:55:25.001160 1157887 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa Username:docker}
	I0318 13:55:25.016442 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32783
	I0318 13:55:25.016827 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:25.017300 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:55:25.017328 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:25.017686 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:25.017906 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetState
	I0318 13:55:25.019440 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:55:25.019694 1157887 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 13:55:25.019711 1157887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 13:55:25.019731 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:55:25.022079 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:55:25.022370 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:55:25.022398 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:55:25.022497 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:55:25.022645 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:55:25.022762 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:55:25.022937 1157887 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa Username:docker}
	I0318 13:55:25.188474 1157887 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:55:25.208092 1157887 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-569210" to be "Ready" ...
	I0318 13:55:25.218757 1157887 node_ready.go:49] node "default-k8s-diff-port-569210" has status "Ready":"True"
	I0318 13:55:25.218789 1157887 node_ready.go:38] duration metric: took 10.658955ms for node "default-k8s-diff-port-569210" to be "Ready" ...
	I0318 13:55:25.218829 1157887 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:55:25.224381 1157887 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:25.235938 1157887 pod_ready.go:92] pod "etcd-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:25.235962 1157887 pod_ready.go:81] duration metric: took 11.550686ms for pod "etcd-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:25.235971 1157887 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:25.242985 1157887 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:25.243014 1157887 pod_ready.go:81] duration metric: took 7.034818ms for pod "kube-apiserver-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:25.243027 1157887 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:25.255777 1157887 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:25.255801 1157887 pod_ready.go:81] duration metric: took 12.766918ms for pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:25.255811 1157887 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2pp8z" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:25.301824 1157887 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 13:55:25.301846 1157887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 13:55:25.330301 1157887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 13:55:25.348473 1157887 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 13:55:25.348500 1157887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 13:55:25.365746 1157887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 13:55:25.398074 1157887 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 13:55:25.398099 1157887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 13:55:25.423951 1157887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 13:55:27.292115 1157887 pod_ready.go:92] pod "kube-proxy-2pp8z" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:27.292202 1157887 pod_ready.go:81] duration metric: took 2.036383518s for pod "kube-proxy-2pp8z" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:27.292227 1157887 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:27.299705 1157887 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:27.299732 1157887 pod_ready.go:81] duration metric: took 7.486631ms for pod "kube-scheduler-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:27.299743 1157887 pod_ready.go:38] duration metric: took 2.08090143s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:55:27.299762 1157887 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:55:27.299824 1157887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:55:27.706241 1157887 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.375885124s)
	I0318 13:55:27.706314 1157887 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:27.706326 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .Close
	I0318 13:55:27.706330 1157887 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.340547601s)
	I0318 13:55:27.706377 1157887 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:27.706392 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .Close
	I0318 13:55:27.706630 1157887 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.282631636s)
	I0318 13:55:27.706900 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | Closing plugin on server side
	I0318 13:55:27.706828 1157887 api_server.go:72] duration metric: took 2.765497711s to wait for apiserver process to appear ...
	I0318 13:55:27.706940 1157887 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:55:27.706879 1157887 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:27.706979 1157887 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:27.706996 1157887 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:27.707024 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .Close
	I0318 13:55:27.706916 1157887 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:27.707088 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .Close
	I0318 13:55:27.706985 1157887 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0318 13:55:27.707343 1157887 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:27.707366 1157887 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:27.707372 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | Closing plugin on server side
	I0318 13:55:27.707405 1157887 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:27.707417 1157887 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:27.707426 1157887 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:27.707455 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .Close
	I0318 13:55:27.707682 1157887 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:27.707696 1157887 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:27.707706 1157887 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-569210"
	I0318 13:55:27.708614 1157887 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:27.708664 1157887 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:27.708694 1157887 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:27.708783 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .Close
	I0318 13:55:27.709092 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | Closing plugin on server side
	I0318 13:55:27.709151 1157887 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:27.709175 1157887 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:27.718110 1157887 api_server.go:279] https://192.168.61.3:8444/healthz returned 200:
	ok
	I0318 13:55:27.719497 1157887 api_server.go:141] control plane version: v1.28.4
	I0318 13:55:27.719518 1157887 api_server.go:131] duration metric: took 12.563372ms to wait for apiserver health ...
	I0318 13:55:27.719526 1157887 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 13:55:27.739882 1157887 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:27.739914 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .Close
	I0318 13:55:27.740263 1157887 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:27.740296 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | Closing plugin on server side
	I0318 13:55:27.740318 1157887 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:27.742102 1157887 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0318 13:55:27.368024 1157263 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (33.223901258s)
	I0318 13:55:27.368118 1157263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:55:27.388474 1157263 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:55:27.402749 1157263 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:55:27.417121 1157263 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:55:27.417184 1157263 kubeadm.go:156] found existing configuration files:
	
	I0318 13:55:27.417235 1157263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:55:27.429920 1157263 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:55:27.429997 1157263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:55:27.442468 1157263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:55:27.454842 1157263 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:55:27.454913 1157263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:55:27.467911 1157263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:55:27.480201 1157263 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:55:27.480272 1157263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:55:27.496430 1157263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:55:27.512020 1157263 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:55:27.512092 1157263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:55:27.528102 1157263 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 13:55:27.601072 1157263 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 13:55:27.601235 1157263 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 13:55:27.796445 1157263 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 13:55:27.796574 1157263 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 13:55:27.796730 1157263 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 13:55:28.079026 1157263 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 13:55:27.743429 1157887 addons.go:505] duration metric: took 2.802098895s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I0318 13:55:27.744694 1157887 system_pods.go:59] 9 kube-system pods found
	I0318 13:55:27.744727 1157887 system_pods.go:61] "coredns-5dd5756b68-j5qxm" [164d2cc3-0891-4fcd-81bd-34d7cf0c691c] Running
	I0318 13:55:27.744733 1157887 system_pods.go:61] "coredns-5dd5756b68-xdcht" [bf264558-6c11-44c9-82d6-ea23aea43dc9] Running
	I0318 13:55:27.744738 1157887 system_pods.go:61] "etcd-default-k8s-diff-port-569210" [8d51c0c6-6005-4f76-917c-20f07b73742f] Running
	I0318 13:55:27.744744 1157887 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-569210" [31a8160d-14db-4383-b833-a8bc3f5990ba] Running
	I0318 13:55:27.744750 1157887 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-569210" [173e4d84-8dc2-47fc-9c4d-ed613d180813] Running
	I0318 13:55:27.744756 1157887 system_pods.go:61] "kube-proxy-2pp8z" [912b3f56-3df6-485f-a01a-60801b867b86] Running
	I0318 13:55:27.744764 1157887 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-569210" [1ee4e8f8-3fad-45a8-be35-25a879aaaa7b] Running
	I0318 13:55:27.744777 1157887 system_pods.go:61] "metrics-server-57f55c9bc5-ng9ww" [4c8209dc-b6ba-427d-ba32-0da4993b0902] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:55:27.744783 1157887 system_pods.go:61] "storage-provisioner" [f0dfdeb1-f567-41df-98c3-7987f0fd7b2b] Pending
	I0318 13:55:27.744797 1157887 system_pods.go:74] duration metric: took 25.264322ms to wait for pod list to return data ...
	I0318 13:55:27.744810 1157887 default_sa.go:34] waiting for default service account to be created ...
	I0318 13:55:27.755398 1157887 default_sa.go:45] found service account: "default"
	I0318 13:55:27.755427 1157887 default_sa.go:55] duration metric: took 10.607153ms for default service account to be created ...
	I0318 13:55:27.755439 1157887 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 13:55:27.815477 1157887 system_pods.go:86] 9 kube-system pods found
	I0318 13:55:27.815507 1157887 system_pods.go:89] "coredns-5dd5756b68-j5qxm" [164d2cc3-0891-4fcd-81bd-34d7cf0c691c] Running
	I0318 13:55:27.815512 1157887 system_pods.go:89] "coredns-5dd5756b68-xdcht" [bf264558-6c11-44c9-82d6-ea23aea43dc9] Running
	I0318 13:55:27.815517 1157887 system_pods.go:89] "etcd-default-k8s-diff-port-569210" [8d51c0c6-6005-4f76-917c-20f07b73742f] Running
	I0318 13:55:27.815521 1157887 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-569210" [31a8160d-14db-4383-b833-a8bc3f5990ba] Running
	I0318 13:55:27.815526 1157887 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-569210" [173e4d84-8dc2-47fc-9c4d-ed613d180813] Running
	I0318 13:55:27.815529 1157887 system_pods.go:89] "kube-proxy-2pp8z" [912b3f56-3df6-485f-a01a-60801b867b86] Running
	I0318 13:55:27.815533 1157887 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-569210" [1ee4e8f8-3fad-45a8-be35-25a879aaaa7b] Running
	I0318 13:55:27.815540 1157887 system_pods.go:89] "metrics-server-57f55c9bc5-ng9ww" [4c8209dc-b6ba-427d-ba32-0da4993b0902] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:55:27.815546 1157887 system_pods.go:89] "storage-provisioner" [f0dfdeb1-f567-41df-98c3-7987f0fd7b2b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 13:55:27.815557 1157887 system_pods.go:126] duration metric: took 60.111832ms to wait for k8s-apps to be running ...
	I0318 13:55:27.815566 1157887 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 13:55:27.815610 1157887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:55:27.834266 1157887 system_svc.go:56] duration metric: took 18.687554ms WaitForService to wait for kubelet
	I0318 13:55:27.834304 1157887 kubeadm.go:576] duration metric: took 2.892974502s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:55:27.834345 1157887 node_conditions.go:102] verifying NodePressure condition ...
	I0318 13:55:28.013031 1157887 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:55:28.013095 1157887 node_conditions.go:123] node cpu capacity is 2
	I0318 13:55:28.013148 1157887 node_conditions.go:105] duration metric: took 178.79502ms to run NodePressure ...
	I0318 13:55:28.013169 1157887 start.go:240] waiting for startup goroutines ...
	I0318 13:55:28.013181 1157887 start.go:245] waiting for cluster config update ...
	I0318 13:55:28.013199 1157887 start.go:254] writing updated cluster config ...
	I0318 13:55:28.013519 1157887 ssh_runner.go:195] Run: rm -f paused
	I0318 13:55:28.092810 1157887 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 13:55:28.095783 1157887 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-569210" cluster and "default" namespace by default
	I0318 13:55:28.080939 1157263 out.go:204]   - Generating certificates and keys ...
	I0318 13:55:28.081056 1157263 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 13:55:28.081145 1157263 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 13:55:28.081249 1157263 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 13:55:28.082078 1157263 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 13:55:28.082860 1157263 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 13:55:28.083397 1157263 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 13:55:28.084597 1157263 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 13:55:28.084941 1157263 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 13:55:28.085603 1157263 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 13:55:28.086461 1157263 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 13:55:28.087265 1157263 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 13:55:28.087343 1157263 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 13:55:28.348996 1157263 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 13:55:28.516513 1157263 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 13:55:28.585513 1157263 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 13:55:28.817150 1157263 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 13:55:28.817900 1157263 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 13:55:28.820280 1157263 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 13:55:28.822114 1157263 out.go:204]   - Booting up control plane ...
	I0318 13:55:28.822217 1157263 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 13:55:28.822811 1157263 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 13:55:28.825310 1157263 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 13:55:28.845906 1157263 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 13:55:28.847013 1157263 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 13:55:28.847069 1157263 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 13:55:28.992421 1157263 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 13:55:35.495384 1157263 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.502688 seconds
	I0318 13:55:35.495578 1157263 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 13:55:35.517088 1157263 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 13:55:36.049915 1157263 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 13:55:36.050163 1157263 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-173036 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 13:55:36.571450 1157263 kubeadm.go:309] [bootstrap-token] Using token: a1fi6l.v36l7wrnalucsepl
	I0318 13:55:36.573263 1157263 out.go:204]   - Configuring RBAC rules ...
	I0318 13:55:36.573448 1157263 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 13:55:36.581322 1157263 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 13:55:36.594853 1157263 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 13:55:36.598538 1157263 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 13:55:36.602430 1157263 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 13:55:36.605534 1157263 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 13:55:36.621332 1157263 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 13:55:36.865518 1157263 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 13:55:36.990015 1157263 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 13:55:36.991079 1157263 kubeadm.go:309] 
	I0318 13:55:36.991168 1157263 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 13:55:36.991181 1157263 kubeadm.go:309] 
	I0318 13:55:36.991288 1157263 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 13:55:36.991299 1157263 kubeadm.go:309] 
	I0318 13:55:36.991320 1157263 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 13:55:36.991395 1157263 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 13:55:36.991475 1157263 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 13:55:36.991494 1157263 kubeadm.go:309] 
	I0318 13:55:36.991572 1157263 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 13:55:36.991581 1157263 kubeadm.go:309] 
	I0318 13:55:36.991646 1157263 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 13:55:36.991658 1157263 kubeadm.go:309] 
	I0318 13:55:36.991737 1157263 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 13:55:36.991839 1157263 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 13:55:36.991954 1157263 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 13:55:36.991966 1157263 kubeadm.go:309] 
	I0318 13:55:36.992073 1157263 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 13:55:36.992174 1157263 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 13:55:36.992186 1157263 kubeadm.go:309] 
	I0318 13:55:36.992304 1157263 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token a1fi6l.v36l7wrnalucsepl \
	I0318 13:55:36.992477 1157263 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a1344f0f3711f58fc1cd7b9626e9cee7b8515c38b8838e5f1a0d7979c7202ddf \
	I0318 13:55:36.992522 1157263 kubeadm.go:309] 	--control-plane 
	I0318 13:55:36.992532 1157263 kubeadm.go:309] 
	I0318 13:55:36.992642 1157263 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 13:55:36.992656 1157263 kubeadm.go:309] 
	I0318 13:55:36.992769 1157263 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token a1fi6l.v36l7wrnalucsepl \
	I0318 13:55:36.992922 1157263 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a1344f0f3711f58fc1cd7b9626e9cee7b8515c38b8838e5f1a0d7979c7202ddf 
	I0318 13:55:36.994542 1157263 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 13:55:36.994648 1157263 cni.go:84] Creating CNI manager for ""
	I0318 13:55:36.994660 1157263 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:55:36.996526 1157263 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 13:55:36.997929 1157263 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 13:55:37.047757 1157263 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 13:55:37.075078 1157263 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 13:55:37.075167 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:37.075199 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-173036 minikube.k8s.io/updated_at=2024_03_18T13_55_37_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a minikube.k8s.io/name=embed-certs-173036 minikube.k8s.io/primary=true
	I0318 13:55:37.236857 1157263 ops.go:34] apiserver oom_adj: -16
	I0318 13:55:37.422453 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:37.922622 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:38.423527 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:38.922743 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:39.422721 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:39.923438 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:40.422599 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:40.923170 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:41.422812 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:41.922526 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:42.422594 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:42.922835 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:43.423479 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:43.923114 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:44.422672 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:44.922883 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:45.422863 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:45.922770 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:46.423473 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:46.923125 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:47.423378 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:47.923366 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:48.422566 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:48.923231 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:49.422505 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:49.554542 1157263 kubeadm.go:1107] duration metric: took 12.479441091s to wait for elevateKubeSystemPrivileges
	W0318 13:55:49.554590 1157263 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 13:55:49.554602 1157263 kubeadm.go:393] duration metric: took 5m13.226983757s to StartCluster
	I0318 13:55:49.554626 1157263 settings.go:142] acquiring lock: {Name:mk2d6b94ee5fa5f1dbbb15ba1d5560c3c0f78110 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:55:49.554778 1157263 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 13:55:49.556962 1157263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/kubeconfig: {Name:mk9c139f2702214315ee08dd7c5d02f739047458 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:55:49.557273 1157263 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.191 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 13:55:49.558774 1157263 out.go:177] * Verifying Kubernetes components...
	I0318 13:55:49.557321 1157263 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 13:55:49.557488 1157263 config.go:182] Loaded profile config "embed-certs-173036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:55:49.560195 1157263 addons.go:69] Setting default-storageclass=true in profile "embed-certs-173036"
	I0318 13:55:49.560201 1157263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:55:49.560211 1157263 addons.go:69] Setting metrics-server=true in profile "embed-certs-173036"
	I0318 13:55:49.560237 1157263 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-173036"
	I0318 13:55:49.560247 1157263 addons.go:234] Setting addon metrics-server=true in "embed-certs-173036"
	W0318 13:55:49.560254 1157263 addons.go:243] addon metrics-server should already be in state true
	I0318 13:55:49.560201 1157263 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-173036"
	I0318 13:55:49.560282 1157263 host.go:66] Checking if "embed-certs-173036" exists ...
	I0318 13:55:49.560302 1157263 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-173036"
	W0318 13:55:49.560317 1157263 addons.go:243] addon storage-provisioner should already be in state true
	I0318 13:55:49.560388 1157263 host.go:66] Checking if "embed-certs-173036" exists ...
	I0318 13:55:49.560644 1157263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:49.560676 1157263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:49.560678 1157263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:49.560716 1157263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:49.560777 1157263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:49.560803 1157263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:49.577682 1157263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32889
	I0318 13:55:49.577714 1157263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38841
	I0318 13:55:49.578101 1157263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46617
	I0318 13:55:49.578261 1157263 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:49.578285 1157263 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:49.578493 1157263 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:49.578880 1157263 main.go:141] libmachine: Using API Version  1
	I0318 13:55:49.578907 1157263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:49.578882 1157263 main.go:141] libmachine: Using API Version  1
	I0318 13:55:49.578923 1157263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:49.579013 1157263 main.go:141] libmachine: Using API Version  1
	I0318 13:55:49.579036 1157263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:49.579302 1157263 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:49.579333 1157263 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:49.579538 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetState
	I0318 13:55:49.579598 1157263 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:49.579914 1157263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:49.579955 1157263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:49.580203 1157263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:49.580238 1157263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:49.583587 1157263 addons.go:234] Setting addon default-storageclass=true in "embed-certs-173036"
	W0318 13:55:49.583610 1157263 addons.go:243] addon default-storageclass should already be in state true
	I0318 13:55:49.583641 1157263 host.go:66] Checking if "embed-certs-173036" exists ...
	I0318 13:55:49.584009 1157263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:49.584040 1157263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:49.596862 1157263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46015
	I0318 13:55:49.597356 1157263 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:49.597859 1157263 main.go:141] libmachine: Using API Version  1
	I0318 13:55:49.598026 1157263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:49.598110 1157263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38169
	I0318 13:55:49.598635 1157263 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:49.599310 1157263 main.go:141] libmachine: Using API Version  1
	I0318 13:55:49.599331 1157263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:49.599405 1157263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36747
	I0318 13:55:49.599732 1157263 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:49.599874 1157263 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:49.600120 1157263 main.go:141] libmachine: Using API Version  1
	I0318 13:55:49.600135 1157263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:49.600197 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetState
	I0318 13:55:49.600439 1157263 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:49.601019 1157263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:49.601052 1157263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:49.602172 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:55:49.604115 1157263 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:55:49.606034 1157263 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 13:55:49.606049 1157263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 13:55:49.606065 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:55:49.603277 1157263 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:49.606323 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetState
	I0318 13:55:49.608600 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:55:49.610213 1157263 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 13:55:49.611511 1157263 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 13:55:49.611531 1157263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 13:55:49.611545 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:55:49.609758 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:55:49.611598 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:55:49.611613 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:55:49.610550 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:55:49.611727 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:55:49.611868 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:55:49.611991 1157263 sshutil.go:53] new ssh client: &{IP:192.168.50.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa Username:docker}
	I0318 13:55:49.614689 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:55:49.615105 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:55:49.615322 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:55:49.615403 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:55:49.615531 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:55:49.615672 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:55:49.615773 1157263 sshutil.go:53] new ssh client: &{IP:192.168.50.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa Username:docker}
	I0318 13:55:49.620257 1157263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41729
	I0318 13:55:49.620653 1157263 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:49.621225 1157263 main.go:141] libmachine: Using API Version  1
	I0318 13:55:49.621243 1157263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:49.621610 1157263 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:49.621790 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetState
	I0318 13:55:49.623303 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:55:49.623566 1157263 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 13:55:49.623580 1157263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 13:55:49.623594 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:55:49.626325 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:55:49.626733 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:55:49.626755 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:55:49.627028 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:55:49.627196 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:55:49.627335 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:55:49.627441 1157263 sshutil.go:53] new ssh client: &{IP:192.168.50.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa Username:docker}
	I0318 13:55:49.791524 1157263 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:55:49.847829 1157263 node_ready.go:35] waiting up to 6m0s for node "embed-certs-173036" to be "Ready" ...
	I0318 13:55:49.860595 1157263 node_ready.go:49] node "embed-certs-173036" has status "Ready":"True"
	I0318 13:55:49.860621 1157263 node_ready.go:38] duration metric: took 12.757412ms for node "embed-certs-173036" to be "Ready" ...
	I0318 13:55:49.860631 1157263 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:55:49.870524 1157263 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ft594" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:49.917170 1157263 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 13:55:49.917197 1157263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 13:55:49.965845 1157263 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 13:55:49.965871 1157263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 13:55:49.969600 1157263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 13:55:49.982887 1157263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 13:55:50.023768 1157263 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 13:55:50.023795 1157263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 13:55:50.139120 1157263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 13:55:51.877589 1157263 pod_ready.go:92] pod "coredns-5dd5756b68-ft594" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:51.877618 1157263 pod_ready.go:81] duration metric: took 2.007066644s for pod "coredns-5dd5756b68-ft594" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:51.877634 1157263 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-p6dw8" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.007908 1157263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.02498147s)
	I0318 13:55:52.007966 1157263 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:52.007979 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .Close
	I0318 13:55:52.008318 1157263 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:52.008378 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | Closing plugin on server side
	I0318 13:55:52.008383 1157263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:52.008408 1157263 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:52.008427 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .Close
	I0318 13:55:52.008713 1157263 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:52.008827 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | Closing plugin on server side
	I0318 13:55:52.008853 1157263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:52.009491 1157263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.039858476s)
	I0318 13:55:52.009567 1157263 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:52.009595 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .Close
	I0318 13:55:52.010239 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | Closing plugin on server side
	I0318 13:55:52.010242 1157263 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:52.010276 1157263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:52.010289 1157263 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:52.010301 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .Close
	I0318 13:55:52.010553 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | Closing plugin on server side
	I0318 13:55:52.010568 1157263 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:52.010578 1157263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:52.026035 1157263 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:52.026056 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .Close
	I0318 13:55:52.026364 1157263 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:52.026385 1157263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:52.202596 1157263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.063427726s)
	I0318 13:55:52.202663 1157263 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:52.202686 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .Close
	I0318 13:55:52.202999 1157263 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:52.203021 1157263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:52.203032 1157263 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:52.203040 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .Close
	I0318 13:55:52.203321 1157263 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:52.203338 1157263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:52.203352 1157263 addons.go:470] Verifying addon metrics-server=true in "embed-certs-173036"
	I0318 13:55:52.205372 1157263 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0318 13:55:52.207184 1157263 addons.go:505] duration metric: took 2.649872416s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0318 13:55:52.391839 1157263 pod_ready.go:92] pod "coredns-5dd5756b68-p6dw8" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:52.391878 1157263 pod_ready.go:81] duration metric: took 514.235543ms for pod "coredns-5dd5756b68-p6dw8" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.391891 1157263 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.398044 1157263 pod_ready.go:92] pod "etcd-embed-certs-173036" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:52.398075 1157263 pod_ready.go:81] duration metric: took 6.176672ms for pod "etcd-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.398091 1157263 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.403790 1157263 pod_ready.go:92] pod "kube-apiserver-embed-certs-173036" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:52.403809 1157263 pod_ready.go:81] duration metric: took 5.70927ms for pod "kube-apiserver-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.403817 1157263 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.414956 1157263 pod_ready.go:92] pod "kube-controller-manager-embed-certs-173036" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:52.414976 1157263 pod_ready.go:81] duration metric: took 11.153442ms for pod "kube-controller-manager-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.414986 1157263 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lp9mc" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.674125 1157263 pod_ready.go:92] pod "kube-proxy-lp9mc" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:52.674151 1157263 pod_ready.go:81] duration metric: took 259.158776ms for pod "kube-proxy-lp9mc" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.674160 1157263 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:53.075385 1157263 pod_ready.go:92] pod "kube-scheduler-embed-certs-173036" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:53.075420 1157263 pod_ready.go:81] duration metric: took 401.251175ms for pod "kube-scheduler-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:53.075432 1157263 pod_ready.go:38] duration metric: took 3.214790175s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
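	The pod_ready waits above boil down to fetching each pod and checking its Ready condition. A minimal client-go sketch of that predicate, assuming a kubeconfig on disk (the path below is illustrative, not the one minikube actually wrote):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the named pod has the Ready condition set to True,
// which is the check behind the pod_ready.go:92 lines in the log above.
func podIsReady(clientset *kubernetes.Clientset, namespace, name string) (bool, error) {
	pod, err := clientset.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Illustrative kubeconfig path; minikube writes the real one for the profile.
	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ready, err := podIsReady(clientset, "kube-system", "etcd-embed-certs-173036")
	fmt.Println(ready, err)
}
```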
	I0318 13:55:53.075452 1157263 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:55:53.075523 1157263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:55:53.092916 1157263 api_server.go:72] duration metric: took 3.53560403s to wait for apiserver process to appear ...
	I0318 13:55:53.092948 1157263 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:55:53.093027 1157263 api_server.go:253] Checking apiserver healthz at https://192.168.50.191:8443/healthz ...
	I0318 13:55:53.098715 1157263 api_server.go:279] https://192.168.50.191:8443/healthz returned 200:
	ok
	I0318 13:55:53.100073 1157263 api_server.go:141] control plane version: v1.28.4
	I0318 13:55:53.100102 1157263 api_server.go:131] duration metric: took 7.134408ms to wait for apiserver health ...
	I0318 13:55:53.100113 1157263 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 13:55:53.278961 1157263 system_pods.go:59] 9 kube-system pods found
	I0318 13:55:53.278993 1157263 system_pods.go:61] "coredns-5dd5756b68-ft594" [46e6863a-0b5e-434e-b13c-d33e9ed15007] Running
	I0318 13:55:53.278998 1157263 system_pods.go:61] "coredns-5dd5756b68-p6dw8" [c03d9bbe-1493-44a4-be19-1e387ff6eaef] Running
	I0318 13:55:53.279002 1157263 system_pods.go:61] "etcd-embed-certs-173036" [0351a0a6-7bf0-49b7-b767-b1009ea8f8b3] Running
	I0318 13:55:53.279005 1157263 system_pods.go:61] "kube-apiserver-embed-certs-173036" [d045c63b-ff93-4ebc-a727-486fbad1d1b6] Running
	I0318 13:55:53.279010 1157263 system_pods.go:61] "kube-controller-manager-embed-certs-173036" [77925f6c-f839-44ce-8438-0b2ff22eb538] Running
	I0318 13:55:53.279013 1157263 system_pods.go:61] "kube-proxy-lp9mc" [4d2d1ef6-fb3b-4910-9e70-401dfa0c47e0] Running
	I0318 13:55:53.279017 1157263 system_pods.go:61] "kube-scheduler-embed-certs-173036" [a63fa49c-e09a-43ef-b0a2-f778c256c0ab] Running
	I0318 13:55:53.279023 1157263 system_pods.go:61] "metrics-server-57f55c9bc5-vzv79" [1fc71314-b3e7-4113-b254-557ec39eef43] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:55:53.279026 1157263 system_pods.go:61] "storage-provisioner" [a37883b5-9db5-467e-9b91-40f6ea69c18e] Running
	I0318 13:55:53.279037 1157263 system_pods.go:74] duration metric: took 178.915393ms to wait for pod list to return data ...
	I0318 13:55:53.279047 1157263 default_sa.go:34] waiting for default service account to be created ...
	I0318 13:55:53.475094 1157263 default_sa.go:45] found service account: "default"
	I0318 13:55:53.475123 1157263 default_sa.go:55] duration metric: took 196.069593ms for default service account to be created ...
	I0318 13:55:53.475133 1157263 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 13:55:53.678384 1157263 system_pods.go:86] 9 kube-system pods found
	I0318 13:55:53.678413 1157263 system_pods.go:89] "coredns-5dd5756b68-ft594" [46e6863a-0b5e-434e-b13c-d33e9ed15007] Running
	I0318 13:55:53.678418 1157263 system_pods.go:89] "coredns-5dd5756b68-p6dw8" [c03d9bbe-1493-44a4-be19-1e387ff6eaef] Running
	I0318 13:55:53.678422 1157263 system_pods.go:89] "etcd-embed-certs-173036" [0351a0a6-7bf0-49b7-b767-b1009ea8f8b3] Running
	I0318 13:55:53.678427 1157263 system_pods.go:89] "kube-apiserver-embed-certs-173036" [d045c63b-ff93-4ebc-a727-486fbad1d1b6] Running
	I0318 13:55:53.678431 1157263 system_pods.go:89] "kube-controller-manager-embed-certs-173036" [77925f6c-f839-44ce-8438-0b2ff22eb538] Running
	I0318 13:55:53.678436 1157263 system_pods.go:89] "kube-proxy-lp9mc" [4d2d1ef6-fb3b-4910-9e70-401dfa0c47e0] Running
	I0318 13:55:53.678439 1157263 system_pods.go:89] "kube-scheduler-embed-certs-173036" [a63fa49c-e09a-43ef-b0a2-f778c256c0ab] Running
	I0318 13:55:53.678447 1157263 system_pods.go:89] "metrics-server-57f55c9bc5-vzv79" [1fc71314-b3e7-4113-b254-557ec39eef43] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:55:53.678455 1157263 system_pods.go:89] "storage-provisioner" [a37883b5-9db5-467e-9b91-40f6ea69c18e] Running
	I0318 13:55:53.678464 1157263 system_pods.go:126] duration metric: took 203.32588ms to wait for k8s-apps to be running ...
	I0318 13:55:53.678473 1157263 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 13:55:53.678531 1157263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:55:53.698244 1157263 system_svc.go:56] duration metric: took 19.758793ms WaitForService to wait for kubelet
	I0318 13:55:53.698279 1157263 kubeadm.go:576] duration metric: took 4.140974066s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:55:53.698307 1157263 node_conditions.go:102] verifying NodePressure condition ...
	I0318 13:55:53.876137 1157263 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:55:53.876162 1157263 node_conditions.go:123] node cpu capacity is 2
	I0318 13:55:53.876173 1157263 node_conditions.go:105] duration metric: took 177.861272ms to run NodePressure ...
	I0318 13:55:53.876184 1157263 start.go:240] waiting for startup goroutines ...
	I0318 13:55:53.876191 1157263 start.go:245] waiting for cluster config update ...
	I0318 13:55:53.876202 1157263 start.go:254] writing updated cluster config ...
	I0318 13:55:53.876907 1157263 ssh_runner.go:195] Run: rm -f paused
	I0318 13:55:53.931596 1157263 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 13:55:53.933499 1157263 out.go:177] * Done! kubectl is now configured to use "embed-certs-173036" cluster and "default" namespace by default
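	The apiserver wait above (api_server.go) is essentially a poll of the /healthz endpoint until it answers 200/"ok" or a deadline expires. A minimal sketch of such a loop, using the URL reported in the log; skipping TLS verification here is a simplification, a real client would trust the cluster CA instead:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns "ok"
// or the timeout elapses, mirroring the "waiting for apiserver healthz status"
// step in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Simplification: the cluster serves a self-signed certificate.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.191:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```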
	I0318 13:55:56.115397 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:55:56.115674 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:55:56.115714 1157708 kubeadm.go:309] 
	I0318 13:55:56.115782 1157708 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 13:55:56.115840 1157708 kubeadm.go:309] 		timed out waiting for the condition
	I0318 13:55:56.115849 1157708 kubeadm.go:309] 
	I0318 13:55:56.115908 1157708 kubeadm.go:309] 	This error is likely caused by:
	I0318 13:55:56.115979 1157708 kubeadm.go:309] 		- The kubelet is not running
	I0318 13:55:56.116102 1157708 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 13:55:56.116112 1157708 kubeadm.go:309] 
	I0318 13:55:56.116242 1157708 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 13:55:56.116289 1157708 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 13:55:56.116349 1157708 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 13:55:56.116370 1157708 kubeadm.go:309] 
	I0318 13:55:56.116506 1157708 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 13:55:56.116645 1157708 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 13:55:56.116665 1157708 kubeadm.go:309] 
	I0318 13:55:56.116804 1157708 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 13:55:56.116897 1157708 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 13:55:56.117005 1157708 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 13:55:56.117094 1157708 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 13:55:56.117110 1157708 kubeadm.go:309] 
	I0318 13:55:56.117680 1157708 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 13:55:56.117813 1157708 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 13:55:56.117934 1157708 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0318 13:55:56.118052 1157708 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
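	The repeated [kubelet-check] failures in the output above come from kubeadm probing the kubelet's local healthz port (10248) and getting connection refused, i.e. no kubelet process is listening. A small sketch of the same probe, run outside kubeadm, assuming the default kubelet healthz address quoted in the log:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeKubelet performs the same check as kubeadm's [kubelet-check]:
// GET http://localhost:10248/healthz. "connection refused" means the kubelet
// is not listening, which matches the failure captured above.
func probeKubelet() {
	client := &http.Client{Timeout: 3 * time.Second}
	resp, err := client.Get("http://localhost:10248/healthz")
	if err != nil {
		fmt.Println("kubelet healthz failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("kubelet healthz: %d %s\n", resp.StatusCode, body)
}

func main() { probeKubelet() }
```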
	
	I0318 13:55:56.118124 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 13:55:57.920938 1157708 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.802776126s)
	I0318 13:55:57.921031 1157708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:55:57.939226 1157708 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:55:57.952304 1157708 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:55:57.952342 1157708 kubeadm.go:156] found existing configuration files:
	
	I0318 13:55:57.952404 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:55:57.964632 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:55:57.964695 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:55:57.977306 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:55:57.989728 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:55:57.989790 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:55:58.001661 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:55:58.013078 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:55:58.013160 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:55:58.024891 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:55:58.036171 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:55:58.036225 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
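	The four grep/rm pairs above are a stale-config cleanup: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed before kubeadm init is retried. A minimal local sketch of that logic, reading the files directly rather than over SSH (an illustrative simplification of what the log does remotely):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleKubeconfigs removes kubeconfig files that do not mention the
// expected control-plane endpoint, mirroring the grep/rm sequence in the log.
// Missing files are skipped, which is why grep exits with status 2 above.
func cleanStaleKubeconfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil {
			fmt.Printf("skip %s: %v\n", p, err)
			continue
		}
		if !strings.Contains(string(data), endpoint) {
			fmt.Printf("%q not found in %s - removing\n", endpoint, p)
			_ = os.Remove(p)
		}
	}
}

func main() {
	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
```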
	I0318 13:55:58.048156 1157708 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 13:55:58.128356 1157708 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 13:55:58.128445 1157708 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 13:55:58.297704 1157708 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 13:55:58.297897 1157708 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 13:55:58.298048 1157708 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 13:55:58.515521 1157708 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 13:55:58.517569 1157708 out.go:204]   - Generating certificates and keys ...
	I0318 13:55:58.517679 1157708 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 13:55:58.517760 1157708 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 13:55:58.517830 1157708 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 13:55:58.517908 1157708 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 13:55:58.517980 1157708 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 13:55:58.518047 1157708 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 13:55:58.518280 1157708 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 13:55:58.519078 1157708 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 13:55:58.520081 1157708 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 13:55:58.521268 1157708 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 13:55:58.521861 1157708 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 13:55:58.521936 1157708 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 13:55:58.762418 1157708 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 13:55:58.999746 1157708 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 13:55:59.214448 1157708 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 13:55:59.402662 1157708 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 13:55:59.421555 1157708 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 13:55:59.423151 1157708 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 13:55:59.423233 1157708 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 13:55:59.560412 1157708 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 13:55:59.563125 1157708 out.go:204]   - Booting up control plane ...
	I0318 13:55:59.563274 1157708 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 13:55:59.571364 1157708 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 13:55:59.572936 1157708 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 13:55:59.573987 1157708 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 13:55:59.586689 1157708 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 13:56:39.588627 1157708 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 13:56:39.588942 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:56:39.589128 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:56:44.589564 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:56:44.589852 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:56:54.590311 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:56:54.590619 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:57:14.591571 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:57:14.591866 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:57:54.594170 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:57:54.594433 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:57:54.594448 1157708 kubeadm.go:309] 
	I0318 13:57:54.594490 1157708 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 13:57:54.594540 1157708 kubeadm.go:309] 		timed out waiting for the condition
	I0318 13:57:54.594549 1157708 kubeadm.go:309] 
	I0318 13:57:54.594594 1157708 kubeadm.go:309] 	This error is likely caused by:
	I0318 13:57:54.594641 1157708 kubeadm.go:309] 		- The kubelet is not running
	I0318 13:57:54.594800 1157708 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 13:57:54.594811 1157708 kubeadm.go:309] 
	I0318 13:57:54.594950 1157708 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 13:57:54.595000 1157708 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 13:57:54.595046 1157708 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 13:57:54.595056 1157708 kubeadm.go:309] 
	I0318 13:57:54.595163 1157708 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 13:57:54.595297 1157708 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 13:57:54.595312 1157708 kubeadm.go:309] 
	I0318 13:57:54.595471 1157708 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 13:57:54.595605 1157708 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 13:57:54.595716 1157708 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 13:57:54.595812 1157708 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 13:57:54.595827 1157708 kubeadm.go:309] 
	I0318 13:57:54.596636 1157708 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 13:57:54.596805 1157708 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 13:57:54.596972 1157708 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0318 13:57:54.597014 1157708 kubeadm.go:393] duration metric: took 8m1.551231902s to StartCluster
	I0318 13:57:54.597076 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:57:54.597174 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:57:54.649451 1157708 cri.go:89] found id: ""
	I0318 13:57:54.649484 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.649496 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:57:54.649506 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:57:54.649577 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:57:54.692278 1157708 cri.go:89] found id: ""
	I0318 13:57:54.692317 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.692339 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:57:54.692349 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:57:54.692427 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:57:54.731034 1157708 cri.go:89] found id: ""
	I0318 13:57:54.731062 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.731071 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:57:54.731077 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:57:54.731135 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:57:54.769883 1157708 cri.go:89] found id: ""
	I0318 13:57:54.769913 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.769923 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:57:54.769931 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:57:54.769996 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:57:54.808620 1157708 cri.go:89] found id: ""
	I0318 13:57:54.808648 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.808656 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:57:54.808661 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:57:54.808715 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:57:54.849207 1157708 cri.go:89] found id: ""
	I0318 13:57:54.849245 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.849256 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:57:54.849264 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:57:54.849334 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:57:54.918479 1157708 cri.go:89] found id: ""
	I0318 13:57:54.918508 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.918520 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:57:54.918528 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:57:54.918597 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:57:54.958828 1157708 cri.go:89] found id: ""
	I0318 13:57:54.958861 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.958871 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
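	After the init failure, minikube enumerates CRI containers per control-plane component with `crictl ps -a --quiet --name=<component>`; every query above returns an empty list, meaning no component container was ever started. A small sketch of the same enumeration via os/exec (the crictl invocation is the one shown in the log; running it needs root and a reachable CRI-O socket):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listCRIContainers runs the query the log shows for each component:
// sudo crictl ps -a --quiet --name=<name>. An empty result corresponds to the
// `found id: ""` / `0 containers` lines above.
func listCRIContainers(names []string) {
	for _, name := range names {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%s: %d containers %v\n", name, len(ids), ids)
	}
}

func main() {
	listCRIContainers([]string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	})
}
```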
	I0318 13:57:54.958887 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:57:54.958906 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:57:55.078045 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:57:55.078092 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:57:55.123043 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:57:55.123077 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:57:55.180480 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:57:55.180518 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:57:55.197264 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:57:55.197316 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:57:55.291264 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0318 13:57:55.291325 1157708 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0318 13:57:55.291395 1157708 out.go:239] * 
	W0318 13:57:55.291477 1157708 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 13:57:55.291502 1157708 out.go:239] * 
	W0318 13:57:55.292511 1157708 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:57:55.295566 1157708 out.go:177] 
	W0318 13:57:55.296840 1157708 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 13:57:55.296903 1157708 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0318 13:57:55.296941 1157708 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0318 13:57:55.298417 1157708 out.go:177] 
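	The closing suggestion points at the usual next steps: inspect the kubelet journal, then retry the start with the cgroup driver pinned to systemd. A sketch chaining the two suggested commands via os/exec; the extra-config flag is the one quoted in the suggestion, the profile name is taken from the CRI-O log below, and whether this actually resolves the failure would still need to be confirmed against the journal output:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run executes a command and streams its output, mirroring the manual
// troubleshooting steps suggested at the end of the log.
func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	// 1. Inspect the kubelet journal, as suggested above.
	if err := run("journalctl", "-xeu", "kubelet", "--no-pager", "-n", "100"); err != nil {
		fmt.Println("journalctl failed:", err)
	}
	// 2. Retry the start with the cgroup driver set explicitly, using the flag
	//    quoted in the suggestion (profile name assumed from the log below).
	if err := run("minikube", "start", "-p", "old-k8s-version-909137",
		"--extra-config=kubelet.cgroup-driver=systemd"); err != nil {
		fmt.Println("minikube start failed:", err)
	}
}
```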
	
	
	==> CRI-O <==
	Mar 18 14:07:00 old-k8s-version-909137 crio[647]: time="2024-03-18 14:07:00.496798475Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710770820496765245,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4f166d64-35e3-441f-b9eb-01e49986f9e1 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:07:00 old-k8s-version-909137 crio[647]: time="2024-03-18 14:07:00.497518792Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d28b27b8-f062-40b9-a67c-a93ef37ea7de name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:07:00 old-k8s-version-909137 crio[647]: time="2024-03-18 14:07:00.497569317Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d28b27b8-f062-40b9-a67c-a93ef37ea7de name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:07:00 old-k8s-version-909137 crio[647]: time="2024-03-18 14:07:00.497609312Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d28b27b8-f062-40b9-a67c-a93ef37ea7de name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:07:00 old-k8s-version-909137 crio[647]: time="2024-03-18 14:07:00.535651713Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=69ca6640-f0a7-436b-9fa8-a1d7436fbcb2 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:07:00 old-k8s-version-909137 crio[647]: time="2024-03-18 14:07:00.535722505Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=69ca6640-f0a7-436b-9fa8-a1d7436fbcb2 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:07:00 old-k8s-version-909137 crio[647]: time="2024-03-18 14:07:00.537113449Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f2b886b3-5b82-439d-ab9c-29e768a09d4b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:07:00 old-k8s-version-909137 crio[647]: time="2024-03-18 14:07:00.537492656Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710770820537472360,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f2b886b3-5b82-439d-ab9c-29e768a09d4b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:07:00 old-k8s-version-909137 crio[647]: time="2024-03-18 14:07:00.538279788Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d7d9cd15-54e1-4679-8341-8b86b54e85f7 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:07:00 old-k8s-version-909137 crio[647]: time="2024-03-18 14:07:00.538334245Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d7d9cd15-54e1-4679-8341-8b86b54e85f7 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:07:00 old-k8s-version-909137 crio[647]: time="2024-03-18 14:07:00.538370235Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=d7d9cd15-54e1-4679-8341-8b86b54e85f7 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:07:00 old-k8s-version-909137 crio[647]: time="2024-03-18 14:07:00.580360716Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=964281da-0f8f-47c8-a15f-05e05661a222 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:07:00 old-k8s-version-909137 crio[647]: time="2024-03-18 14:07:00.580453869Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=964281da-0f8f-47c8-a15f-05e05661a222 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:07:00 old-k8s-version-909137 crio[647]: time="2024-03-18 14:07:00.581722035Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=58addb5c-f595-4003-8b90-64ffb29f5cc2 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:07:00 old-k8s-version-909137 crio[647]: time="2024-03-18 14:07:00.582254333Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710770820582223328,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=58addb5c-f595-4003-8b90-64ffb29f5cc2 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:07:00 old-k8s-version-909137 crio[647]: time="2024-03-18 14:07:00.582791036Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=282d8f3c-45f7-4d18-9e41-ba1d3e2e5b58 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:07:00 old-k8s-version-909137 crio[647]: time="2024-03-18 14:07:00.582842234Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=282d8f3c-45f7-4d18-9e41-ba1d3e2e5b58 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:07:00 old-k8s-version-909137 crio[647]: time="2024-03-18 14:07:00.582941350Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=282d8f3c-45f7-4d18-9e41-ba1d3e2e5b58 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:07:00 old-k8s-version-909137 crio[647]: time="2024-03-18 14:07:00.620188502Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=bd9a5870-5f88-4e4d-a766-541aca58cd35 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:07:00 old-k8s-version-909137 crio[647]: time="2024-03-18 14:07:00.620285270Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=bd9a5870-5f88-4e4d-a766-541aca58cd35 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:07:00 old-k8s-version-909137 crio[647]: time="2024-03-18 14:07:00.621523385Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=80873dfa-2eed-4cac-814b-5380c189fb5e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:07:00 old-k8s-version-909137 crio[647]: time="2024-03-18 14:07:00.621984327Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710770820621951375,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=80873dfa-2eed-4cac-814b-5380c189fb5e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:07:00 old-k8s-version-909137 crio[647]: time="2024-03-18 14:07:00.622625642Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a9b47b92-5841-466d-b129-f672d73efd69 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:07:00 old-k8s-version-909137 crio[647]: time="2024-03-18 14:07:00.622707044Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a9b47b92-5841-466d-b129-f672d73efd69 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:07:00 old-k8s-version-909137 crio[647]: time="2024-03-18 14:07:00.622746192Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a9b47b92-5841-466d-b129-f672d73efd69 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Mar18 13:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052261] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043383] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.666130] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.485262] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +2.465886] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.163261] systemd-fstab-generator[568]: Ignoring "noauto" option for root device
	[  +0.162544] systemd-fstab-generator[580]: Ignoring "noauto" option for root device
	[  +0.204190] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.135186] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.316905] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +7.427040] systemd-fstab-generator[835]: Ignoring "noauto" option for root device
	[  +0.071901] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.095612] systemd-fstab-generator[962]: Ignoring "noauto" option for root device
	[Mar18 13:50] kauditd_printk_skb: 46 callbacks suppressed
	[Mar18 13:54] systemd-fstab-generator[4988]: Ignoring "noauto" option for root device
	[Mar18 13:55] systemd-fstab-generator[5270]: Ignoring "noauto" option for root device
	[  +0.062731] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 14:07:00 up 17 min,  0 users,  load average: 0.00, 0.04, 0.07
	Linux old-k8s-version-909137 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Mar 18 14:06:58 old-k8s-version-909137 kubelet[6454]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0002542a0, 0xc0001000c0)
	Mar 18 14:06:58 old-k8s-version-909137 kubelet[6454]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:218
	Mar 18 14:06:58 old-k8s-version-909137 kubelet[6454]: created by k8s.io/kubernetes/pkg/kubelet.NewMainKubelet
	Mar 18 14:06:58 old-k8s-version-909137 kubelet[6454]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet.go:439 +0x6849
	Mar 18 14:06:58 old-k8s-version-909137 kubelet[6454]: goroutine 153 [syscall]:
	Mar 18 14:06:58 old-k8s-version-909137 kubelet[6454]: syscall.Syscall6(0xe8, 0xc, 0xc0009d1b6c, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0, 0x0, 0x0)
	Mar 18 14:06:58 old-k8s-version-909137 kubelet[6454]:         /usr/local/go/src/syscall/asm_linux_amd64.s:41 +0x5
	Mar 18 14:06:58 old-k8s-version-909137 kubelet[6454]: k8s.io/kubernetes/vendor/golang.org/x/sys/unix.EpollWait(0xc, 0xc0009d1b6c, 0x7, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0)
	Mar 18 14:06:58 old-k8s-version-909137 kubelet[6454]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/sys/unix/zsyscall_linux_amd64.go:76 +0x72
	Mar 18 14:06:58 old-k8s-version-909137 kubelet[6454]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*fdPoller).wait(0xc000c23e60, 0x7e4d00, 0xc0009ffbf0, 0x6e60000029b)
	Mar 18 14:06:58 old-k8s-version-909137 kubelet[6454]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify_poller.go:86 +0x91
	Mar 18 14:06:58 old-k8s-version-909137 kubelet[6454]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*Watcher).readEvents(0xc000c6f540)
	Mar 18 14:06:58 old-k8s-version-909137 kubelet[6454]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:192 +0x206
	Mar 18 14:06:58 old-k8s-version-909137 kubelet[6454]: created by k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.NewWatcher
	Mar 18 14:06:58 old-k8s-version-909137 kubelet[6454]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:59 +0x1a8
	Mar 18 14:06:58 old-k8s-version-909137 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Mar 18 14:06:58 old-k8s-version-909137 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Mar 18 14:06:59 old-k8s-version-909137 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Mar 18 14:06:59 old-k8s-version-909137 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Mar 18 14:06:59 old-k8s-version-909137 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Mar 18 14:06:59 old-k8s-version-909137 kubelet[6481]: I0318 14:06:59.656194    6481 server.go:416] Version: v1.20.0
	Mar 18 14:06:59 old-k8s-version-909137 kubelet[6481]: I0318 14:06:59.656564    6481 server.go:837] Client rotation is on, will bootstrap in background
	Mar 18 14:06:59 old-k8s-version-909137 kubelet[6481]: I0318 14:06:59.660036    6481 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Mar 18 14:06:59 old-k8s-version-909137 kubelet[6481]: I0318 14:06:59.662780    6481 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Mar 18 14:06:59 old-k8s-version-909137 kubelet[6481]: W0318 14:06:59.662780    6481 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-909137 -n old-k8s-version-909137
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-909137 -n old-k8s-version-909137: exit status 2 (252.96694ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-909137" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.46s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (319.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-537236 -n no-preload-537236
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-03-18 14:09:22.458406985 +0000 UTC m=+6839.845321244
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-537236 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-537236 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.345µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-537236 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-537236 -n no-preload-537236
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-537236 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-537236 logs -n 25: (1.973185499s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-537236                                   | no-preload-537236            | jenkins | v1.32.0 | 18 Mar 24 13:39 UTC | 18 Mar 24 13:41 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p cert-expiration-537883                              | cert-expiration-537883       | jenkins | v1.32.0 | 18 Mar 24 13:40 UTC | 18 Mar 24 13:41 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p pause-760389                                        | pause-760389                 | jenkins | v1.32.0 | 18 Mar 24 13:40 UTC | 18 Mar 24 13:40 UTC |
	| start   | -p embed-certs-173036                                  | embed-certs-173036           | jenkins | v1.32.0 | 18 Mar 24 13:40 UTC | 18 Mar 24 13:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-537883                              | cert-expiration-537883       | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	| delete  | -p                                                     | disable-driver-mounts-173866 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	|         | disable-driver-mounts-173866                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-569210 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:42 UTC |
	|         | default-k8s-diff-port-569210                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-173036            | embed-certs-173036           | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-173036                                  | embed-certs-173036           | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-537236             | no-preload-537236            | jenkins | v1.32.0 | 18 Mar 24 13:42 UTC | 18 Mar 24 13:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-537236                                   | no-preload-537236            | jenkins | v1.32.0 | 18 Mar 24 13:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-569210  | default-k8s-diff-port-569210 | jenkins | v1.32.0 | 18 Mar 24 13:43 UTC | 18 Mar 24 13:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-569210 | jenkins | v1.32.0 | 18 Mar 24 13:43 UTC |                     |
	|         | default-k8s-diff-port-569210                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-909137        | old-k8s-version-909137       | jenkins | v1.32.0 | 18 Mar 24 13:43 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-173036                 | embed-certs-173036           | jenkins | v1.32.0 | 18 Mar 24 13:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-173036                                  | embed-certs-173036           | jenkins | v1.32.0 | 18 Mar 24 13:44 UTC | 18 Mar 24 13:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-537236                  | no-preload-537236            | jenkins | v1.32.0 | 18 Mar 24 13:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-537236                                   | no-preload-537236            | jenkins | v1.32.0 | 18 Mar 24 13:44 UTC | 18 Mar 24 13:55 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-909137                              | old-k8s-version-909137       | jenkins | v1.32.0 | 18 Mar 24 13:45 UTC | 18 Mar 24 13:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-909137             | old-k8s-version-909137       | jenkins | v1.32.0 | 18 Mar 24 13:45 UTC | 18 Mar 24 13:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-909137                              | old-k8s-version-909137       | jenkins | v1.32.0 | 18 Mar 24 13:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-569210       | default-k8s-diff-port-569210 | jenkins | v1.32.0 | 18 Mar 24 13:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-569210 | jenkins | v1.32.0 | 18 Mar 24 13:45 UTC | 18 Mar 24 13:55 UTC |
	|         | default-k8s-diff-port-569210                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-909137                              | old-k8s-version-909137       | jenkins | v1.32.0 | 18 Mar 24 14:08 UTC | 18 Mar 24 14:08 UTC |
	| start   | -p newest-cni-572909 --memory=2200 --alsologtostderr   | newest-cni-572909            | jenkins | v1.32.0 | 18 Mar 24 14:08 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 14:08:48
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 14:08:48.310358 1162655 out.go:291] Setting OutFile to fd 1 ...
	I0318 14:08:48.310488 1162655 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 14:08:48.310497 1162655 out.go:304] Setting ErrFile to fd 2...
	I0318 14:08:48.310501 1162655 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 14:08:48.310727 1162655 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
	I0318 14:08:48.311333 1162655 out.go:298] Setting JSON to false
	I0318 14:08:48.312401 1162655 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":21075,"bootTime":1710749853,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 14:08:48.312469 1162655 start.go:139] virtualization: kvm guest
	I0318 14:08:48.315959 1162655 out.go:177] * [newest-cni-572909] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 14:08:48.317726 1162655 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 14:08:48.319577 1162655 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 14:08:48.317759 1162655 notify.go:220] Checking for updates...
	I0318 14:08:48.322829 1162655 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 14:08:48.324313 1162655 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18429-1106816/.minikube
	I0318 14:08:48.325672 1162655 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 14:08:48.327175 1162655 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 14:08:48.328922 1162655 config.go:182] Loaded profile config "default-k8s-diff-port-569210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 14:08:48.329032 1162655 config.go:182] Loaded profile config "embed-certs-173036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 14:08:48.329135 1162655 config.go:182] Loaded profile config "no-preload-537236": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 14:08:48.329312 1162655 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 14:08:48.367594 1162655 out.go:177] * Using the kvm2 driver based on user configuration
	I0318 14:08:48.369071 1162655 start.go:297] selected driver: kvm2
	I0318 14:08:48.369094 1162655 start.go:901] validating driver "kvm2" against <nil>
	I0318 14:08:48.369106 1162655 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 14:08:48.369814 1162655 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 14:08:48.369904 1162655 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18429-1106816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 14:08:48.385295 1162655 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 14:08:48.385338 1162655 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0318 14:08:48.385363 1162655 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0318 14:08:48.385668 1162655 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0318 14:08:48.385743 1162655 cni.go:84] Creating CNI manager for ""
	I0318 14:08:48.385758 1162655 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:08:48.385769 1162655 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 14:08:48.385815 1162655 start.go:340] cluster config:
	{Name:newest-cni-572909 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-572909 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:08:48.385906 1162655 iso.go:125] acquiring lock: {Name:mke5f9989ad60de6f54f25c411af7da9f3932a4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 14:08:48.388306 1162655 out.go:177] * Starting "newest-cni-572909" primary control-plane node in "newest-cni-572909" cluster
	I0318 14:08:48.389649 1162655 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0318 14:08:48.389685 1162655 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0318 14:08:48.389697 1162655 cache.go:56] Caching tarball of preloaded images
	I0318 14:08:48.389781 1162655 preload.go:173] Found /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 14:08:48.389791 1162655 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on crio
	I0318 14:08:48.389874 1162655 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/newest-cni-572909/config.json ...
	I0318 14:08:48.389890 1162655 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/newest-cni-572909/config.json: {Name:mk8a9377149cd37469cbe7b682000319801764e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:08:48.390042 1162655 start.go:360] acquireMachinesLock for newest-cni-572909: {Name:mk0b1a2e71faf079d0c16c4e1393bdff17be3dfd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 14:08:48.390071 1162655 start.go:364] duration metric: took 15.159µs to acquireMachinesLock for "newest-cni-572909"
	I0318 14:08:48.390088 1162655 start.go:93] Provisioning new machine with config: &{Name:newest-cni-572909 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-572909 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 14:08:48.390146 1162655 start.go:125] createHost starting for "" (driver="kvm2")
	I0318 14:08:48.391678 1162655 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0318 14:08:48.391860 1162655 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:08:48.391899 1162655 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:08:48.405627 1162655 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35677
	I0318 14:08:48.406091 1162655 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:08:48.406711 1162655 main.go:141] libmachine: Using API Version  1
	I0318 14:08:48.406730 1162655 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:08:48.407079 1162655 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:08:48.407295 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetMachineName
	I0318 14:08:48.407492 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .DriverName
	I0318 14:08:48.407668 1162655 start.go:159] libmachine.API.Create for "newest-cni-572909" (driver="kvm2")
	I0318 14:08:48.407698 1162655 client.go:168] LocalClient.Create starting
	I0318 14:08:48.407763 1162655 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem
	I0318 14:08:48.407801 1162655 main.go:141] libmachine: Decoding PEM data...
	I0318 14:08:48.407829 1162655 main.go:141] libmachine: Parsing certificate...
	I0318 14:08:48.407898 1162655 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem
	I0318 14:08:48.407930 1162655 main.go:141] libmachine: Decoding PEM data...
	I0318 14:08:48.407949 1162655 main.go:141] libmachine: Parsing certificate...
	I0318 14:08:48.407974 1162655 main.go:141] libmachine: Running pre-create checks...
	I0318 14:08:48.407998 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .PreCreateCheck
	I0318 14:08:48.408449 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetConfigRaw
	I0318 14:08:48.408913 1162655 main.go:141] libmachine: Creating machine...
	I0318 14:08:48.408931 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .Create
	I0318 14:08:48.409106 1162655 main.go:141] libmachine: (newest-cni-572909) Creating KVM machine...
	I0318 14:08:48.410400 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | found existing default KVM network
	I0318 14:08:48.411769 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | I0318 14:08:48.411512 1162678 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:4f:83:42} reservation:<nil>}
	I0318 14:08:48.412613 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | I0318 14:08:48.412531 1162678 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:c4:37:b3} reservation:<nil>}
	I0318 14:08:48.413496 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | I0318 14:08:48.413398 1162678 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:c0:15:ba} reservation:<nil>}
	I0318 14:08:48.414607 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | I0318 14:08:48.414548 1162678 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002c7730}
	I0318 14:08:48.414702 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | created network xml: 
	I0318 14:08:48.414727 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | <network>
	I0318 14:08:48.414738 1162655 main.go:141] libmachine: (newest-cni-572909) DBG |   <name>mk-newest-cni-572909</name>
	I0318 14:08:48.414751 1162655 main.go:141] libmachine: (newest-cni-572909) DBG |   <dns enable='no'/>
	I0318 14:08:48.414775 1162655 main.go:141] libmachine: (newest-cni-572909) DBG |   
	I0318 14:08:48.414794 1162655 main.go:141] libmachine: (newest-cni-572909) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0318 14:08:48.414825 1162655 main.go:141] libmachine: (newest-cni-572909) DBG |     <dhcp>
	I0318 14:08:48.414838 1162655 main.go:141] libmachine: (newest-cni-572909) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0318 14:08:48.414847 1162655 main.go:141] libmachine: (newest-cni-572909) DBG |     </dhcp>
	I0318 14:08:48.414864 1162655 main.go:141] libmachine: (newest-cni-572909) DBG |   </ip>
	I0318 14:08:48.414873 1162655 main.go:141] libmachine: (newest-cni-572909) DBG |   
	I0318 14:08:48.414880 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | </network>
	I0318 14:08:48.414891 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | 
	I0318 14:08:48.419974 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | trying to create private KVM network mk-newest-cni-572909 192.168.72.0/24...
	I0318 14:08:48.491441 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | private KVM network mk-newest-cni-572909 192.168.72.0/24 created
	I0318 14:08:48.491493 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | I0318 14:08:48.491397 1162678 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18429-1106816/.minikube
	I0318 14:08:48.491507 1162655 main.go:141] libmachine: (newest-cni-572909) Setting up store path in /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/newest-cni-572909 ...
	I0318 14:08:48.491529 1162655 main.go:141] libmachine: (newest-cni-572909) Building disk image from file:///home/jenkins/minikube-integration/18429-1106816/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso
	I0318 14:08:48.491718 1162655 main.go:141] libmachine: (newest-cni-572909) Downloading /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18429-1106816/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0318 14:08:48.748379 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | I0318 14:08:48.748165 1162678 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/newest-cni-572909/id_rsa...
	I0318 14:08:48.844170 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | I0318 14:08:48.843988 1162678 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/newest-cni-572909/newest-cni-572909.rawdisk...
	I0318 14:08:48.844215 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | Writing magic tar header
	I0318 14:08:48.844232 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | Writing SSH key tar header
	I0318 14:08:48.844255 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | I0318 14:08:48.844209 1162678 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/newest-cni-572909 ...
	I0318 14:08:48.844354 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/newest-cni-572909
	I0318 14:08:48.844449 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines
	I0318 14:08:48.844498 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18429-1106816/.minikube
	I0318 14:08:48.844523 1162655 main.go:141] libmachine: (newest-cni-572909) Setting executable bit set on /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/newest-cni-572909 (perms=drwx------)
	I0318 14:08:48.844542 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18429-1106816
	I0318 14:08:48.844558 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0318 14:08:48.844574 1162655 main.go:141] libmachine: (newest-cni-572909) Setting executable bit set on /home/jenkins/minikube-integration/18429-1106816/.minikube/machines (perms=drwxr-xr-x)
	I0318 14:08:48.844589 1162655 main.go:141] libmachine: (newest-cni-572909) Setting executable bit set on /home/jenkins/minikube-integration/18429-1106816/.minikube (perms=drwxr-xr-x)
	I0318 14:08:48.844604 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | Checking permissions on dir: /home/jenkins
	I0318 14:08:48.844623 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | Checking permissions on dir: /home
	I0318 14:08:48.844635 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | Skipping /home - not owner
	I0318 14:08:48.844649 1162655 main.go:141] libmachine: (newest-cni-572909) Setting executable bit set on /home/jenkins/minikube-integration/18429-1106816 (perms=drwxrwxr-x)
	I0318 14:08:48.844666 1162655 main.go:141] libmachine: (newest-cni-572909) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0318 14:08:48.844680 1162655 main.go:141] libmachine: (newest-cni-572909) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0318 14:08:48.844697 1162655 main.go:141] libmachine: (newest-cni-572909) Creating domain...
	I0318 14:08:48.845764 1162655 main.go:141] libmachine: (newest-cni-572909) define libvirt domain using xml: 
	I0318 14:08:48.845791 1162655 main.go:141] libmachine: (newest-cni-572909) <domain type='kvm'>
	I0318 14:08:48.845803 1162655 main.go:141] libmachine: (newest-cni-572909)   <name>newest-cni-572909</name>
	I0318 14:08:48.845811 1162655 main.go:141] libmachine: (newest-cni-572909)   <memory unit='MiB'>2200</memory>
	I0318 14:08:48.845841 1162655 main.go:141] libmachine: (newest-cni-572909)   <vcpu>2</vcpu>
	I0318 14:08:48.845865 1162655 main.go:141] libmachine: (newest-cni-572909)   <features>
	I0318 14:08:48.845878 1162655 main.go:141] libmachine: (newest-cni-572909)     <acpi/>
	I0318 14:08:48.845890 1162655 main.go:141] libmachine: (newest-cni-572909)     <apic/>
	I0318 14:08:48.845897 1162655 main.go:141] libmachine: (newest-cni-572909)     <pae/>
	I0318 14:08:48.845904 1162655 main.go:141] libmachine: (newest-cni-572909)     
	I0318 14:08:48.845910 1162655 main.go:141] libmachine: (newest-cni-572909)   </features>
	I0318 14:08:48.845917 1162655 main.go:141] libmachine: (newest-cni-572909)   <cpu mode='host-passthrough'>
	I0318 14:08:48.845923 1162655 main.go:141] libmachine: (newest-cni-572909)   
	I0318 14:08:48.845933 1162655 main.go:141] libmachine: (newest-cni-572909)   </cpu>
	I0318 14:08:48.845938 1162655 main.go:141] libmachine: (newest-cni-572909)   <os>
	I0318 14:08:48.845957 1162655 main.go:141] libmachine: (newest-cni-572909)     <type>hvm</type>
	I0318 14:08:48.845979 1162655 main.go:141] libmachine: (newest-cni-572909)     <boot dev='cdrom'/>
	I0318 14:08:48.845989 1162655 main.go:141] libmachine: (newest-cni-572909)     <boot dev='hd'/>
	I0318 14:08:48.845996 1162655 main.go:141] libmachine: (newest-cni-572909)     <bootmenu enable='no'/>
	I0318 14:08:48.846006 1162655 main.go:141] libmachine: (newest-cni-572909)   </os>
	I0318 14:08:48.846011 1162655 main.go:141] libmachine: (newest-cni-572909)   <devices>
	I0318 14:08:48.846019 1162655 main.go:141] libmachine: (newest-cni-572909)     <disk type='file' device='cdrom'>
	I0318 14:08:48.846027 1162655 main.go:141] libmachine: (newest-cni-572909)       <source file='/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/newest-cni-572909/boot2docker.iso'/>
	I0318 14:08:48.846048 1162655 main.go:141] libmachine: (newest-cni-572909)       <target dev='hdc' bus='scsi'/>
	I0318 14:08:48.846060 1162655 main.go:141] libmachine: (newest-cni-572909)       <readonly/>
	I0318 14:08:48.846080 1162655 main.go:141] libmachine: (newest-cni-572909)     </disk>
	I0318 14:08:48.846097 1162655 main.go:141] libmachine: (newest-cni-572909)     <disk type='file' device='disk'>
	I0318 14:08:48.846112 1162655 main.go:141] libmachine: (newest-cni-572909)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0318 14:08:48.846127 1162655 main.go:141] libmachine: (newest-cni-572909)       <source file='/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/newest-cni-572909/newest-cni-572909.rawdisk'/>
	I0318 14:08:48.846140 1162655 main.go:141] libmachine: (newest-cni-572909)       <target dev='hda' bus='virtio'/>
	I0318 14:08:48.846155 1162655 main.go:141] libmachine: (newest-cni-572909)     </disk>
	I0318 14:08:48.846168 1162655 main.go:141] libmachine: (newest-cni-572909)     <interface type='network'>
	I0318 14:08:48.846177 1162655 main.go:141] libmachine: (newest-cni-572909)       <source network='mk-newest-cni-572909'/>
	I0318 14:08:48.846189 1162655 main.go:141] libmachine: (newest-cni-572909)       <model type='virtio'/>
	I0318 14:08:48.846199 1162655 main.go:141] libmachine: (newest-cni-572909)     </interface>
	I0318 14:08:48.846211 1162655 main.go:141] libmachine: (newest-cni-572909)     <interface type='network'>
	I0318 14:08:48.846227 1162655 main.go:141] libmachine: (newest-cni-572909)       <source network='default'/>
	I0318 14:08:48.846239 1162655 main.go:141] libmachine: (newest-cni-572909)       <model type='virtio'/>
	I0318 14:08:48.846249 1162655 main.go:141] libmachine: (newest-cni-572909)     </interface>
	I0318 14:08:48.846260 1162655 main.go:141] libmachine: (newest-cni-572909)     <serial type='pty'>
	I0318 14:08:48.846275 1162655 main.go:141] libmachine: (newest-cni-572909)       <target port='0'/>
	I0318 14:08:48.846287 1162655 main.go:141] libmachine: (newest-cni-572909)     </serial>
	I0318 14:08:48.846297 1162655 main.go:141] libmachine: (newest-cni-572909)     <console type='pty'>
	I0318 14:08:48.846309 1162655 main.go:141] libmachine: (newest-cni-572909)       <target type='serial' port='0'/>
	I0318 14:08:48.846321 1162655 main.go:141] libmachine: (newest-cni-572909)     </console>
	I0318 14:08:48.846335 1162655 main.go:141] libmachine: (newest-cni-572909)     <rng model='virtio'>
	I0318 14:08:48.846382 1162655 main.go:141] libmachine: (newest-cni-572909)       <backend model='random'>/dev/random</backend>
	I0318 14:08:48.846408 1162655 main.go:141] libmachine: (newest-cni-572909)     </rng>
	I0318 14:08:48.846422 1162655 main.go:141] libmachine: (newest-cni-572909)     
	I0318 14:08:48.846445 1162655 main.go:141] libmachine: (newest-cni-572909)     
	I0318 14:08:48.846458 1162655 main.go:141] libmachine: (newest-cni-572909)   </devices>
	I0318 14:08:48.846466 1162655 main.go:141] libmachine: (newest-cni-572909) </domain>
	I0318 14:08:48.846494 1162655 main.go:141] libmachine: (newest-cni-572909) 
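The domain XML dumped above is what the kvm2 driver hands to libvirt before booting the VM. Below is a minimal Go sketch of the "Ensuring networks are active..." / "Creating domain..." sequence that follows, assuming the libvirt.org/go/libvirt bindings; the helper name createDomain and the trimmed error handling are illustrative, not minikube's actual code.

package kvmsketch

import "libvirt.org/go/libvirt"

// createDomain defines a domain from the generated XML, makes sure the
// networks it attaches to are running, and then boots it. Sketch only:
// the real driver also handles leases, static IPs, and retries.
func createDomain(domainXML string, networks []string) error {
	conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config dump
	if err != nil {
		return err
	}
	defer conn.Close()

	// "Ensuring networks are active..."
	for _, name := range networks {
		net, err := conn.LookupNetworkByName(name)
		if err != nil {
			return err
		}
		if active, err := net.IsActive(); err == nil && !active {
			if err := net.Create(); err != nil {
				return err
			}
		}
	}

	// Define the domain from the XML above, then start it ("Creating domain...").
	dom, err := conn.DomainDefineXML(domainXML)
	if err != nil {
		return err
	}
	return dom.Create()
}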
	I0318 14:08:48.850915 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:5b:e1:73 in network default
	I0318 14:08:48.851555 1162655 main.go:141] libmachine: (newest-cni-572909) Ensuring networks are active...
	I0318 14:08:48.851588 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:08:48.852265 1162655 main.go:141] libmachine: (newest-cni-572909) Ensuring network default is active
	I0318 14:08:48.852610 1162655 main.go:141] libmachine: (newest-cni-572909) Ensuring network mk-newest-cni-572909 is active
	I0318 14:08:48.853188 1162655 main.go:141] libmachine: (newest-cni-572909) Getting domain xml...
	I0318 14:08:48.853868 1162655 main.go:141] libmachine: (newest-cni-572909) Creating domain...
	I0318 14:08:50.138624 1162655 main.go:141] libmachine: (newest-cni-572909) Waiting to get IP...
	I0318 14:08:50.139551 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:08:50.140037 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | unable to find current IP address of domain newest-cni-572909 in network mk-newest-cni-572909
	I0318 14:08:50.140129 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | I0318 14:08:50.140023 1162678 retry.go:31] will retry after 243.451586ms: waiting for machine to come up
	I0318 14:08:50.385284 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:08:50.385792 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | unable to find current IP address of domain newest-cni-572909 in network mk-newest-cni-572909
	I0318 14:08:50.385823 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | I0318 14:08:50.385743 1162678 retry.go:31] will retry after 302.114414ms: waiting for machine to come up
	I0318 14:08:50.689178 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:08:50.689686 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | unable to find current IP address of domain newest-cni-572909 in network mk-newest-cni-572909
	I0318 14:08:50.689723 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | I0318 14:08:50.689638 1162678 retry.go:31] will retry after 368.291646ms: waiting for machine to come up
	I0318 14:08:51.059213 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:08:51.059686 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | unable to find current IP address of domain newest-cni-572909 in network mk-newest-cni-572909
	I0318 14:08:51.059727 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | I0318 14:08:51.059625 1162678 retry.go:31] will retry after 523.146161ms: waiting for machine to come up
	I0318 14:08:51.584393 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:08:51.584897 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | unable to find current IP address of domain newest-cni-572909 in network mk-newest-cni-572909
	I0318 14:08:51.584926 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | I0318 14:08:51.584846 1162678 retry.go:31] will retry after 624.057668ms: waiting for machine to come up
	I0318 14:08:52.210794 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:08:52.211298 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | unable to find current IP address of domain newest-cni-572909 in network mk-newest-cni-572909
	I0318 14:08:52.211372 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | I0318 14:08:52.211280 1162678 retry.go:31] will retry after 815.679278ms: waiting for machine to come up
	I0318 14:08:53.028536 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:08:53.029082 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | unable to find current IP address of domain newest-cni-572909 in network mk-newest-cni-572909
	I0318 14:08:53.029112 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | I0318 14:08:53.029009 1162678 retry.go:31] will retry after 919.713869ms: waiting for machine to come up
	I0318 14:08:53.950785 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:08:53.951267 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | unable to find current IP address of domain newest-cni-572909 in network mk-newest-cni-572909
	I0318 14:08:53.951289 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | I0318 14:08:53.951227 1162678 retry.go:31] will retry after 1.431310974s: waiting for machine to come up
	I0318 14:08:55.384355 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:08:55.384823 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | unable to find current IP address of domain newest-cni-572909 in network mk-newest-cni-572909
	I0318 14:08:55.384855 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | I0318 14:08:55.384769 1162678 retry.go:31] will retry after 1.530577252s: waiting for machine to come up
	I0318 14:08:56.917490 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:08:56.918087 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | unable to find current IP address of domain newest-cni-572909 in network mk-newest-cni-572909
	I0318 14:08:56.918123 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | I0318 14:08:56.918031 1162678 retry.go:31] will retry after 2.048791423s: waiting for machine to come up
	I0318 14:08:58.968208 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:08:58.968816 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | unable to find current IP address of domain newest-cni-572909 in network mk-newest-cni-572909
	I0318 14:08:58.968850 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | I0318 14:08:58.968760 1162678 retry.go:31] will retry after 1.798769156s: waiting for machine to come up
	I0318 14:09:00.769445 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:09:00.769945 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | unable to find current IP address of domain newest-cni-572909 in network mk-newest-cni-572909
	I0318 14:09:00.769976 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | I0318 14:09:00.769897 1162678 retry.go:31] will retry after 2.378462701s: waiting for machine to come up
	I0318 14:09:03.149776 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:09:03.150327 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | unable to find current IP address of domain newest-cni-572909 in network mk-newest-cni-572909
	I0318 14:09:03.150373 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | I0318 14:09:03.150282 1162678 retry.go:31] will retry after 3.861176023s: waiting for machine to come up
	I0318 14:09:07.015055 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:09:07.015536 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | unable to find current IP address of domain newest-cni-572909 in network mk-newest-cni-572909
	I0318 14:09:07.015558 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | I0318 14:09:07.015496 1162678 retry.go:31] will retry after 4.510839092s: waiting for machine to come up
	I0318 14:09:11.529362 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:09:11.529974 1162655 main.go:141] libmachine: (newest-cni-572909) Found IP for machine: 192.168.72.13
	I0318 14:09:11.530002 1162655 main.go:141] libmachine: (newest-cni-572909) Reserving static IP address...
	I0318 14:09:11.530016 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has current primary IP address 192.168.72.13 and MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:09:11.530423 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | unable to find host DHCP lease matching {name: "newest-cni-572909", mac: "52:54:00:a2:ca:ad", ip: "192.168.72.13"} in network mk-newest-cni-572909
	I0318 14:09:11.608211 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | Getting to WaitForSSH function...
	I0318 14:09:11.608262 1162655 main.go:141] libmachine: (newest-cni-572909) Reserved static IP address: 192.168.72.13
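The repeated "will retry after ...ms: waiting for machine to come up" lines above come from a retry helper that backs off between attempts. A generic sketch of that pattern in Go follows, assuming simple doubling with light jitter; the exact policy in retry.go may differ.

package retrysketch

import (
	"math/rand"
	"time"
)

// Retry calls fn until it succeeds or attempts run out, sleeping an
// increasing, lightly jittered duration between attempts.
func Retry(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Add up to ~50% jitter so concurrent waiters do not sync up.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)/2+1))
		time.Sleep(sleep)
		delay *= 2
	}
	return err
}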
	I0318 14:09:11.608277 1162655 main.go:141] libmachine: (newest-cni-572909) Waiting for SSH to be available...
	I0318 14:09:11.611207 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:09:11.611631 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ca:ad", ip: ""} in network mk-newest-cni-572909: {Iface:virbr4 ExpiryTime:2024-03-18 15:09:04 +0000 UTC Type:0 Mac:52:54:00:a2:ca:ad Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a2:ca:ad}
	I0318 14:09:11.611656 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined IP address 192.168.72.13 and MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:09:11.611815 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | Using SSH client type: external
	I0318 14:09:11.611836 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | Using SSH private key: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/newest-cni-572909/id_rsa (-rw-------)
	I0318 14:09:11.611864 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.13 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/newest-cni-572909/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 14:09:11.611887 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | About to run SSH command:
	I0318 14:09:11.611898 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | exit 0
	I0318 14:09:11.745302 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | SSH cmd err, output: <nil>: 
	I0318 14:09:11.745539 1162655 main.go:141] libmachine: (newest-cni-572909) KVM machine creation complete!
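SSH reachability is confirmed above by running `exit 0` through the external ssh client with host-key checking disabled. A minimal sketch of that probe using os/exec; the key path and address are placeholders and the option list is trimmed relative to the one logged.

package sshprobe

import "os/exec"

// waitForSSH returns nil once `exit 0` succeeds over SSH, i.e. sshd is up
// and the key is accepted. Callers would wrap this in a retry loop.
func waitForSSH(keyPath, addr string) error {
	args := []string{
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@" + addr,
		"exit 0",
	}
	return exec.Command("ssh", args...).Run()
}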
	I0318 14:09:11.745907 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetConfigRaw
	I0318 14:09:11.746608 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .DriverName
	I0318 14:09:11.746844 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .DriverName
	I0318 14:09:11.747081 1162655 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0318 14:09:11.747102 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetState
	I0318 14:09:11.748357 1162655 main.go:141] libmachine: Detecting operating system of created instance...
	I0318 14:09:11.748393 1162655 main.go:141] libmachine: Waiting for SSH to be available...
	I0318 14:09:11.748401 1162655 main.go:141] libmachine: Getting to WaitForSSH function...
	I0318 14:09:11.748408 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHHostname
	I0318 14:09:11.750742 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:09:11.751125 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ca:ad", ip: ""} in network mk-newest-cni-572909: {Iface:virbr4 ExpiryTime:2024-03-18 15:09:04 +0000 UTC Type:0 Mac:52:54:00:a2:ca:ad Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:newest-cni-572909 Clientid:01:52:54:00:a2:ca:ad}
	I0318 14:09:11.751158 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined IP address 192.168.72.13 and MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:09:11.751280 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHPort
	I0318 14:09:11.751498 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHKeyPath
	I0318 14:09:11.751678 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHKeyPath
	I0318 14:09:11.751840 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHUsername
	I0318 14:09:11.752022 1162655 main.go:141] libmachine: Using SSH client type: native
	I0318 14:09:11.752303 1162655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.13 22 <nil> <nil>}
	I0318 14:09:11.752317 1162655 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0318 14:09:11.868216 1162655 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 14:09:11.868246 1162655 main.go:141] libmachine: Detecting the provisioner...
	I0318 14:09:11.868258 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHHostname
	I0318 14:09:11.870889 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:09:11.871281 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ca:ad", ip: ""} in network mk-newest-cni-572909: {Iface:virbr4 ExpiryTime:2024-03-18 15:09:04 +0000 UTC Type:0 Mac:52:54:00:a2:ca:ad Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:newest-cni-572909 Clientid:01:52:54:00:a2:ca:ad}
	I0318 14:09:11.871311 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined IP address 192.168.72.13 and MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:09:11.871491 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHPort
	I0318 14:09:11.871714 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHKeyPath
	I0318 14:09:11.871939 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHKeyPath
	I0318 14:09:11.872065 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHUsername
	I0318 14:09:11.872218 1162655 main.go:141] libmachine: Using SSH client type: native
	I0318 14:09:11.872459 1162655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.13 22 <nil> <nil>}
	I0318 14:09:11.872476 1162655 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0318 14:09:11.989689 1162655 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0318 14:09:11.989833 1162655 main.go:141] libmachine: found compatible host: buildroot
	I0318 14:09:11.989847 1162655 main.go:141] libmachine: Provisioning with buildroot...
	I0318 14:09:11.989859 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetMachineName
	I0318 14:09:11.990133 1162655 buildroot.go:166] provisioning hostname "newest-cni-572909"
	I0318 14:09:11.990167 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetMachineName
	I0318 14:09:11.990398 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHHostname
	I0318 14:09:11.993220 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:09:11.993694 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ca:ad", ip: ""} in network mk-newest-cni-572909: {Iface:virbr4 ExpiryTime:2024-03-18 15:09:04 +0000 UTC Type:0 Mac:52:54:00:a2:ca:ad Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:newest-cni-572909 Clientid:01:52:54:00:a2:ca:ad}
	I0318 14:09:11.993715 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined IP address 192.168.72.13 and MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:09:11.993944 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHPort
	I0318 14:09:11.994133 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHKeyPath
	I0318 14:09:11.994329 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHKeyPath
	I0318 14:09:11.994491 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHUsername
	I0318 14:09:11.994647 1162655 main.go:141] libmachine: Using SSH client type: native
	I0318 14:09:11.994916 1162655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.13 22 <nil> <nil>}
	I0318 14:09:11.994940 1162655 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-572909 && echo "newest-cni-572909" | sudo tee /etc/hostname
	I0318 14:09:12.133836 1162655 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-572909
	
	I0318 14:09:12.133927 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHHostname
	I0318 14:09:12.136896 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:09:12.137204 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ca:ad", ip: ""} in network mk-newest-cni-572909: {Iface:virbr4 ExpiryTime:2024-03-18 15:09:04 +0000 UTC Type:0 Mac:52:54:00:a2:ca:ad Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:newest-cni-572909 Clientid:01:52:54:00:a2:ca:ad}
	I0318 14:09:12.137252 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined IP address 192.168.72.13 and MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:09:12.137352 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHPort
	I0318 14:09:12.137543 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHKeyPath
	I0318 14:09:12.137727 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHKeyPath
	I0318 14:09:12.137909 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHUsername
	I0318 14:09:12.138108 1162655 main.go:141] libmachine: Using SSH client type: native
	I0318 14:09:12.138297 1162655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.13 22 <nil> <nil>}
	I0318 14:09:12.138315 1162655 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-572909' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-572909/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-572909' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 14:09:12.268296 1162655 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 14:09:12.268353 1162655 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18429-1106816/.minikube CaCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18429-1106816/.minikube}
	I0318 14:09:12.268382 1162655 buildroot.go:174] setting up certificates
	I0318 14:09:12.268398 1162655 provision.go:84] configureAuth start
	I0318 14:09:12.268414 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetMachineName
	I0318 14:09:12.268819 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetIP
	I0318 14:09:12.272146 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:09:12.272531 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ca:ad", ip: ""} in network mk-newest-cni-572909: {Iface:virbr4 ExpiryTime:2024-03-18 15:09:04 +0000 UTC Type:0 Mac:52:54:00:a2:ca:ad Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:newest-cni-572909 Clientid:01:52:54:00:a2:ca:ad}
	I0318 14:09:12.272565 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined IP address 192.168.72.13 and MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:09:12.272760 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHHostname
	I0318 14:09:12.275087 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:09:12.275503 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ca:ad", ip: ""} in network mk-newest-cni-572909: {Iface:virbr4 ExpiryTime:2024-03-18 15:09:04 +0000 UTC Type:0 Mac:52:54:00:a2:ca:ad Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:newest-cni-572909 Clientid:01:52:54:00:a2:ca:ad}
	I0318 14:09:12.275537 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined IP address 192.168.72.13 and MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:09:12.275706 1162655 provision.go:143] copyHostCerts
	I0318 14:09:12.275783 1162655 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem, removing ...
	I0318 14:09:12.275798 1162655 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 14:09:12.275880 1162655 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem (1078 bytes)
	I0318 14:09:12.276088 1162655 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem, removing ...
	I0318 14:09:12.276106 1162655 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 14:09:12.276151 1162655 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem (1123 bytes)
	I0318 14:09:12.276241 1162655 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem, removing ...
	I0318 14:09:12.276253 1162655 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 14:09:12.276286 1162655 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem (1679 bytes)
	I0318 14:09:12.276393 1162655 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem org=jenkins.newest-cni-572909 san=[127.0.0.1 192.168.72.13 localhost minikube newest-cni-572909]
	I0318 14:09:12.662660 1162655 provision.go:177] copyRemoteCerts
	I0318 14:09:12.662748 1162655 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 14:09:12.662780 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHHostname
	I0318 14:09:12.665684 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:09:12.666122 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ca:ad", ip: ""} in network mk-newest-cni-572909: {Iface:virbr4 ExpiryTime:2024-03-18 15:09:04 +0000 UTC Type:0 Mac:52:54:00:a2:ca:ad Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:newest-cni-572909 Clientid:01:52:54:00:a2:ca:ad}
	I0318 14:09:12.666153 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined IP address 192.168.72.13 and MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:09:12.666374 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHPort
	I0318 14:09:12.666606 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHKeyPath
	I0318 14:09:12.666784 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHUsername
	I0318 14:09:12.666978 1162655 sshutil.go:53] new ssh client: &{IP:192.168.72.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/newest-cni-572909/id_rsa Username:docker}
	I0318 14:09:12.760287 1162655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 14:09:12.787516 1162655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0318 14:09:12.814828 1162655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 14:09:12.842802 1162655 provision.go:87] duration metric: took 574.390137ms to configureAuth
	I0318 14:09:12.842834 1162655 buildroot.go:189] setting minikube options for container-runtime
	I0318 14:09:12.843040 1162655 config.go:182] Loaded profile config "newest-cni-572909": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 14:09:12.843135 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHHostname
	I0318 14:09:12.846051 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:09:12.846382 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ca:ad", ip: ""} in network mk-newest-cni-572909: {Iface:virbr4 ExpiryTime:2024-03-18 15:09:04 +0000 UTC Type:0 Mac:52:54:00:a2:ca:ad Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:newest-cni-572909 Clientid:01:52:54:00:a2:ca:ad}
	I0318 14:09:12.846414 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined IP address 192.168.72.13 and MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:09:12.846544 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHPort
	I0318 14:09:12.846773 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHKeyPath
	I0318 14:09:12.846948 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHKeyPath
	I0318 14:09:12.847152 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHUsername
	I0318 14:09:12.847370 1162655 main.go:141] libmachine: Using SSH client type: native
	I0318 14:09:12.847564 1162655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.13 22 <nil> <nil>}
	I0318 14:09:12.847580 1162655 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 14:09:13.149703 1162655 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 14:09:13.149743 1162655 main.go:141] libmachine: Checking connection to Docker...
	I0318 14:09:13.149753 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetURL
	I0318 14:09:13.151104 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | Using libvirt version 6000000
	I0318 14:09:13.153501 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:09:13.153808 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ca:ad", ip: ""} in network mk-newest-cni-572909: {Iface:virbr4 ExpiryTime:2024-03-18 15:09:04 +0000 UTC Type:0 Mac:52:54:00:a2:ca:ad Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:newest-cni-572909 Clientid:01:52:54:00:a2:ca:ad}
	I0318 14:09:13.153840 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined IP address 192.168.72.13 and MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:09:13.154033 1162655 main.go:141] libmachine: Docker is up and running!
	I0318 14:09:13.154056 1162655 main.go:141] libmachine: Reticulating splines...
	I0318 14:09:13.154064 1162655 client.go:171] duration metric: took 24.746357796s to LocalClient.Create
	I0318 14:09:13.154087 1162655 start.go:167] duration metric: took 24.746420242s to libmachine.API.Create "newest-cni-572909"
	I0318 14:09:13.154100 1162655 start.go:293] postStartSetup for "newest-cni-572909" (driver="kvm2")
	I0318 14:09:13.154117 1162655 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 14:09:13.154156 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .DriverName
	I0318 14:09:13.154422 1162655 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 14:09:13.154448 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHHostname
	I0318 14:09:13.156703 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:09:13.157108 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ca:ad", ip: ""} in network mk-newest-cni-572909: {Iface:virbr4 ExpiryTime:2024-03-18 15:09:04 +0000 UTC Type:0 Mac:52:54:00:a2:ca:ad Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:newest-cni-572909 Clientid:01:52:54:00:a2:ca:ad}
	I0318 14:09:13.157139 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined IP address 192.168.72.13 and MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:09:13.157232 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHPort
	I0318 14:09:13.157410 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHKeyPath
	I0318 14:09:13.157559 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHUsername
	I0318 14:09:13.157729 1162655 sshutil.go:53] new ssh client: &{IP:192.168.72.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/newest-cni-572909/id_rsa Username:docker}
	I0318 14:09:13.248603 1162655 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 14:09:13.253817 1162655 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 14:09:13.253850 1162655 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/addons for local assets ...
	I0318 14:09:13.253966 1162655 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/files for local assets ...
	I0318 14:09:13.254066 1162655 filesync.go:149] local asset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> 11141362.pem in /etc/ssl/certs
	I0318 14:09:13.254200 1162655 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 14:09:13.265559 1162655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 14:09:13.299349 1162655 start.go:296] duration metric: took 145.234607ms for postStartSetup
	I0318 14:09:13.299449 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetConfigRaw
	I0318 14:09:13.300189 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetIP
	I0318 14:09:13.302864 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:09:13.303233 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ca:ad", ip: ""} in network mk-newest-cni-572909: {Iface:virbr4 ExpiryTime:2024-03-18 15:09:04 +0000 UTC Type:0 Mac:52:54:00:a2:ca:ad Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:newest-cni-572909 Clientid:01:52:54:00:a2:ca:ad}
	I0318 14:09:13.303256 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined IP address 192.168.72.13 and MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:09:13.303591 1162655 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/newest-cni-572909/config.json ...
	I0318 14:09:13.303752 1162655 start.go:128] duration metric: took 24.913593325s to createHost
	I0318 14:09:13.303776 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHHostname
	I0318 14:09:13.306210 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:09:13.306568 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ca:ad", ip: ""} in network mk-newest-cni-572909: {Iface:virbr4 ExpiryTime:2024-03-18 15:09:04 +0000 UTC Type:0 Mac:52:54:00:a2:ca:ad Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:newest-cni-572909 Clientid:01:52:54:00:a2:ca:ad}
	I0318 14:09:13.306602 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined IP address 192.168.72.13 and MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:09:13.306719 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHPort
	I0318 14:09:13.306939 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHKeyPath
	I0318 14:09:13.307120 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHKeyPath
	I0318 14:09:13.307317 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHUsername
	I0318 14:09:13.307501 1162655 main.go:141] libmachine: Using SSH client type: native
	I0318 14:09:13.307691 1162655 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.13 22 <nil> <nil>}
	I0318 14:09:13.307703 1162655 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 14:09:13.429768 1162655 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710770953.399200776
	
	I0318 14:09:13.429796 1162655 fix.go:216] guest clock: 1710770953.399200776
	I0318 14:09:13.429805 1162655 fix.go:229] Guest: 2024-03-18 14:09:13.399200776 +0000 UTC Remote: 2024-03-18 14:09:13.303764685 +0000 UTC m=+25.043781149 (delta=95.436091ms)
	I0318 14:09:13.429825 1162655 fix.go:200] guest clock delta is within tolerance: 95.436091ms
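The guest clock check above runs `date +%s.%N` on the VM and compares the result to the host-side timestamp. A small sketch of parsing that output and testing the delta against a tolerance; the tolerance value passed by the caller is an assumption for illustration.

package clocksketch

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestClock converts "seconds.nanoseconds" output from `date +%s.%N`
// into a time.Time.
func guestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, _ = strconv.ParseInt(parts[1], 10, 64)
	}
	return time.Unix(sec, nsec), nil
}

// withinTolerance errors if the absolute guest/host delta exceeds tol.
func withinTolerance(guest, host time.Time, tol time.Duration) error {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	if delta > tol {
		return fmt.Errorf("clock delta %v exceeds tolerance %v", delta, tol)
	}
	return nil
}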
	I0318 14:09:13.429830 1162655 start.go:83] releasing machines lock for "newest-cni-572909", held for 25.039749991s
	I0318 14:09:13.429849 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .DriverName
	I0318 14:09:13.430098 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetIP
	I0318 14:09:13.432850 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:09:13.433341 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ca:ad", ip: ""} in network mk-newest-cni-572909: {Iface:virbr4 ExpiryTime:2024-03-18 15:09:04 +0000 UTC Type:0 Mac:52:54:00:a2:ca:ad Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:newest-cni-572909 Clientid:01:52:54:00:a2:ca:ad}
	I0318 14:09:13.433369 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined IP address 192.168.72.13 and MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:09:13.433529 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .DriverName
	I0318 14:09:13.434110 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .DriverName
	I0318 14:09:13.434330 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .DriverName
	I0318 14:09:13.434430 1162655 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 14:09:13.434487 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHHostname
	I0318 14:09:13.434590 1162655 ssh_runner.go:195] Run: cat /version.json
	I0318 14:09:13.434621 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHHostname
	I0318 14:09:13.437099 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:09:13.437412 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:09:13.437532 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ca:ad", ip: ""} in network mk-newest-cni-572909: {Iface:virbr4 ExpiryTime:2024-03-18 15:09:04 +0000 UTC Type:0 Mac:52:54:00:a2:ca:ad Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:newest-cni-572909 Clientid:01:52:54:00:a2:ca:ad}
	I0318 14:09:13.437569 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined IP address 192.168.72.13 and MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:09:13.437736 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHPort
	I0318 14:09:13.437808 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ca:ad", ip: ""} in network mk-newest-cni-572909: {Iface:virbr4 ExpiryTime:2024-03-18 15:09:04 +0000 UTC Type:0 Mac:52:54:00:a2:ca:ad Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:newest-cni-572909 Clientid:01:52:54:00:a2:ca:ad}
	I0318 14:09:13.437832 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined IP address 192.168.72.13 and MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:09:13.437920 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHKeyPath
	I0318 14:09:13.438008 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHPort
	I0318 14:09:13.438126 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHUsername
	I0318 14:09:13.438192 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHKeyPath
	I0318 14:09:13.438253 1162655 sshutil.go:53] new ssh client: &{IP:192.168.72.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/newest-cni-572909/id_rsa Username:docker}
	I0318 14:09:13.438400 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHUsername
	I0318 14:09:13.438572 1162655 sshutil.go:53] new ssh client: &{IP:192.168.72.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/newest-cni-572909/id_rsa Username:docker}
	I0318 14:09:13.523100 1162655 ssh_runner.go:195] Run: systemctl --version
	I0318 14:09:13.558168 1162655 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 14:09:13.723894 1162655 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 14:09:13.731990 1162655 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 14:09:13.732072 1162655 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 14:09:13.749587 1162655 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 14:09:13.749622 1162655 start.go:494] detecting cgroup driver to use...
	I0318 14:09:13.749726 1162655 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 14:09:13.767664 1162655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 14:09:13.782336 1162655 docker.go:217] disabling cri-docker service (if available) ...
	I0318 14:09:13.782393 1162655 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 14:09:13.798491 1162655 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 14:09:13.814535 1162655 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 14:09:13.940744 1162655 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 14:09:14.102302 1162655 docker.go:233] disabling docker service ...
	I0318 14:09:14.102387 1162655 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 14:09:14.120127 1162655 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 14:09:14.136817 1162655 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 14:09:14.286900 1162655 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 14:09:14.432573 1162655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 14:09:14.449826 1162655 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 14:09:14.471279 1162655 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 14:09:14.471367 1162655 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:09:14.484268 1162655 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 14:09:14.484362 1162655 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:09:14.497079 1162655 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:09:14.509721 1162655 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:09:14.522770 1162655 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 14:09:14.536423 1162655 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 14:09:14.547579 1162655 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 14:09:14.547651 1162655 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 14:09:14.564232 1162655 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 14:09:14.576678 1162655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:09:14.727909 1162655 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 14:09:14.890404 1162655 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 14:09:14.890477 1162655 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
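"Will wait 60s for socket path /var/run/crio/crio.sock" above is a stat-and-poll loop with a deadline. A short sketch of that idea; the 500ms poll interval is an assumption, not the value minikube uses.

package socketwait

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for path to exist until timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}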
	I0318 14:09:14.895941 1162655 start.go:562] Will wait 60s for crictl version
	I0318 14:09:14.896006 1162655 ssh_runner.go:195] Run: which crictl
	I0318 14:09:14.900380 1162655 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 14:09:14.950702 1162655 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 14:09:14.950800 1162655 ssh_runner.go:195] Run: crio --version
	I0318 14:09:14.982248 1162655 ssh_runner.go:195] Run: crio --version
	I0318 14:09:15.018073 1162655 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0318 14:09:15.019549 1162655 main.go:141] libmachine: (newest-cni-572909) Calling .GetIP
	I0318 14:09:15.022354 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:09:15.022788 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ca:ad", ip: ""} in network mk-newest-cni-572909: {Iface:virbr4 ExpiryTime:2024-03-18 15:09:04 +0000 UTC Type:0 Mac:52:54:00:a2:ca:ad Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:newest-cni-572909 Clientid:01:52:54:00:a2:ca:ad}
	I0318 14:09:15.022823 1162655 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined IP address 192.168.72.13 and MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:09:15.023048 1162655 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0318 14:09:15.027844 1162655 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 14:09:15.044526 1162655 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0318 14:09:15.045925 1162655 kubeadm.go:877] updating cluster {Name:newest-cni-572909 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-572909 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.13 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 14:09:15.046071 1162655 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0318 14:09:15.046144 1162655 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:09:15.085959 1162655 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0318 14:09:15.086030 1162655 ssh_runner.go:195] Run: which lz4
	I0318 14:09:15.090618 1162655 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0318 14:09:15.095602 1162655 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 14:09:15.095637 1162655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (401853962 bytes)
	I0318 14:09:16.864857 1162655 crio.go:444] duration metric: took 1.774285314s to copy over tarball
	I0318 14:09:16.864963 1162655 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 14:09:19.476250 1162655 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.611228762s)
	I0318 14:09:19.476308 1162655 crio.go:451] duration metric: took 2.611413337s to extract the tarball
	I0318 14:09:19.476319 1162655 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 14:09:19.518878 1162655 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:09:19.570530 1162655 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 14:09:19.570562 1162655 cache_images.go:84] Images are preloaded, skipping loading
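The preload decision above hinges on whether a pivot image (here registry.k8s.io/kube-apiserver:v1.29.0-rc.2) shows up in `sudo crictl images --output json`: missing before the tarball is extracted, present on the re-check. A sketch of that lookup; the struct covers only the repoTags field the check needs and assumes crictl's CRI-style JSON output.

package preloadsketch

import "encoding/json"

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether tag appears in crictl's JSON image listing.
func hasImage(out []byte, tag string) (bool, error) {
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}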
	I0318 14:09:19.570572 1162655 kubeadm.go:928] updating node { 192.168.72.13 8443 v1.29.0-rc.2 crio true true} ...
	I0318 14:09:19.570847 1162655 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-572909 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-572909 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 14:09:19.571016 1162655 ssh_runner.go:195] Run: crio config
	I0318 14:09:19.622959 1162655 cni.go:84] Creating CNI manager for ""
	I0318 14:09:19.622994 1162655 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:09:19.623011 1162655 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0318 14:09:19.623044 1162655 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.13 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-572909 NodeName:newest-cni-572909 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 14:09:19.623286 1162655 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-572909"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.13
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.13"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
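
	The config dump above is what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines further down and eventually fed to kubeadm init. As a sketch only (both paths are taken from the surrounding log lines), the same file could be exercised without modifying the node via kubeadm's --dry-run flag:

	    # Sketch: dry-run the generated config with the bundled kubeadm binary.
	    # The binary path and config path are the ones that appear in this log.
	    sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" \
	      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run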
	
	I0318 14:09:19.623373 1162655 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0318 14:09:19.635572 1162655 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 14:09:19.635631 1162655 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 14:09:19.647274 1162655 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (358 bytes)
	I0318 14:09:19.666267 1162655 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0318 14:09:19.684597 1162655 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
	I0318 14:09:19.702981 1162655 ssh_runner.go:195] Run: grep 192.168.72.13	control-plane.minikube.internal$ /etc/hosts
	I0318 14:09:19.707164 1162655 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 14:09:19.722304 1162655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:09:19.864470 1162655 ssh_runner.go:195] Run: sudo systemctl start kubelet
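	Whether the kubelet unit actually came up is only confirmed indirectly later in the log; to check it directly on the node, the usual systemd commands would be (a sketch, not part of this run):

	    # Quick health check of the kubelet unit started above.
	    sudo systemctl is-active kubelet
	    sudo journalctl -u kubelet --no-pager -n 20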
	I0318 14:09:19.902409 1162655 certs.go:68] Setting up /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/newest-cni-572909 for IP: 192.168.72.13
	I0318 14:09:19.902433 1162655 certs.go:194] generating shared ca certs ...
	I0318 14:09:19.902449 1162655 certs.go:226] acquiring lock for ca certs: {Name:mka24dbce78b8549c3bcfe7842863cce2dcda207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:09:19.902629 1162655 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key
	I0318 14:09:19.902691 1162655 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key
	I0318 14:09:19.902705 1162655 certs.go:256] generating profile certs ...
	I0318 14:09:19.902825 1162655 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/newest-cni-572909/client.key
	I0318 14:09:19.902848 1162655 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/newest-cni-572909/client.crt with IP's: []
	I0318 14:09:20.020051 1162655 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/newest-cni-572909/client.crt ...
	I0318 14:09:20.020104 1162655 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/newest-cni-572909/client.crt: {Name:mk7240fc1097a1b460bf5a6558bd6e086428dc95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:09:20.020345 1162655 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/newest-cni-572909/client.key ...
	I0318 14:09:20.020378 1162655 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/newest-cni-572909/client.key: {Name:mk93e2a7015388c6bf0384d8cfd5d77bda2649ac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:09:20.020503 1162655 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/newest-cni-572909/apiserver.key.3b943828
	I0318 14:09:20.020520 1162655 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/newest-cni-572909/apiserver.crt.3b943828 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.13]
	I0318 14:09:20.387855 1162655 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/newest-cni-572909/apiserver.crt.3b943828 ...
	I0318 14:09:20.387893 1162655 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/newest-cni-572909/apiserver.crt.3b943828: {Name:mk75ac66bb05d7d66e86f1e43123ccaf9a15457a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:09:20.388059 1162655 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/newest-cni-572909/apiserver.key.3b943828 ...
	I0318 14:09:20.388093 1162655 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/newest-cni-572909/apiserver.key.3b943828: {Name:mkbeeacc6aab35602af5ed8d18f11f654129268e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:09:20.388172 1162655 certs.go:381] copying /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/newest-cni-572909/apiserver.crt.3b943828 -> /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/newest-cni-572909/apiserver.crt
	I0318 14:09:20.388249 1162655 certs.go:385] copying /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/newest-cni-572909/apiserver.key.3b943828 -> /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/newest-cni-572909/apiserver.key
	I0318 14:09:20.388299 1162655 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/newest-cni-572909/proxy-client.key
	I0318 14:09:20.388315 1162655 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/newest-cni-572909/proxy-client.crt with IP's: []
	I0318 14:09:20.536854 1162655 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/newest-cni-572909/proxy-client.crt ...
	I0318 14:09:20.536892 1162655 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/newest-cni-572909/proxy-client.crt: {Name:mk35edba167b04961a215f03569832162c2b4036 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:09:20.537057 1162655 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/newest-cni-572909/proxy-client.key ...
	I0318 14:09:20.537070 1162655 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/newest-cni-572909/proxy-client.key: {Name:mkb99db9617f358952a413534b78cd871dc991d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:09:20.537278 1162655 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem (1338 bytes)
	W0318 14:09:20.537331 1162655 certs.go:480] ignoring /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136_empty.pem, impossibly tiny 0 bytes
	I0318 14:09:20.537341 1162655 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 14:09:20.537365 1162655 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem (1078 bytes)
	I0318 14:09:20.537386 1162655 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem (1123 bytes)
	I0318 14:09:20.537411 1162655 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem (1679 bytes)
	I0318 14:09:20.537450 1162655 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 14:09:20.538068 1162655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 14:09:20.566078 1162655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 14:09:20.595092 1162655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 14:09:20.622385 1162655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 14:09:20.650309 1162655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/newest-cni-572909/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0318 14:09:20.707927 1162655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/newest-cni-572909/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 14:09:20.743521 1162655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/newest-cni-572909/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 14:09:20.836303 1162655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/newest-cni-572909/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 14:09:20.916027 1162655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 14:09:20.945926 1162655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem --> /usr/share/ca-certificates/1114136.pem (1338 bytes)
	I0318 14:09:20.973083 1162655 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /usr/share/ca-certificates/11141362.pem (1708 bytes)
	I0318 14:09:21.000137 1162655 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 14:09:21.020848 1162655 ssh_runner.go:195] Run: openssl version
	I0318 14:09:21.027441 1162655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11141362.pem && ln -fs /usr/share/ca-certificates/11141362.pem /etc/ssl/certs/11141362.pem"
	I0318 14:09:21.040669 1162655 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11141362.pem
	I0318 14:09:21.046906 1162655 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:26 /usr/share/ca-certificates/11141362.pem
	I0318 14:09:21.047000 1162655 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11141362.pem
	I0318 14:09:21.055719 1162655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11141362.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 14:09:21.069193 1162655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 14:09:21.084055 1162655 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:09:21.089472 1162655 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:09:21.089537 1162655 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:09:21.097147 1162655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 14:09:21.112092 1162655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1114136.pem && ln -fs /usr/share/ca-certificates/1114136.pem /etc/ssl/certs/1114136.pem"
	I0318 14:09:21.125325 1162655 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1114136.pem
	I0318 14:09:21.130810 1162655 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:26 /usr/share/ca-certificates/1114136.pem
	I0318 14:09:21.130873 1162655 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1114136.pem
	I0318 14:09:21.137557 1162655 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1114136.pem /etc/ssl/certs/51391683.0"
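	The three ln -fs steps above register each PEM under /etc/ssl/certs by its OpenSSL subject hash (b5213941.0, 3ec20f2e.0 and 51391683.0 in this run). A small sketch of how to confirm that a link name matches the hash OpenSSL reports for the certificate:

	    # Sketch: the symlink name should equal the subject hash of the cert.
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    ls -l "/etc/ssl/certs/${hash}.0"   # expected to resolve to minikubeCA.pem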
	I0318 14:09:21.151985 1162655 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 14:09:21.157450 1162655 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 14:09:21.157516 1162655 kubeadm.go:391] StartCluster: {Name:newest-cni-572909 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.0-rc.2 ClusterName:newest-cni-572909 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.13 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Moun
t9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:09:21.157630 1162655 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 14:09:21.157685 1162655 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:09:21.205378 1162655 cri.go:89] found id: ""
	I0318 14:09:21.205466 1162655 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0318 14:09:21.218766 1162655 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:09:21.232679 1162655 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:09:21.246852 1162655 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:09:21.246875 1162655 kubeadm.go:156] found existing configuration files:
	
	I0318 14:09:21.246929 1162655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:09:21.259163 1162655 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:09:21.259222 1162655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:09:21.271664 1162655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:09:21.283261 1162655 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:09:21.283326 1162655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:09:21.294970 1162655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:09:21.306080 1162655 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:09:21.306141 1162655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:09:21.317587 1162655 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:09:21.328985 1162655 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:09:21.329054 1162655 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
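	The four grep/rm pairs above are minikube's stale-config cleanup: any kubeconfig under /etc/kubernetes that does not already point at https://control-plane.minikube.internal:8443 is removed before kubeadm init runs, which on a first start like this one removes nothing. The same per-file check, written as a compact loop (sketch only):

	    # Equivalent loop form of the checks in the log above.
	    for f in admin kubelet controller-manager scheduler; do
	      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
	        || sudo rm -f "/etc/kubernetes/${f}.conf"
	    done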
	I0318 14:09:21.340356 1162655 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 14:09:21.486099 1162655 kubeadm.go:309] [init] Using Kubernetes version: v1.29.0-rc.2
	I0318 14:09:21.486154 1162655 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 14:09:21.681162 1162655 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 14:09:21.681341 1162655 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 14:09:21.681489 1162655 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0318 14:09:21.931073 1162655 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	
	
	==> CRI-O <==
	Mar 18 14:09:23 no-preload-537236 crio[701]: time="2024-03-18 14:09:23.204616108Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710770963204565371,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8550019b-ccca-452d-893e-d8be35eece9c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:09:23 no-preload-537236 crio[701]: time="2024-03-18 14:09:23.205371971Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fbc9d231-4356-4bf5-8f08-5a6f4d244877 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:09:23 no-preload-537236 crio[701]: time="2024-03-18 14:09:23.205421869Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fbc9d231-4356-4bf5-8f08-5a6f4d244877 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:09:23 no-preload-537236 crio[701]: time="2024-03-18 14:09:23.205624469Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a5eff0b76358ea55983537e218c15405bb9546598fa0378d56c1acb15c091de1,PodSandboxId:746ec33a96d0e612980c6b0b0d6c2df8b6e08e74d0a81969779164cd02a197fb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710770100524048224,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f02049f6-a08f-45ac-b285-cbdbb260ab59,},Annotations:map[string]string{io.kubernetes.container.hash: 4f6bc6d5,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad11334bb2cb49a5b236a15b70d540d4a984767f4fd3ce605df4892fea682805,PodSandboxId:9f64912cb81a81ba68bd1f0840ce94e2543f7d9a7c62637f43bf344aae880e8c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710770099602070811,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-grqdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ce5620-c97b-4ecd-baba-c5fc840b8127,},Annotations:map[string]string{io.kubernetes.container.hash: 7ddacc4d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:529ec1988da2e52b372b7a67970abd9c5eaebb01e10450040a5b25b82a337fc9,PodSandboxId:94429a766e396734e972bc6c3bf2803e6db513cf810a6812ee87be60fe28a4a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710770099435758608,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-bhh4k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d
6f9b9a-2f7e-46bc-9224-57dc077e444d,},Annotations:map[string]string{io.kubernetes.container.hash: 5c214c4b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dcf9e1b53ce425f8b6695e6e1c5691bae3875e174faf197d4825fd5b78e2f87,PodSandboxId:2f6154cca24c71b1163177715948c27ba0ac043b9f210321ede1fd2704422399,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:
1710770099280398184,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6c4c5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dd6fcfc-7510-418d-baab-a0ec364391c1,},Annotations:map[string]string{io.kubernetes.container.hash: 8e11475c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f62109d6bfecf0c909738de06ab89c1fb9737a2736f389ecd3c3d8718ef53df0,PodSandboxId:3f0397f06b979fe2fe0a0e28151f59b286486fdcfa25722148590b84ca493234,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710770079753027895,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-537236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1a025326bb6180b2a8f6c316293e5ad,},Annotations:map[string]string{io.kubernetes.container.hash: 86d13242,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a9a0041888c410f1f430fe9a078ee274b6be04c226035bd824ee2ab3f6dbc4a,PodSandboxId:bfeb40b64e804c9d6669b610714b5178e4c190f9875d9348c4b16c727247dc12,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710770079723179522,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-537236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8632dce66779f857721d3ec20f67a3e4,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ef410e9166d54805c6253404b57accb39b524473e6d16d7c0a84754e5cb7fa1,PodSandboxId:9892446bae63666a1037572820b7a57d008e4435cb26ce0777608b6ed81df88e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710770079752133032,Labels:map[string]string{io.kubernetes
.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-537236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c376014bcfa6838e65b773f219f3fb58,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a992a5bf3001600ca4f876d0e19f510d8fb11c6a79d095665f3c25866f3c882a,PodSandboxId:04c4dcff6c197e27be359861a79cb96f6f471c32cdaae1c93c7ddd5969fda6c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710770079609134834,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-537236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f3699be8bde2669bbf0e03e1ab70872,},Annotations:map[string]string{io.kubernetes.container.hash: 18df3f70,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fbc9d231-4356-4bf5-8f08-5a6f4d244877 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:09:23 no-preload-537236 crio[701]: time="2024-03-18 14:09:23.249554428Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1d9efcc7-d0db-4d1b-80df-ff9c55bd0746 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:09:23 no-preload-537236 crio[701]: time="2024-03-18 14:09:23.250228234Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1d9efcc7-d0db-4d1b-80df-ff9c55bd0746 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:09:23 no-preload-537236 crio[701]: time="2024-03-18 14:09:23.251975068Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=44c99994-16f6-42c1-9a43-2e8ed010a336 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:09:23 no-preload-537236 crio[701]: time="2024-03-18 14:09:23.252333711Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710770963252307785,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=44c99994-16f6-42c1-9a43-2e8ed010a336 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:09:23 no-preload-537236 crio[701]: time="2024-03-18 14:09:23.253060505Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e00165dd-1014-4951-aab1-2609d5444ad0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:09:23 no-preload-537236 crio[701]: time="2024-03-18 14:09:23.253188327Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e00165dd-1014-4951-aab1-2609d5444ad0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:09:23 no-preload-537236 crio[701]: time="2024-03-18 14:09:23.253511588Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a5eff0b76358ea55983537e218c15405bb9546598fa0378d56c1acb15c091de1,PodSandboxId:746ec33a96d0e612980c6b0b0d6c2df8b6e08e74d0a81969779164cd02a197fb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710770100524048224,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f02049f6-a08f-45ac-b285-cbdbb260ab59,},Annotations:map[string]string{io.kubernetes.container.hash: 4f6bc6d5,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad11334bb2cb49a5b236a15b70d540d4a984767f4fd3ce605df4892fea682805,PodSandboxId:9f64912cb81a81ba68bd1f0840ce94e2543f7d9a7c62637f43bf344aae880e8c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710770099602070811,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-grqdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ce5620-c97b-4ecd-baba-c5fc840b8127,},Annotations:map[string]string{io.kubernetes.container.hash: 7ddacc4d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:529ec1988da2e52b372b7a67970abd9c5eaebb01e10450040a5b25b82a337fc9,PodSandboxId:94429a766e396734e972bc6c3bf2803e6db513cf810a6812ee87be60fe28a4a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710770099435758608,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-bhh4k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d
6f9b9a-2f7e-46bc-9224-57dc077e444d,},Annotations:map[string]string{io.kubernetes.container.hash: 5c214c4b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dcf9e1b53ce425f8b6695e6e1c5691bae3875e174faf197d4825fd5b78e2f87,PodSandboxId:2f6154cca24c71b1163177715948c27ba0ac043b9f210321ede1fd2704422399,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:
1710770099280398184,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6c4c5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dd6fcfc-7510-418d-baab-a0ec364391c1,},Annotations:map[string]string{io.kubernetes.container.hash: 8e11475c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f62109d6bfecf0c909738de06ab89c1fb9737a2736f389ecd3c3d8718ef53df0,PodSandboxId:3f0397f06b979fe2fe0a0e28151f59b286486fdcfa25722148590b84ca493234,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710770079753027895,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-537236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1a025326bb6180b2a8f6c316293e5ad,},Annotations:map[string]string{io.kubernetes.container.hash: 86d13242,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a9a0041888c410f1f430fe9a078ee274b6be04c226035bd824ee2ab3f6dbc4a,PodSandboxId:bfeb40b64e804c9d6669b610714b5178e4c190f9875d9348c4b16c727247dc12,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710770079723179522,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-537236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8632dce66779f857721d3ec20f67a3e4,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ef410e9166d54805c6253404b57accb39b524473e6d16d7c0a84754e5cb7fa1,PodSandboxId:9892446bae63666a1037572820b7a57d008e4435cb26ce0777608b6ed81df88e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710770079752133032,Labels:map[string]string{io.kubernetes
.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-537236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c376014bcfa6838e65b773f219f3fb58,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a992a5bf3001600ca4f876d0e19f510d8fb11c6a79d095665f3c25866f3c882a,PodSandboxId:04c4dcff6c197e27be359861a79cb96f6f471c32cdaae1c93c7ddd5969fda6c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710770079609134834,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-537236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f3699be8bde2669bbf0e03e1ab70872,},Annotations:map[string]string{io.kubernetes.container.hash: 18df3f70,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e00165dd-1014-4951-aab1-2609d5444ad0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:09:23 no-preload-537236 crio[701]: time="2024-03-18 14:09:23.298169609Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8b0f5feb-c4ca-4c83-b52a-5afe8978e980 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:09:23 no-preload-537236 crio[701]: time="2024-03-18 14:09:23.298287820Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8b0f5feb-c4ca-4c83-b52a-5afe8978e980 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:09:23 no-preload-537236 crio[701]: time="2024-03-18 14:09:23.300228962Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=79956db8-1c52-49fd-aee4-05563c37ad1f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:09:23 no-preload-537236 crio[701]: time="2024-03-18 14:09:23.300642060Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710770963300611329,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=79956db8-1c52-49fd-aee4-05563c37ad1f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:09:23 no-preload-537236 crio[701]: time="2024-03-18 14:09:23.301744785Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c3f4315f-02f8-444c-b6e3-37e999c1b6ef name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:09:23 no-preload-537236 crio[701]: time="2024-03-18 14:09:23.301902664Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c3f4315f-02f8-444c-b6e3-37e999c1b6ef name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:09:23 no-preload-537236 crio[701]: time="2024-03-18 14:09:23.302194500Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a5eff0b76358ea55983537e218c15405bb9546598fa0378d56c1acb15c091de1,PodSandboxId:746ec33a96d0e612980c6b0b0d6c2df8b6e08e74d0a81969779164cd02a197fb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710770100524048224,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f02049f6-a08f-45ac-b285-cbdbb260ab59,},Annotations:map[string]string{io.kubernetes.container.hash: 4f6bc6d5,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad11334bb2cb49a5b236a15b70d540d4a984767f4fd3ce605df4892fea682805,PodSandboxId:9f64912cb81a81ba68bd1f0840ce94e2543f7d9a7c62637f43bf344aae880e8c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710770099602070811,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-grqdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ce5620-c97b-4ecd-baba-c5fc840b8127,},Annotations:map[string]string{io.kubernetes.container.hash: 7ddacc4d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:529ec1988da2e52b372b7a67970abd9c5eaebb01e10450040a5b25b82a337fc9,PodSandboxId:94429a766e396734e972bc6c3bf2803e6db513cf810a6812ee87be60fe28a4a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710770099435758608,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-bhh4k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d
6f9b9a-2f7e-46bc-9224-57dc077e444d,},Annotations:map[string]string{io.kubernetes.container.hash: 5c214c4b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dcf9e1b53ce425f8b6695e6e1c5691bae3875e174faf197d4825fd5b78e2f87,PodSandboxId:2f6154cca24c71b1163177715948c27ba0ac043b9f210321ede1fd2704422399,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:
1710770099280398184,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6c4c5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dd6fcfc-7510-418d-baab-a0ec364391c1,},Annotations:map[string]string{io.kubernetes.container.hash: 8e11475c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f62109d6bfecf0c909738de06ab89c1fb9737a2736f389ecd3c3d8718ef53df0,PodSandboxId:3f0397f06b979fe2fe0a0e28151f59b286486fdcfa25722148590b84ca493234,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710770079753027895,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-537236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1a025326bb6180b2a8f6c316293e5ad,},Annotations:map[string]string{io.kubernetes.container.hash: 86d13242,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a9a0041888c410f1f430fe9a078ee274b6be04c226035bd824ee2ab3f6dbc4a,PodSandboxId:bfeb40b64e804c9d6669b610714b5178e4c190f9875d9348c4b16c727247dc12,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710770079723179522,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-537236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8632dce66779f857721d3ec20f67a3e4,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ef410e9166d54805c6253404b57accb39b524473e6d16d7c0a84754e5cb7fa1,PodSandboxId:9892446bae63666a1037572820b7a57d008e4435cb26ce0777608b6ed81df88e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710770079752133032,Labels:map[string]string{io.kubernetes
.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-537236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c376014bcfa6838e65b773f219f3fb58,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a992a5bf3001600ca4f876d0e19f510d8fb11c6a79d095665f3c25866f3c882a,PodSandboxId:04c4dcff6c197e27be359861a79cb96f6f471c32cdaae1c93c7ddd5969fda6c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710770079609134834,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-537236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f3699be8bde2669bbf0e03e1ab70872,},Annotations:map[string]string{io.kubernetes.container.hash: 18df3f70,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c3f4315f-02f8-444c-b6e3-37e999c1b6ef name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:09:23 no-preload-537236 crio[701]: time="2024-03-18 14:09:23.340255412Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a2be6611-b515-4949-9055-e35fc9f0d611 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:09:23 no-preload-537236 crio[701]: time="2024-03-18 14:09:23.340382049Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a2be6611-b515-4949-9055-e35fc9f0d611 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:09:23 no-preload-537236 crio[701]: time="2024-03-18 14:09:23.342215017Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c90d7c62-b9f9-4319-b67a-35135a3c6f60 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:09:23 no-preload-537236 crio[701]: time="2024-03-18 14:09:23.343067173Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710770963342952950,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c90d7c62-b9f9-4319-b67a-35135a3c6f60 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:09:23 no-preload-537236 crio[701]: time="2024-03-18 14:09:23.344232083Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=026308e4-c8d8-4667-827f-dcb930ba54aa name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:09:23 no-preload-537236 crio[701]: time="2024-03-18 14:09:23.344420556Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=026308e4-c8d8-4667-827f-dcb930ba54aa name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:09:23 no-preload-537236 crio[701]: time="2024-03-18 14:09:23.344722101Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a5eff0b76358ea55983537e218c15405bb9546598fa0378d56c1acb15c091de1,PodSandboxId:746ec33a96d0e612980c6b0b0d6c2df8b6e08e74d0a81969779164cd02a197fb,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710770100524048224,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f02049f6-a08f-45ac-b285-cbdbb260ab59,},Annotations:map[string]string{io.kubernetes.container.hash: 4f6bc6d5,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ad11334bb2cb49a5b236a15b70d540d4a984767f4fd3ce605df4892fea682805,PodSandboxId:9f64912cb81a81ba68bd1f0840ce94e2543f7d9a7c62637f43bf344aae880e8c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710770099602070811,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-grqdt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f4ce5620-c97b-4ecd-baba-c5fc840b8127,},Annotations:map[string]string{io.kubernetes.container.hash: 7ddacc4d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:529ec1988da2e52b372b7a67970abd9c5eaebb01e10450040a5b25b82a337fc9,PodSandboxId:94429a766e396734e972bc6c3bf2803e6db513cf810a6812ee87be60fe28a4a4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710770099435758608,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-bhh4k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6d
6f9b9a-2f7e-46bc-9224-57dc077e444d,},Annotations:map[string]string{io.kubernetes.container.hash: 5c214c4b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8dcf9e1b53ce425f8b6695e6e1c5691bae3875e174faf197d4825fd5b78e2f87,PodSandboxId:2f6154cca24c71b1163177715948c27ba0ac043b9f210321ede1fd2704422399,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:
1710770099280398184,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6c4c5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2dd6fcfc-7510-418d-baab-a0ec364391c1,},Annotations:map[string]string{io.kubernetes.container.hash: 8e11475c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f62109d6bfecf0c909738de06ab89c1fb9737a2736f389ecd3c3d8718ef53df0,PodSandboxId:3f0397f06b979fe2fe0a0e28151f59b286486fdcfa25722148590b84ca493234,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710770079753027895,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-537236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1a025326bb6180b2a8f6c316293e5ad,},Annotations:map[string]string{io.kubernetes.container.hash: 86d13242,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a9a0041888c410f1f430fe9a078ee274b6be04c226035bd824ee2ab3f6dbc4a,PodSandboxId:bfeb40b64e804c9d6669b610714b5178e4c190f9875d9348c4b16c727247dc12,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710770079723179522,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-537236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8632dce66779f857721d3ec20f67a3e4,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ef410e9166d54805c6253404b57accb39b524473e6d16d7c0a84754e5cb7fa1,PodSandboxId:9892446bae63666a1037572820b7a57d008e4435cb26ce0777608b6ed81df88e,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710770079752133032,Labels:map[string]string{io.kubernetes
.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-537236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c376014bcfa6838e65b773f219f3fb58,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a992a5bf3001600ca4f876d0e19f510d8fb11c6a79d095665f3c25866f3c882a,PodSandboxId:04c4dcff6c197e27be359861a79cb96f6f471c32cdaae1c93c7ddd5969fda6c9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710770079609134834,Labels:map[string]string{io.kubernetes.container.na
me: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-537236,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f3699be8bde2669bbf0e03e1ab70872,},Annotations:map[string]string{io.kubernetes.container.hash: 18df3f70,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=026308e4-c8d8-4667-827f-dcb930ba54aa name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a5eff0b76358e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   746ec33a96d0e       storage-provisioner
	ad11334bb2cb4       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   9f64912cb81a8       coredns-76f75df574-grqdt
	529ec1988da2e       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   94429a766e396       coredns-76f75df574-bhh4k
	8dcf9e1b53ce4       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834   14 minutes ago      Running             kube-proxy                0                   2f6154cca24c7       kube-proxy-6c4c5
	f62109d6bfecf       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   14 minutes ago      Running             etcd                      2                   3f0397f06b979       etcd-no-preload-537236
	9ef410e9166d5       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210   14 minutes ago      Running             kube-scheduler            2                   9892446bae636       kube-scheduler-no-preload-537236
	3a9a0041888c4       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d   14 minutes ago      Running             kube-controller-manager   2                   bfeb40b64e804       kube-controller-manager-no-preload-537236
	a992a5bf30016       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f   14 minutes ago      Running             kube-apiserver            2                   04c4dcff6c197       kube-apiserver-no-preload-537236
	
	
	==> coredns [529ec1988da2e52b372b7a67970abd9c5eaebb01e10450040a5b25b82a337fc9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [ad11334bb2cb49a5b236a15b70d540d4a984767f4fd3ce605df4892fea682805] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-537236
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-537236
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	                    minikube.k8s.io/name=no-preload-537236
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T13_54_46_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 13:54:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-537236
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 14:09:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 14:05:20 +0000   Mon, 18 Mar 2024 13:54:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 14:05:20 +0000   Mon, 18 Mar 2024 13:54:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 14:05:20 +0000   Mon, 18 Mar 2024 13:54:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 14:05:20 +0000   Mon, 18 Mar 2024 13:54:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.7
	  Hostname:    no-preload-537236
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 f4498343c5af4a83bb2a71cf0a0e9028
	  System UUID:                f4498343-c5af-4a83-bb2a-71cf0a0e9028
	  Boot ID:                    8e4f04ef-176c-4622-aadb-07fd4c5f4b88
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-bhh4k                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-76f75df574-grqdt                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-no-preload-537236                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-no-preload-537236             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-no-preload-537236    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-6c4c5                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-no-preload-537236             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-57f55c9bc5-tkq6h              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  Starting                 14m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m   kubelet          Node no-preload-537236 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m   kubelet          Node no-preload-537236 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m   kubelet          Node no-preload-537236 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             14m   kubelet          Node no-preload-537236 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  14m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                14m   kubelet          Node no-preload-537236 status is now: NodeReady
	  Normal  RegisteredNode           14m   node-controller  Node no-preload-537236 event: Registered Node no-preload-537236 in Controller
	
	
	==> dmesg <==
	[  +0.044647] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.560689] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.501153] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.697883] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000000] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.165969] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.059729] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070451] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.229963] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.169335] systemd-fstab-generator[660]: Ignoring "noauto" option for root device
	[  +0.284378] systemd-fstab-generator[685]: Ignoring "noauto" option for root device
	[ +17.046013] systemd-fstab-generator[1192]: Ignoring "noauto" option for root device
	[  +0.061314] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.078048] systemd-fstab-generator[1315]: Ignoring "noauto" option for root device
	[  +5.674403] kauditd_printk_skb: 100 callbacks suppressed
	[  +7.272688] kauditd_printk_skb: 44 callbacks suppressed
	[Mar18 13:50] kauditd_printk_skb: 20 callbacks suppressed
	[Mar18 13:54] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.597632] systemd-fstab-generator[3859]: Ignoring "noauto" option for root device
	[  +4.595978] kauditd_printk_skb: 55 callbacks suppressed
	[  +2.712006] systemd-fstab-generator[4180]: Ignoring "noauto" option for root device
	[ +12.534844] systemd-fstab-generator[4368]: Ignoring "noauto" option for root device
	[  +0.098053] kauditd_printk_skb: 14 callbacks suppressed
	[Mar18 13:56] kauditd_printk_skb: 78 callbacks suppressed
	
	
	==> etcd [f62109d6bfecf0c909738de06ab89c1fb9737a2736f389ecd3c3d8718ef53df0] <==
	{"level":"info","ts":"2024-03-18T13:54:40.373125Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"bb39151d8411994b became leader at term 2"}
	{"level":"info","ts":"2024-03-18T13:54:40.37315Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: bb39151d8411994b elected leader bb39151d8411994b at term 2"}
	{"level":"info","ts":"2024-03-18T13:54:40.377139Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"bb39151d8411994b","local-member-attributes":"{Name:no-preload-537236 ClientURLs:[https://192.168.39.7:2379]}","request-path":"/0/members/bb39151d8411994b/attributes","cluster-id":"3202df3d6e5aadcb","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-18T13:54:40.377358Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T13:54:40.382282Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T13:54:40.38273Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T13:54:40.385143Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-18T13:54:40.399146Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-18T13:54:40.38751Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.7:2379"}
	{"level":"info","ts":"2024-03-18T13:54:40.400579Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-18T13:54:40.400748Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3202df3d6e5aadcb","local-member-id":"bb39151d8411994b","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T13:54:40.408901Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T13:54:40.408984Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T14:04:40.544691Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":677}
	{"level":"info","ts":"2024-03-18T14:04:40.547133Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":677,"took":"2.021805ms","hash":1437427951}
	{"level":"info","ts":"2024-03-18T14:04:40.547204Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1437427951,"revision":677,"compact-revision":-1}
	{"level":"warn","ts":"2024-03-18T14:09:20.077473Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.653628ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-18T14:09:20.077809Z","caller":"traceutil/trace.go:171","msg":"trace[1747660600] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1146; }","duration":"113.105991ms","start":"2024-03-18T14:09:19.964661Z","end":"2024-03-18T14:09:20.077767Z","steps":["trace[1747660600] 'range keys from in-memory index tree'  (duration: 112.584381ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T14:09:21.333718Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"368.649203ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-18T14:09:21.334056Z","caller":"traceutil/trace.go:171","msg":"trace[368809754] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1147; }","duration":"368.990665ms","start":"2024-03-18T14:09:20.965045Z","end":"2024-03-18T14:09:21.334036Z","steps":["trace[368809754] 'range keys from in-memory index tree'  (duration: 368.57164ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T14:09:21.334145Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T14:09:20.965032Z","time spent":"369.086928ms","remote":"127.0.0.1:53746","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-03-18T14:09:22.059734Z","caller":"traceutil/trace.go:171","msg":"trace[1193850409] linearizableReadLoop","detail":"{readStateIndex:1338; appliedIndex:1337; }","duration":"190.683946ms","start":"2024-03-18T14:09:21.86903Z","end":"2024-03-18T14:09:22.059714Z","steps":["trace[1193850409] 'read index received'  (duration: 190.495601ms)","trace[1193850409] 'applied index is now lower than readState.Index'  (duration: 187.276µs)"],"step_count":2}
	{"level":"info","ts":"2024-03-18T14:09:22.060082Z","caller":"traceutil/trace.go:171","msg":"trace[1450465960] transaction","detail":"{read_only:false; response_revision:1148; number_of_response:1; }","duration":"213.029117ms","start":"2024-03-18T14:09:21.847042Z","end":"2024-03-18T14:09:22.060071Z","steps":["trace[1450465960] 'process raft request'  (duration: 212.524012ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T14:09:22.060271Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"191.230222ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-18T14:09:22.060323Z","caller":"traceutil/trace.go:171","msg":"trace[933127876] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1148; }","duration":"191.289951ms","start":"2024-03-18T14:09:21.869025Z","end":"2024-03-18T14:09:22.060315Z","steps":["trace[933127876] 'agreement among raft nodes before linearized reading'  (duration: 191.208228ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:09:24 up 20 min,  0 users,  load average: 0.16, 0.14, 0.17
	Linux no-preload-537236 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a992a5bf3001600ca4f876d0e19f510d8fb11c6a79d095665f3c25866f3c882a] <==
	I0318 14:02:43.411126       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 14:04:42.414366       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:04:42.414530       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0318 14:04:43.415330       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:04:43.415473       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0318 14:04:43.415504       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 14:04:43.415366       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:04:43.415621       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 14:04:43.417663       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 14:05:43.415997       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:05:43.416109       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0318 14:05:43.416142       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 14:05:43.418261       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:05:43.418446       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 14:05:43.418483       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 14:07:43.417096       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:07:43.417302       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0318 14:07:43.417326       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 14:07:43.419259       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:07:43.419463       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 14:07:43.419621       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [3a9a0041888c410f1f430fe9a078ee274b6be04c226035bd824ee2ab3f6dbc4a] <==
	I0318 14:03:28.186245       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:03:57.688068       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:03:58.197272       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:04:27.695175       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:04:28.208239       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:04:57.702687       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:04:58.224903       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:05:27.708580       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:05:28.233049       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:05:57.716438       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:05:58.242161       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0318 14:06:08.187229       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="394.261µs"
	I0318 14:06:19.183474       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="162.872µs"
	E0318 14:06:27.722884       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:06:28.251463       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:06:57.729026       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:06:58.274726       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:07:27.737771       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:07:28.284205       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:07:57.744089       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:07:58.292970       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:08:27.751156       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:08:28.301120       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:08:57.758082       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:08:58.316376       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [8dcf9e1b53ce425f8b6695e6e1c5691bae3875e174faf197d4825fd5b78e2f87] <==
	I0318 13:54:59.911432       1 server_others.go:72] "Using iptables proxy"
	I0318 13:54:59.985929       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.7"]
	I0318 13:55:00.345241       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0318 13:55:00.347702       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 13:55:00.347951       1 server_others.go:168] "Using iptables Proxier"
	I0318 13:55:00.367657       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 13:55:00.368034       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0318 13:55:00.368079       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:55:00.371459       1 config.go:188] "Starting service config controller"
	I0318 13:55:00.371513       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 13:55:00.371534       1 config.go:97] "Starting endpoint slice config controller"
	I0318 13:55:00.371544       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 13:55:00.378108       1 config.go:315] "Starting node config controller"
	I0318 13:55:00.378154       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 13:55:00.473615       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 13:55:00.473687       1 shared_informer.go:318] Caches are synced for service config
	I0318 13:55:00.480012       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [9ef410e9166d54805c6253404b57accb39b524473e6d16d7c0a84754e5cb7fa1] <==
	W0318 13:54:42.454216       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0318 13:54:42.454225       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0318 13:54:42.456083       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0318 13:54:42.456140       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0318 13:54:43.440813       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0318 13:54:43.440923       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0318 13:54:43.505809       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0318 13:54:43.505935       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0318 13:54:43.528622       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0318 13:54:43.528680       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0318 13:54:43.593201       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0318 13:54:43.593496       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0318 13:54:43.636431       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0318 13:54:43.636523       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0318 13:54:43.647718       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0318 13:54:43.647744       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0318 13:54:43.663319       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0318 13:54:43.663449       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0318 13:54:43.669770       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0318 13:54:43.669819       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0318 13:54:43.711218       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0318 13:54:43.711275       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0318 13:54:43.848258       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0318 13:54:43.848380       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 13:54:46.535186       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 18 14:06:46 no-preload-537236 kubelet[4187]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 14:06:46 no-preload-537236 kubelet[4187]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 14:06:46 no-preload-537236 kubelet[4187]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 14:06:46 no-preload-537236 kubelet[4187]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 14:07:01 no-preload-537236 kubelet[4187]: E0318 14:07:01.165649    4187 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tkq6h" podUID="14e262de-fd94-4888-96ab-75823109c8c2"
	Mar 18 14:07:15 no-preload-537236 kubelet[4187]: E0318 14:07:15.166628    4187 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tkq6h" podUID="14e262de-fd94-4888-96ab-75823109c8c2"
	Mar 18 14:07:30 no-preload-537236 kubelet[4187]: E0318 14:07:30.166079    4187 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tkq6h" podUID="14e262de-fd94-4888-96ab-75823109c8c2"
	Mar 18 14:07:43 no-preload-537236 kubelet[4187]: E0318 14:07:43.166505    4187 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tkq6h" podUID="14e262de-fd94-4888-96ab-75823109c8c2"
	Mar 18 14:07:46 no-preload-537236 kubelet[4187]: E0318 14:07:46.289283    4187 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 14:07:46 no-preload-537236 kubelet[4187]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 14:07:46 no-preload-537236 kubelet[4187]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 14:07:46 no-preload-537236 kubelet[4187]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 14:07:46 no-preload-537236 kubelet[4187]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 14:07:58 no-preload-537236 kubelet[4187]: E0318 14:07:58.166219    4187 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tkq6h" podUID="14e262de-fd94-4888-96ab-75823109c8c2"
	Mar 18 14:08:13 no-preload-537236 kubelet[4187]: E0318 14:08:13.166329    4187 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tkq6h" podUID="14e262de-fd94-4888-96ab-75823109c8c2"
	Mar 18 14:08:26 no-preload-537236 kubelet[4187]: E0318 14:08:26.166727    4187 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tkq6h" podUID="14e262de-fd94-4888-96ab-75823109c8c2"
	Mar 18 14:08:38 no-preload-537236 kubelet[4187]: E0318 14:08:38.166566    4187 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tkq6h" podUID="14e262de-fd94-4888-96ab-75823109c8c2"
	Mar 18 14:08:46 no-preload-537236 kubelet[4187]: E0318 14:08:46.291620    4187 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 14:08:46 no-preload-537236 kubelet[4187]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 14:08:46 no-preload-537236 kubelet[4187]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 14:08:46 no-preload-537236 kubelet[4187]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 14:08:46 no-preload-537236 kubelet[4187]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 14:08:51 no-preload-537236 kubelet[4187]: E0318 14:08:51.166441    4187 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tkq6h" podUID="14e262de-fd94-4888-96ab-75823109c8c2"
	Mar 18 14:09:04 no-preload-537236 kubelet[4187]: E0318 14:09:04.167055    4187 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tkq6h" podUID="14e262de-fd94-4888-96ab-75823109c8c2"
	Mar 18 14:09:15 no-preload-537236 kubelet[4187]: E0318 14:09:15.166994    4187 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tkq6h" podUID="14e262de-fd94-4888-96ab-75823109c8c2"
	
	
	==> storage-provisioner [a5eff0b76358ea55983537e218c15405bb9546598fa0378d56c1acb15c091de1] <==
	I0318 13:55:00.713713       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0318 13:55:00.749170       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0318 13:55:00.749334       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0318 13:55:00.763610       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0318 13:55:00.763811       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-537236_e4a483f6-7f49-4c60-9197-dc053405ab92!
	I0318 13:55:00.768088       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3e625363-73f0-495a-944d-aa5501d6c9cc", APIVersion:"v1", ResourceVersion:"416", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-537236_e4a483f6-7f49-4c60-9197-dc053405ab92 became leader
	I0318 13:55:00.864158       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-537236_e4a483f6-7f49-4c60-9197-dc053405ab92!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-537236 -n no-preload-537236
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-537236 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-tkq6h
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-537236 describe pod metrics-server-57f55c9bc5-tkq6h
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-537236 describe pod metrics-server-57f55c9bc5-tkq6h: exit status 1 (78.517591ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-tkq6h" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-537236 describe pod metrics-server-57f55c9bc5-tkq6h: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (319.30s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (391.73s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-569210 -n default-k8s-diff-port-569210
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-03-18 14:11:02.486444036 +0000 UTC m=+6939.873358294
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-569210 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-569210 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.645µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-569210 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-569210 -n default-k8s-diff-port-569210
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-569210 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-569210 logs -n 25: (1.32535775s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable dashboard -p embed-certs-173036                 | embed-certs-173036           | jenkins | v1.32.0 | 18 Mar 24 13:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-173036                                  | embed-certs-173036           | jenkins | v1.32.0 | 18 Mar 24 13:44 UTC | 18 Mar 24 13:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-537236                  | no-preload-537236            | jenkins | v1.32.0 | 18 Mar 24 13:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-537236                                   | no-preload-537236            | jenkins | v1.32.0 | 18 Mar 24 13:44 UTC | 18 Mar 24 13:55 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-909137                              | old-k8s-version-909137       | jenkins | v1.32.0 | 18 Mar 24 13:45 UTC | 18 Mar 24 13:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-909137             | old-k8s-version-909137       | jenkins | v1.32.0 | 18 Mar 24 13:45 UTC | 18 Mar 24 13:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-909137                              | old-k8s-version-909137       | jenkins | v1.32.0 | 18 Mar 24 13:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-569210       | default-k8s-diff-port-569210 | jenkins | v1.32.0 | 18 Mar 24 13:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-569210 | jenkins | v1.32.0 | 18 Mar 24 13:45 UTC | 18 Mar 24 13:55 UTC |
	|         | default-k8s-diff-port-569210                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-909137                              | old-k8s-version-909137       | jenkins | v1.32.0 | 18 Mar 24 14:08 UTC | 18 Mar 24 14:08 UTC |
	| start   | -p newest-cni-572909 --memory=2200 --alsologtostderr   | newest-cni-572909            | jenkins | v1.32.0 | 18 Mar 24 14:08 UTC | 18 Mar 24 14:09 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-537236                                   | no-preload-537236            | jenkins | v1.32.0 | 18 Mar 24 14:09 UTC | 18 Mar 24 14:09 UTC |
	| start   | -p auto-990886 --memory=3072                           | auto-990886                  | jenkins | v1.32.0 | 18 Mar 24 14:09 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-572909             | newest-cni-572909            | jenkins | v1.32.0 | 18 Mar 24 14:09 UTC | 18 Mar 24 14:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-572909                                   | newest-cni-572909            | jenkins | v1.32.0 | 18 Mar 24 14:09 UTC | 18 Mar 24 14:10 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-572909                  | newest-cni-572909            | jenkins | v1.32.0 | 18 Mar 24 14:10 UTC | 18 Mar 24 14:10 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-572909 --memory=2200 --alsologtostderr   | newest-cni-572909            | jenkins | v1.32.0 | 18 Mar 24 14:10 UTC | 18 Mar 24 14:10 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| image   | newest-cni-572909 image list                           | newest-cni-572909            | jenkins | v1.32.0 | 18 Mar 24 14:10 UTC | 18 Mar 24 14:10 UTC |
	|         | --format=json                                          |                              |         |         |                     |                     |
	| pause   | -p newest-cni-572909                                   | newest-cni-572909            | jenkins | v1.32.0 | 18 Mar 24 14:10 UTC | 18 Mar 24 14:10 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-173036                                  | embed-certs-173036           | jenkins | v1.32.0 | 18 Mar 24 14:10 UTC | 18 Mar 24 14:10 UTC |
	| unpause | -p newest-cni-572909                                   | newest-cni-572909            | jenkins | v1.32.0 | 18 Mar 24 14:10 UTC | 18 Mar 24 14:10 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| start   | -p kindnet-990886                                      | kindnet-990886               | jenkins | v1.32.0 | 18 Mar 24 14:10 UTC |                     |
	|         | --memory=3072                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p newest-cni-572909                                   | newest-cni-572909            | jenkins | v1.32.0 | 18 Mar 24 14:10 UTC | 18 Mar 24 14:10 UTC |
	| delete  | -p newest-cni-572909                                   | newest-cni-572909            | jenkins | v1.32.0 | 18 Mar 24 14:10 UTC | 18 Mar 24 14:10 UTC |
	| start   | -p calico-990886 --memory=3072                         | calico-990886                | jenkins | v1.32.0 | 18 Mar 24 14:10 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=calico --driver=kvm2                             |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 14:10:45
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
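The header above documents the klog-style line format used for the remainder of this log. A minimal Go sketch for splitting such lines into their fields; the regexp and field names here are illustrative, not part of minikube:

// Sketch: parse the klog-style header documented above
// ("[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg").
package main

import (
	"fmt"
	"regexp"
)

var klogLine = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

func main() {
	line := "I0318 14:10:45.866087 1164566 out.go:291] Setting OutFile to fd 1 ..."
	m := klogLine.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog line")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s tid=%s file=%s:%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6], m[7])
}

Run against the first line below, this yields severity I, file out.go, line 291, and the message text.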
	I0318 14:10:45.866087 1164566 out.go:291] Setting OutFile to fd 1 ...
	I0318 14:10:45.866374 1164566 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 14:10:45.866389 1164566 out.go:304] Setting ErrFile to fd 2...
	I0318 14:10:45.866396 1164566 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 14:10:45.866622 1164566 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
	I0318 14:10:45.867290 1164566 out.go:298] Setting JSON to false
	I0318 14:10:45.868637 1164566 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":21193,"bootTime":1710749853,"procs":243,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 14:10:45.868718 1164566 start.go:139] virtualization: kvm guest
	I0318 14:10:45.870785 1164566 out.go:177] * [calico-990886] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 14:10:45.872223 1164566 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 14:10:45.873529 1164566 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 14:10:45.872250 1164566 notify.go:220] Checking for updates...
	I0318 14:10:45.876041 1164566 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 14:10:45.877365 1164566 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18429-1106816/.minikube
	I0318 14:10:45.878629 1164566 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 14:10:45.879891 1164566 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 14:10:45.881609 1164566 config.go:182] Loaded profile config "auto-990886": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 14:10:45.881720 1164566 config.go:182] Loaded profile config "default-k8s-diff-port-569210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 14:10:45.881799 1164566 config.go:182] Loaded profile config "kindnet-990886": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 14:10:45.881918 1164566 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 14:10:45.921471 1164566 out.go:177] * Using the kvm2 driver based on user configuration
	I0318 14:10:45.922823 1164566 start.go:297] selected driver: kvm2
	I0318 14:10:45.922850 1164566 start.go:901] validating driver "kvm2" against <nil>
	I0318 14:10:45.922872 1164566 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 14:10:45.923599 1164566 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 14:10:45.923709 1164566 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18429-1106816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 14:10:45.939743 1164566 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 14:10:45.939819 1164566 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 14:10:45.940052 1164566 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 14:10:45.940141 1164566 cni.go:84] Creating CNI manager for "calico"
	I0318 14:10:45.940157 1164566 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0318 14:10:45.940231 1164566 start.go:340] cluster config:
	{Name:calico-990886 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:calico-990886 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock
: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:10:45.940352 1164566 iso.go:125] acquiring lock: {Name:mke5f9989ad60de6f54f25c411af7da9f3932a4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 14:10:45.942077 1164566 out.go:177] * Starting "calico-990886" primary control-plane node in "calico-990886" cluster
	I0318 14:10:43.217137 1163134 pod_ready.go:102] pod "coredns-5dd5756b68-rvgmj" in "kube-system" namespace has status "Ready":"False"
	I0318 14:10:45.708771 1163134 pod_ready.go:102] pod "coredns-5dd5756b68-rvgmj" in "kube-system" namespace has status "Ready":"False"
	I0318 14:10:43.165682 1164267 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0318 14:10:43.165853 1164267 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:10:43.165897 1164267 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:10:43.182703 1164267 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39887
	I0318 14:10:43.183253 1164267 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:10:43.183967 1164267 main.go:141] libmachine: Using API Version  1
	I0318 14:10:43.183991 1164267 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:10:43.184393 1164267 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:10:43.184645 1164267 main.go:141] libmachine: (kindnet-990886) Calling .GetMachineName
	I0318 14:10:43.184881 1164267 main.go:141] libmachine: (kindnet-990886) Calling .DriverName
	I0318 14:10:43.185062 1164267 start.go:159] libmachine.API.Create for "kindnet-990886" (driver="kvm2")
	I0318 14:10:43.185105 1164267 client.go:168] LocalClient.Create starting
	I0318 14:10:43.185144 1164267 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem
	I0318 14:10:43.185175 1164267 main.go:141] libmachine: Decoding PEM data...
	I0318 14:10:43.185201 1164267 main.go:141] libmachine: Parsing certificate...
	I0318 14:10:43.185286 1164267 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem
	I0318 14:10:43.185314 1164267 main.go:141] libmachine: Decoding PEM data...
	I0318 14:10:43.185330 1164267 main.go:141] libmachine: Parsing certificate...
	I0318 14:10:43.185355 1164267 main.go:141] libmachine: Running pre-create checks...
	I0318 14:10:43.185366 1164267 main.go:141] libmachine: (kindnet-990886) Calling .PreCreateCheck
	I0318 14:10:43.185789 1164267 main.go:141] libmachine: (kindnet-990886) Calling .GetConfigRaw
	I0318 14:10:43.186177 1164267 main.go:141] libmachine: Creating machine...
	I0318 14:10:43.186192 1164267 main.go:141] libmachine: (kindnet-990886) Calling .Create
	I0318 14:10:43.186344 1164267 main.go:141] libmachine: (kindnet-990886) Creating KVM machine...
	I0318 14:10:43.187624 1164267 main.go:141] libmachine: (kindnet-990886) DBG | found existing default KVM network
	I0318 14:10:43.189097 1164267 main.go:141] libmachine: (kindnet-990886) DBG | I0318 14:10:43.188930 1164306 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:4f:4f:a1} reservation:<nil>}
	I0318 14:10:43.190336 1164267 main.go:141] libmachine: (kindnet-990886) DBG | I0318 14:10:43.190260 1164306 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0000156a0}
	I0318 14:10:43.190384 1164267 main.go:141] libmachine: (kindnet-990886) DBG | created network xml: 
	I0318 14:10:43.190409 1164267 main.go:141] libmachine: (kindnet-990886) DBG | <network>
	I0318 14:10:43.190423 1164267 main.go:141] libmachine: (kindnet-990886) DBG |   <name>mk-kindnet-990886</name>
	I0318 14:10:43.190433 1164267 main.go:141] libmachine: (kindnet-990886) DBG |   <dns enable='no'/>
	I0318 14:10:43.190442 1164267 main.go:141] libmachine: (kindnet-990886) DBG |   
	I0318 14:10:43.190454 1164267 main.go:141] libmachine: (kindnet-990886) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0318 14:10:43.190465 1164267 main.go:141] libmachine: (kindnet-990886) DBG |     <dhcp>
	I0318 14:10:43.190477 1164267 main.go:141] libmachine: (kindnet-990886) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0318 14:10:43.190494 1164267 main.go:141] libmachine: (kindnet-990886) DBG |     </dhcp>
	I0318 14:10:43.190514 1164267 main.go:141] libmachine: (kindnet-990886) DBG |   </ip>
	I0318 14:10:43.190522 1164267 main.go:141] libmachine: (kindnet-990886) DBG |   
	I0318 14:10:43.190532 1164267 main.go:141] libmachine: (kindnet-990886) DBG | </network>
	I0318 14:10:43.190541 1164267 main.go:141] libmachine: (kindnet-990886) DBG | 
	I0318 14:10:43.196419 1164267 main.go:141] libmachine: (kindnet-990886) DBG | trying to create private KVM network mk-kindnet-990886 192.168.50.0/24...
	I0318 14:10:43.280884 1164267 main.go:141] libmachine: (kindnet-990886) DBG | private KVM network mk-kindnet-990886 192.168.50.0/24 created
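The XML above is the private network definition minikube hands to libvirt before creating the VM. A minimal sketch of the same define-and-start step, assuming the libvirt.org/go/libvirt bindings and a local libvirtd reachable at qemu:///system; this is illustrative only, not minikube's own code path:

// Sketch (assumes libvirt.org/go/libvirt and a running libvirtd):
// define and start a private network from XML like the one logged above.
package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

const networkXML = `<network>
  <name>mk-kindnet-990886</name>
  <dns enable='no'/>
  <ip address='192.168.50.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.50.2' end='192.168.50.253'/>
    </dhcp>
  </ip>
</network>`

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	net, err := conn.NetworkDefineXML(networkXML) // persistent definition
	if err != nil {
		log.Fatal(err)
	}
	defer net.Free()

	if err := net.Create(); err != nil { // activate the network
		log.Fatal(err)
	}
	log.Println("network mk-kindnet-990886 defined and started")
}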
	I0318 14:10:43.281040 1164267 main.go:141] libmachine: (kindnet-990886) Setting up store path in /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/kindnet-990886 ...
	I0318 14:10:43.281109 1164267 main.go:141] libmachine: (kindnet-990886) DBG | I0318 14:10:43.281006 1164306 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18429-1106816/.minikube
	I0318 14:10:43.281129 1164267 main.go:141] libmachine: (kindnet-990886) Building disk image from file:///home/jenkins/minikube-integration/18429-1106816/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso
	I0318 14:10:43.281182 1164267 main.go:141] libmachine: (kindnet-990886) Downloading /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18429-1106816/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0318 14:10:43.544743 1164267 main.go:141] libmachine: (kindnet-990886) DBG | I0318 14:10:43.544582 1164306 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/kindnet-990886/id_rsa...
	I0318 14:10:43.667175 1164267 main.go:141] libmachine: (kindnet-990886) DBG | I0318 14:10:43.667036 1164306 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/kindnet-990886/kindnet-990886.rawdisk...
	I0318 14:10:43.667217 1164267 main.go:141] libmachine: (kindnet-990886) DBG | Writing magic tar header
	I0318 14:10:43.667240 1164267 main.go:141] libmachine: (kindnet-990886) DBG | Writing SSH key tar header
	I0318 14:10:43.667254 1164267 main.go:141] libmachine: (kindnet-990886) DBG | I0318 14:10:43.667196 1164306 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/kindnet-990886 ...
	I0318 14:10:43.667369 1164267 main.go:141] libmachine: (kindnet-990886) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/kindnet-990886
	I0318 14:10:43.667400 1164267 main.go:141] libmachine: (kindnet-990886) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines
	I0318 14:10:43.667411 1164267 main.go:141] libmachine: (kindnet-990886) Setting executable bit set on /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/kindnet-990886 (perms=drwx------)
	I0318 14:10:43.667426 1164267 main.go:141] libmachine: (kindnet-990886) Setting executable bit set on /home/jenkins/minikube-integration/18429-1106816/.minikube/machines (perms=drwxr-xr-x)
	I0318 14:10:43.667436 1164267 main.go:141] libmachine: (kindnet-990886) Setting executable bit set on /home/jenkins/minikube-integration/18429-1106816/.minikube (perms=drwxr-xr-x)
	I0318 14:10:43.667448 1164267 main.go:141] libmachine: (kindnet-990886) Setting executable bit set on /home/jenkins/minikube-integration/18429-1106816 (perms=drwxrwxr-x)
	I0318 14:10:43.667462 1164267 main.go:141] libmachine: (kindnet-990886) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0318 14:10:43.667472 1164267 main.go:141] libmachine: (kindnet-990886) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18429-1106816/.minikube
	I0318 14:10:43.667487 1164267 main.go:141] libmachine: (kindnet-990886) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0318 14:10:43.667508 1164267 main.go:141] libmachine: (kindnet-990886) Creating domain...
	I0318 14:10:43.667521 1164267 main.go:141] libmachine: (kindnet-990886) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18429-1106816
	I0318 14:10:43.667534 1164267 main.go:141] libmachine: (kindnet-990886) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0318 14:10:43.667540 1164267 main.go:141] libmachine: (kindnet-990886) DBG | Checking permissions on dir: /home/jenkins
	I0318 14:10:43.667599 1164267 main.go:141] libmachine: (kindnet-990886) DBG | Checking permissions on dir: /home
	I0318 14:10:43.667633 1164267 main.go:141] libmachine: (kindnet-990886) DBG | Skipping /home - not owner
	I0318 14:10:43.668748 1164267 main.go:141] libmachine: (kindnet-990886) define libvirt domain using xml: 
	I0318 14:10:43.668775 1164267 main.go:141] libmachine: (kindnet-990886) <domain type='kvm'>
	I0318 14:10:43.668813 1164267 main.go:141] libmachine: (kindnet-990886)   <name>kindnet-990886</name>
	I0318 14:10:43.668838 1164267 main.go:141] libmachine: (kindnet-990886)   <memory unit='MiB'>3072</memory>
	I0318 14:10:43.668851 1164267 main.go:141] libmachine: (kindnet-990886)   <vcpu>2</vcpu>
	I0318 14:10:43.668859 1164267 main.go:141] libmachine: (kindnet-990886)   <features>
	I0318 14:10:43.668867 1164267 main.go:141] libmachine: (kindnet-990886)     <acpi/>
	I0318 14:10:43.668874 1164267 main.go:141] libmachine: (kindnet-990886)     <apic/>
	I0318 14:10:43.668887 1164267 main.go:141] libmachine: (kindnet-990886)     <pae/>
	I0318 14:10:43.668900 1164267 main.go:141] libmachine: (kindnet-990886)     
	I0318 14:10:43.668938 1164267 main.go:141] libmachine: (kindnet-990886)   </features>
	I0318 14:10:43.668955 1164267 main.go:141] libmachine: (kindnet-990886)   <cpu mode='host-passthrough'>
	I0318 14:10:43.668964 1164267 main.go:141] libmachine: (kindnet-990886)   
	I0318 14:10:43.668974 1164267 main.go:141] libmachine: (kindnet-990886)   </cpu>
	I0318 14:10:43.668982 1164267 main.go:141] libmachine: (kindnet-990886)   <os>
	I0318 14:10:43.668991 1164267 main.go:141] libmachine: (kindnet-990886)     <type>hvm</type>
	I0318 14:10:43.669000 1164267 main.go:141] libmachine: (kindnet-990886)     <boot dev='cdrom'/>
	I0318 14:10:43.669007 1164267 main.go:141] libmachine: (kindnet-990886)     <boot dev='hd'/>
	I0318 14:10:43.669017 1164267 main.go:141] libmachine: (kindnet-990886)     <bootmenu enable='no'/>
	I0318 14:10:43.669023 1164267 main.go:141] libmachine: (kindnet-990886)   </os>
	I0318 14:10:43.669035 1164267 main.go:141] libmachine: (kindnet-990886)   <devices>
	I0318 14:10:43.669043 1164267 main.go:141] libmachine: (kindnet-990886)     <disk type='file' device='cdrom'>
	I0318 14:10:43.669060 1164267 main.go:141] libmachine: (kindnet-990886)       <source file='/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/kindnet-990886/boot2docker.iso'/>
	I0318 14:10:43.669071 1164267 main.go:141] libmachine: (kindnet-990886)       <target dev='hdc' bus='scsi'/>
	I0318 14:10:43.669084 1164267 main.go:141] libmachine: (kindnet-990886)       <readonly/>
	I0318 14:10:43.669094 1164267 main.go:141] libmachine: (kindnet-990886)     </disk>
	I0318 14:10:43.669107 1164267 main.go:141] libmachine: (kindnet-990886)     <disk type='file' device='disk'>
	I0318 14:10:43.669132 1164267 main.go:141] libmachine: (kindnet-990886)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0318 14:10:43.669150 1164267 main.go:141] libmachine: (kindnet-990886)       <source file='/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/kindnet-990886/kindnet-990886.rawdisk'/>
	I0318 14:10:43.669162 1164267 main.go:141] libmachine: (kindnet-990886)       <target dev='hda' bus='virtio'/>
	I0318 14:10:43.669172 1164267 main.go:141] libmachine: (kindnet-990886)     </disk>
	I0318 14:10:43.669182 1164267 main.go:141] libmachine: (kindnet-990886)     <interface type='network'>
	I0318 14:10:43.669194 1164267 main.go:141] libmachine: (kindnet-990886)       <source network='mk-kindnet-990886'/>
	I0318 14:10:43.669208 1164267 main.go:141] libmachine: (kindnet-990886)       <model type='virtio'/>
	I0318 14:10:43.669221 1164267 main.go:141] libmachine: (kindnet-990886)     </interface>
	I0318 14:10:43.669229 1164267 main.go:141] libmachine: (kindnet-990886)     <interface type='network'>
	I0318 14:10:43.669241 1164267 main.go:141] libmachine: (kindnet-990886)       <source network='default'/>
	I0318 14:10:43.669252 1164267 main.go:141] libmachine: (kindnet-990886)       <model type='virtio'/>
	I0318 14:10:43.669261 1164267 main.go:141] libmachine: (kindnet-990886)     </interface>
	I0318 14:10:43.669271 1164267 main.go:141] libmachine: (kindnet-990886)     <serial type='pty'>
	I0318 14:10:43.669280 1164267 main.go:141] libmachine: (kindnet-990886)       <target port='0'/>
	I0318 14:10:43.669287 1164267 main.go:141] libmachine: (kindnet-990886)     </serial>
	I0318 14:10:43.669296 1164267 main.go:141] libmachine: (kindnet-990886)     <console type='pty'>
	I0318 14:10:43.669307 1164267 main.go:141] libmachine: (kindnet-990886)       <target type='serial' port='0'/>
	I0318 14:10:43.669319 1164267 main.go:141] libmachine: (kindnet-990886)     </console>
	I0318 14:10:43.669330 1164267 main.go:141] libmachine: (kindnet-990886)     <rng model='virtio'>
	I0318 14:10:43.669339 1164267 main.go:141] libmachine: (kindnet-990886)       <backend model='random'>/dev/random</backend>
	I0318 14:10:43.669348 1164267 main.go:141] libmachine: (kindnet-990886)     </rng>
	I0318 14:10:43.669356 1164267 main.go:141] libmachine: (kindnet-990886)     
	I0318 14:10:43.669366 1164267 main.go:141] libmachine: (kindnet-990886)     
	I0318 14:10:43.669372 1164267 main.go:141] libmachine: (kindnet-990886)   </devices>
	I0318 14:10:43.669381 1164267 main.go:141] libmachine: (kindnet-990886) </domain>
	I0318 14:10:43.669398 1164267 main.go:141] libmachine: (kindnet-990886) 
	I0318 14:10:43.674423 1164267 main.go:141] libmachine: (kindnet-990886) DBG | domain kindnet-990886 has defined MAC address 52:54:00:68:39:46 in network default
	I0318 14:10:43.675064 1164267 main.go:141] libmachine: (kindnet-990886) Ensuring networks are active...
	I0318 14:10:43.675093 1164267 main.go:141] libmachine: (kindnet-990886) DBG | domain kindnet-990886 has defined MAC address 52:54:00:dc:fe:a4 in network mk-kindnet-990886
	I0318 14:10:43.675858 1164267 main.go:141] libmachine: (kindnet-990886) Ensuring network default is active
	I0318 14:10:43.676299 1164267 main.go:141] libmachine: (kindnet-990886) Ensuring network mk-kindnet-990886 is active
	I0318 14:10:43.676930 1164267 main.go:141] libmachine: (kindnet-990886) Getting domain xml...
	I0318 14:10:43.677696 1164267 main.go:141] libmachine: (kindnet-990886) Creating domain...
	I0318 14:10:45.517579 1164267 main.go:141] libmachine: (kindnet-990886) Waiting to get IP...
	I0318 14:10:45.518517 1164267 main.go:141] libmachine: (kindnet-990886) DBG | domain kindnet-990886 has defined MAC address 52:54:00:dc:fe:a4 in network mk-kindnet-990886
	I0318 14:10:45.518952 1164267 main.go:141] libmachine: (kindnet-990886) DBG | unable to find current IP address of domain kindnet-990886 in network mk-kindnet-990886
	I0318 14:10:45.519039 1164267 main.go:141] libmachine: (kindnet-990886) DBG | I0318 14:10:45.518955 1164306 retry.go:31] will retry after 244.93542ms: waiting for machine to come up
	I0318 14:10:45.765367 1164267 main.go:141] libmachine: (kindnet-990886) DBG | domain kindnet-990886 has defined MAC address 52:54:00:dc:fe:a4 in network mk-kindnet-990886
	I0318 14:10:45.765978 1164267 main.go:141] libmachine: (kindnet-990886) DBG | unable to find current IP address of domain kindnet-990886 in network mk-kindnet-990886
	I0318 14:10:45.766017 1164267 main.go:141] libmachine: (kindnet-990886) DBG | I0318 14:10:45.765906 1164306 retry.go:31] will retry after 326.93426ms: waiting for machine to come up
	I0318 14:10:46.094494 1164267 main.go:141] libmachine: (kindnet-990886) DBG | domain kindnet-990886 has defined MAC address 52:54:00:dc:fe:a4 in network mk-kindnet-990886
	I0318 14:10:46.094938 1164267 main.go:141] libmachine: (kindnet-990886) DBG | unable to find current IP address of domain kindnet-990886 in network mk-kindnet-990886
	I0318 14:10:46.094976 1164267 main.go:141] libmachine: (kindnet-990886) DBG | I0318 14:10:46.094901 1164306 retry.go:31] will retry after 436.737408ms: waiting for machine to come up
	I0318 14:10:46.533529 1164267 main.go:141] libmachine: (kindnet-990886) DBG | domain kindnet-990886 has defined MAC address 52:54:00:dc:fe:a4 in network mk-kindnet-990886
	I0318 14:10:46.534013 1164267 main.go:141] libmachine: (kindnet-990886) DBG | unable to find current IP address of domain kindnet-990886 in network mk-kindnet-990886
	I0318 14:10:46.534042 1164267 main.go:141] libmachine: (kindnet-990886) DBG | I0318 14:10:46.533958 1164306 retry.go:31] will retry after 443.886348ms: waiting for machine to come up
	I0318 14:10:46.979798 1164267 main.go:141] libmachine: (kindnet-990886) DBG | domain kindnet-990886 has defined MAC address 52:54:00:dc:fe:a4 in network mk-kindnet-990886
	I0318 14:10:46.980314 1164267 main.go:141] libmachine: (kindnet-990886) DBG | unable to find current IP address of domain kindnet-990886 in network mk-kindnet-990886
	I0318 14:10:46.980356 1164267 main.go:141] libmachine: (kindnet-990886) DBG | I0318 14:10:46.980274 1164306 retry.go:31] will retry after 502.755662ms: waiting for machine to come up
	I0318 14:10:47.485068 1164267 main.go:141] libmachine: (kindnet-990886) DBG | domain kindnet-990886 has defined MAC address 52:54:00:dc:fe:a4 in network mk-kindnet-990886
	I0318 14:10:47.485608 1164267 main.go:141] libmachine: (kindnet-990886) DBG | unable to find current IP address of domain kindnet-990886 in network mk-kindnet-990886
	I0318 14:10:47.485633 1164267 main.go:141] libmachine: (kindnet-990886) DBG | I0318 14:10:47.485562 1164306 retry.go:31] will retry after 822.320168ms: waiting for machine to come up
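The "will retry after ..." lines above come from a retry helper that polls for the new VM's IP address with a growing, jittered delay. A generic stand-in sketch of that pattern, not minikube's retry.go implementation:

// Sketch: retry with a growing, jittered delay until the machine reports an IP.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address or the deadline passes,
// sleeping a little longer (with jitter) after each failed attempt.
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for attempt := 1; ; attempt++ {
		ip, err := lookup()
		if err == nil {
			return ip, nil
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("timed out after %d attempts: %w", attempt, err)
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2 // grow the base delay, as in the log above
	}
}

func main() {
	_, err := waitForIP(func() (string, error) {
		return "", errors.New("unable to find current IP address")
	}, 2*time.Second)
	fmt.Println(err)
}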
	I0318 14:10:45.943282 1164566 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 14:10:45.943335 1164566 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0318 14:10:45.943347 1164566 cache.go:56] Caching tarball of preloaded images
	I0318 14:10:45.943466 1164566 preload.go:173] Found /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 14:10:45.943485 1164566 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0318 14:10:45.943609 1164566 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/calico-990886/config.json ...
	I0318 14:10:45.943637 1164566 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/calico-990886/config.json: {Name:mk117cd66adf4908fbf6ae973f838cc6fa8c35c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:10:45.943830 1164566 start.go:360] acquireMachinesLock for calico-990886: {Name:mk0b1a2e71faf079d0c16c4e1393bdff17be3dfd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 14:10:48.208194 1163134 pod_ready.go:102] pod "coredns-5dd5756b68-rvgmj" in "kube-system" namespace has status "Ready":"False"
	I0318 14:10:50.209359 1163134 pod_ready.go:102] pod "coredns-5dd5756b68-rvgmj" in "kube-system" namespace has status "Ready":"False"
	I0318 14:10:48.309015 1164267 main.go:141] libmachine: (kindnet-990886) DBG | domain kindnet-990886 has defined MAC address 52:54:00:dc:fe:a4 in network mk-kindnet-990886
	I0318 14:10:48.309484 1164267 main.go:141] libmachine: (kindnet-990886) DBG | unable to find current IP address of domain kindnet-990886 in network mk-kindnet-990886
	I0318 14:10:48.309510 1164267 main.go:141] libmachine: (kindnet-990886) DBG | I0318 14:10:48.309435 1164306 retry.go:31] will retry after 1.116809313s: waiting for machine to come up
	I0318 14:10:49.427770 1164267 main.go:141] libmachine: (kindnet-990886) DBG | domain kindnet-990886 has defined MAC address 52:54:00:dc:fe:a4 in network mk-kindnet-990886
	I0318 14:10:49.428228 1164267 main.go:141] libmachine: (kindnet-990886) DBG | unable to find current IP address of domain kindnet-990886 in network mk-kindnet-990886
	I0318 14:10:49.428259 1164267 main.go:141] libmachine: (kindnet-990886) DBG | I0318 14:10:49.428182 1164306 retry.go:31] will retry after 955.101216ms: waiting for machine to come up
	I0318 14:10:50.385405 1164267 main.go:141] libmachine: (kindnet-990886) DBG | domain kindnet-990886 has defined MAC address 52:54:00:dc:fe:a4 in network mk-kindnet-990886
	I0318 14:10:50.385837 1164267 main.go:141] libmachine: (kindnet-990886) DBG | unable to find current IP address of domain kindnet-990886 in network mk-kindnet-990886
	I0318 14:10:50.385871 1164267 main.go:141] libmachine: (kindnet-990886) DBG | I0318 14:10:50.385782 1164306 retry.go:31] will retry after 1.751629816s: waiting for machine to come up
	I0318 14:10:52.139656 1164267 main.go:141] libmachine: (kindnet-990886) DBG | domain kindnet-990886 has defined MAC address 52:54:00:dc:fe:a4 in network mk-kindnet-990886
	I0318 14:10:52.140067 1164267 main.go:141] libmachine: (kindnet-990886) DBG | unable to find current IP address of domain kindnet-990886 in network mk-kindnet-990886
	I0318 14:10:52.140093 1164267 main.go:141] libmachine: (kindnet-990886) DBG | I0318 14:10:52.140030 1164306 retry.go:31] will retry after 2.2446751s: waiting for machine to come up
	I0318 14:10:52.212124 1163134 pod_ready.go:102] pod "coredns-5dd5756b68-rvgmj" in "kube-system" namespace has status "Ready":"False"
	I0318 14:10:54.709526 1163134 pod_ready.go:102] pod "coredns-5dd5756b68-rvgmj" in "kube-system" namespace has status "Ready":"False"
	I0318 14:10:54.386399 1164267 main.go:141] libmachine: (kindnet-990886) DBG | domain kindnet-990886 has defined MAC address 52:54:00:dc:fe:a4 in network mk-kindnet-990886
	I0318 14:10:54.386872 1164267 main.go:141] libmachine: (kindnet-990886) DBG | unable to find current IP address of domain kindnet-990886 in network mk-kindnet-990886
	I0318 14:10:54.386971 1164267 main.go:141] libmachine: (kindnet-990886) DBG | I0318 14:10:54.386841 1164306 retry.go:31] will retry after 2.666890081s: waiting for machine to come up
	I0318 14:10:57.056855 1164267 main.go:141] libmachine: (kindnet-990886) DBG | domain kindnet-990886 has defined MAC address 52:54:00:dc:fe:a4 in network mk-kindnet-990886
	I0318 14:10:57.057271 1164267 main.go:141] libmachine: (kindnet-990886) DBG | unable to find current IP address of domain kindnet-990886 in network mk-kindnet-990886
	I0318 14:10:57.057309 1164267 main.go:141] libmachine: (kindnet-990886) DBG | I0318 14:10:57.057224 1164306 retry.go:31] will retry after 2.913423302s: waiting for machine to come up
	I0318 14:10:57.208239 1163134 pod_ready.go:102] pod "coredns-5dd5756b68-rvgmj" in "kube-system" namespace has status "Ready":"False"
	I0318 14:10:59.707666 1163134 pod_ready.go:102] pod "coredns-5dd5756b68-rvgmj" in "kube-system" namespace has status "Ready":"False"
	
	
	==> CRI-O <==
	Mar 18 14:11:03 default-k8s-diff-port-569210 crio[695]: time="2024-03-18 14:11:03.173809876Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=233bde13-973f-41b9-b4db-1cec261ce19b name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:11:03 default-k8s-diff-port-569210 crio[695]: time="2024-03-18 14:11:03.173984125Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9dcf47324a868c557381ea72f8c3bc0dce56b1bab36def329bfcff91a5c25df6,PodSandboxId:887781373b9c6a80d1f5dab89fb5c714863ed9729ad1d4cccb48ca6e4237da58,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710770128202047530,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0dfdeb1-f567-41df-98c3-7987f0fd7b2b,},Annotations:map[string]string{io.kubernetes.container.hash: 909a6a7e,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:478da5c49960ae3d9ce8ceebfef95d983383433a511bacf4b880e14255fcf23c,PodSandboxId:3f372f18c0800c7cf582878db05ab3229c1abda392a8445ba0b71cd3bb79ea06,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710770126180946114,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2pp8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 912b3f56-3df6-485f-a01a-60801b867b86,},Annotations:map[string]string{io.kubernetes.container.hash: dc0ca493,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19bd4c6331e90782afb37b345b8e00d754b69c3b3d838b6da08111b9ea14a5cc,PodSandboxId:6083b00f89dc2e3e8d73bc820422bb6be8042b49e1eb358b9c90a8b70469a590,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710770126301795954,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xdcht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf264558-6c11-44c9-82d6-ea23aea43dc9,},Annotations:map[string]string{io.kubernetes.container.hash: b0cf4d2d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caf55fa946a658cdc020920c0be13e1ce323a9ab0292a9b403fd7059000c70e1,PodSandboxId:f032f63a719f8348105bb201a8b835af4542fe3e8587eb3012a775367c461378,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710770126116394809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j5qxm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 164d2cc3-0891-4fcd-81bd-
34d7cf0c691c,},Annotations:map[string]string{io.kubernetes.container.hash: f164053c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94d2d764cf683c43937ed6f1a2a4bc29ed977ee4e55873ded3c1a90fc325a68e,PodSandboxId:0c4384bffb72e76b865b7d57a32f42eaa40e53c876b3b4f3532a009ffcde0ae6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:171077010655550461
2,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-569210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaeb8888551fdf1fa66251dad57f99eb,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f3c8b45c0b31e1a68ecd382756153908da62307b99de3ac00e94a9f0592120,PodSandboxId:b5b2d5706af19ec3b6793f4101d3c0ce85e939385bf55146c9e55fe8c32b97ed,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:17107701064
71259756,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-569210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be94838dec3ae56e7ccef51c225c25dd,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75d4752fdf387dcccc84aca83aec73463459220ac32fdb3510bd94ae21684775,PodSandboxId:ac791bedc626582dbf0e787f2f5b5fbf9626704820c08067bf84d08856c3f972,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,Creat
edAt:1710770106475618604,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-569210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ef18e5c2f20506f583d8e1ef75e4966,},Annotations:map[string]string{io.kubernetes.container.hash: b0cc1ab0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c396e8dd7d523dfd35c94244a7251f38ed0cad19ac6899846665673c43865d80,PodSandboxId:399f3c1da2a2e151138217c49ae862113fbc32c2bdeeb0d4afd579c2aee17257,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710770
106456697248,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-569210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b7b4629155c46ec82f88394148a4486,},Annotations:map[string]string{io.kubernetes.container.hash: 50fc8f6b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=233bde13-973f-41b9-b4db-1cec261ce19b name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:11:03 default-k8s-diff-port-569210 crio[695]: time="2024-03-18 14:11:03.219304127Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7a5d2859-4b5d-4baa-b1db-5f4b2df28cbf name=/runtime.v1.RuntimeService/Version
	Mar 18 14:11:03 default-k8s-diff-port-569210 crio[695]: time="2024-03-18 14:11:03.219381808Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7a5d2859-4b5d-4baa-b1db-5f4b2df28cbf name=/runtime.v1.RuntimeService/Version
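The Version request/response above is the standard CRI call the kubelet and crictl make against the runtime. A minimal sketch of issuing the same call from Go, assuming the k8s.io/cri-api client, grpc-go, and CRI-O's usual socket at /var/run/crio/crio.sock:

// Sketch: query the CRI runtime version over the CRI-O socket.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	resp, err := client.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		log.Fatal(err)
	}
	// Fields mirror the VersionResponse shown in the log:
	// Version, RuntimeName, RuntimeVersion, RuntimeApiVersion.
	fmt.Printf("%s %s (CRI %s, API %s)\n",
		resp.RuntimeName, resp.RuntimeVersion, resp.Version, resp.RuntimeApiVersion)
}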
	Mar 18 14:11:03 default-k8s-diff-port-569210 crio[695]: time="2024-03-18 14:11:03.220581254Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=79500322-2c3a-4a75-9c64-f68e7ef7840f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:11:03 default-k8s-diff-port-569210 crio[695]: time="2024-03-18 14:11:03.220993914Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710771063220969611,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=79500322-2c3a-4a75-9c64-f68e7ef7840f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:11:03 default-k8s-diff-port-569210 crio[695]: time="2024-03-18 14:11:03.221669807Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=64c5a8f8-5793-4213-8275-f78e6698477e name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:11:03 default-k8s-diff-port-569210 crio[695]: time="2024-03-18 14:11:03.221752622Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=64c5a8f8-5793-4213-8275-f78e6698477e name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:11:03 default-k8s-diff-port-569210 crio[695]: time="2024-03-18 14:11:03.221938612Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9dcf47324a868c557381ea72f8c3bc0dce56b1bab36def329bfcff91a5c25df6,PodSandboxId:887781373b9c6a80d1f5dab89fb5c714863ed9729ad1d4cccb48ca6e4237da58,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710770128202047530,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0dfdeb1-f567-41df-98c3-7987f0fd7b2b,},Annotations:map[string]string{io.kubernetes.container.hash: 909a6a7e,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:478da5c49960ae3d9ce8ceebfef95d983383433a511bacf4b880e14255fcf23c,PodSandboxId:3f372f18c0800c7cf582878db05ab3229c1abda392a8445ba0b71cd3bb79ea06,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710770126180946114,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2pp8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 912b3f56-3df6-485f-a01a-60801b867b86,},Annotations:map[string]string{io.kubernetes.container.hash: dc0ca493,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19bd4c6331e90782afb37b345b8e00d754b69c3b3d838b6da08111b9ea14a5cc,PodSandboxId:6083b00f89dc2e3e8d73bc820422bb6be8042b49e1eb358b9c90a8b70469a590,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710770126301795954,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xdcht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf264558-6c11-44c9-82d6-ea23aea43dc9,},Annotations:map[string]string{io.kubernetes.container.hash: b0cf4d2d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caf55fa946a658cdc020920c0be13e1ce323a9ab0292a9b403fd7059000c70e1,PodSandboxId:f032f63a719f8348105bb201a8b835af4542fe3e8587eb3012a775367c461378,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710770126116394809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j5qxm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 164d2cc3-0891-4fcd-81bd-
34d7cf0c691c,},Annotations:map[string]string{io.kubernetes.container.hash: f164053c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94d2d764cf683c43937ed6f1a2a4bc29ed977ee4e55873ded3c1a90fc325a68e,PodSandboxId:0c4384bffb72e76b865b7d57a32f42eaa40e53c876b3b4f3532a009ffcde0ae6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:171077010655550461
2,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-569210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaeb8888551fdf1fa66251dad57f99eb,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f3c8b45c0b31e1a68ecd382756153908da62307b99de3ac00e94a9f0592120,PodSandboxId:b5b2d5706af19ec3b6793f4101d3c0ce85e939385bf55146c9e55fe8c32b97ed,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:17107701064
71259756,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-569210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be94838dec3ae56e7ccef51c225c25dd,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75d4752fdf387dcccc84aca83aec73463459220ac32fdb3510bd94ae21684775,PodSandboxId:ac791bedc626582dbf0e787f2f5b5fbf9626704820c08067bf84d08856c3f972,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,Creat
edAt:1710770106475618604,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-569210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ef18e5c2f20506f583d8e1ef75e4966,},Annotations:map[string]string{io.kubernetes.container.hash: b0cc1ab0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c396e8dd7d523dfd35c94244a7251f38ed0cad19ac6899846665673c43865d80,PodSandboxId:399f3c1da2a2e151138217c49ae862113fbc32c2bdeeb0d4afd579c2aee17257,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710770
106456697248,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-569210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b7b4629155c46ec82f88394148a4486,},Annotations:map[string]string{io.kubernetes.container.hash: 50fc8f6b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=64c5a8f8-5793-4213-8275-f78e6698477e name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:11:03 default-k8s-diff-port-569210 crio[695]: time="2024-03-18 14:11:03.266429341Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=50313f91-d848-4710-97ae-a4379cff68d2 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:11:03 default-k8s-diff-port-569210 crio[695]: time="2024-03-18 14:11:03.266535978Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=50313f91-d848-4710-97ae-a4379cff68d2 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:11:03 default-k8s-diff-port-569210 crio[695]: time="2024-03-18 14:11:03.267781342Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=616fcbb5-c29e-4e03-92dc-a19262915c61 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:11:03 default-k8s-diff-port-569210 crio[695]: time="2024-03-18 14:11:03.268168208Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710771063268148803,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=616fcbb5-c29e-4e03-92dc-a19262915c61 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:11:03 default-k8s-diff-port-569210 crio[695]: time="2024-03-18 14:11:03.269357736Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7e633e10-350a-46c3-85ad-78cd41f4711c name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:11:03 default-k8s-diff-port-569210 crio[695]: time="2024-03-18 14:11:03.269414130Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7e633e10-350a-46c3-85ad-78cd41f4711c name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:11:03 default-k8s-diff-port-569210 crio[695]: time="2024-03-18 14:11:03.269628514Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9dcf47324a868c557381ea72f8c3bc0dce56b1bab36def329bfcff91a5c25df6,PodSandboxId:887781373b9c6a80d1f5dab89fb5c714863ed9729ad1d4cccb48ca6e4237da58,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710770128202047530,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0dfdeb1-f567-41df-98c3-7987f0fd7b2b,},Annotations:map[string]string{io.kubernetes.container.hash: 909a6a7e,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:478da5c49960ae3d9ce8ceebfef95d983383433a511bacf4b880e14255fcf23c,PodSandboxId:3f372f18c0800c7cf582878db05ab3229c1abda392a8445ba0b71cd3bb79ea06,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710770126180946114,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2pp8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 912b3f56-3df6-485f-a01a-60801b867b86,},Annotations:map[string]string{io.kubernetes.container.hash: dc0ca493,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19bd4c6331e90782afb37b345b8e00d754b69c3b3d838b6da08111b9ea14a5cc,PodSandboxId:6083b00f89dc2e3e8d73bc820422bb6be8042b49e1eb358b9c90a8b70469a590,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710770126301795954,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xdcht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf264558-6c11-44c9-82d6-ea23aea43dc9,},Annotations:map[string]string{io.kubernetes.container.hash: b0cf4d2d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caf55fa946a658cdc020920c0be13e1ce323a9ab0292a9b403fd7059000c70e1,PodSandboxId:f032f63a719f8348105bb201a8b835af4542fe3e8587eb3012a775367c461378,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710770126116394809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j5qxm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 164d2cc3-0891-4fcd-81bd-
34d7cf0c691c,},Annotations:map[string]string{io.kubernetes.container.hash: f164053c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94d2d764cf683c43937ed6f1a2a4bc29ed977ee4e55873ded3c1a90fc325a68e,PodSandboxId:0c4384bffb72e76b865b7d57a32f42eaa40e53c876b3b4f3532a009ffcde0ae6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:171077010655550461
2,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-569210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaeb8888551fdf1fa66251dad57f99eb,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f3c8b45c0b31e1a68ecd382756153908da62307b99de3ac00e94a9f0592120,PodSandboxId:b5b2d5706af19ec3b6793f4101d3c0ce85e939385bf55146c9e55fe8c32b97ed,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:17107701064
71259756,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-569210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be94838dec3ae56e7ccef51c225c25dd,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75d4752fdf387dcccc84aca83aec73463459220ac32fdb3510bd94ae21684775,PodSandboxId:ac791bedc626582dbf0e787f2f5b5fbf9626704820c08067bf84d08856c3f972,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,Creat
edAt:1710770106475618604,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-569210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ef18e5c2f20506f583d8e1ef75e4966,},Annotations:map[string]string{io.kubernetes.container.hash: b0cc1ab0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c396e8dd7d523dfd35c94244a7251f38ed0cad19ac6899846665673c43865d80,PodSandboxId:399f3c1da2a2e151138217c49ae862113fbc32c2bdeeb0d4afd579c2aee17257,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710770
106456697248,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-569210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b7b4629155c46ec82f88394148a4486,},Annotations:map[string]string{io.kubernetes.container.hash: 50fc8f6b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7e633e10-350a-46c3-85ad-78cd41f4711c name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:11:03 default-k8s-diff-port-569210 crio[695]: time="2024-03-18 14:11:03.308425668Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a5622525-9ef5-48ad-a836-8b9d6cc9b120 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:11:03 default-k8s-diff-port-569210 crio[695]: time="2024-03-18 14:11:03.308541450Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a5622525-9ef5-48ad-a836-8b9d6cc9b120 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:11:03 default-k8s-diff-port-569210 crio[695]: time="2024-03-18 14:11:03.309883568Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=00034d6e-ed49-4b86-9206-2fa32154362b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:11:03 default-k8s-diff-port-569210 crio[695]: time="2024-03-18 14:11:03.310488040Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710771063310463701,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=00034d6e-ed49-4b86-9206-2fa32154362b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:11:03 default-k8s-diff-port-569210 crio[695]: time="2024-03-18 14:11:03.310858316Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=15a832f1-3755-4796-935b-87959914285b name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:11:03 default-k8s-diff-port-569210 crio[695]: time="2024-03-18 14:11:03.310936930Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=15a832f1-3755-4796-935b-87959914285b name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:11:03 default-k8s-diff-port-569210 crio[695]: time="2024-03-18 14:11:03.311308880Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9dcf47324a868c557381ea72f8c3bc0dce56b1bab36def329bfcff91a5c25df6,PodSandboxId:887781373b9c6a80d1f5dab89fb5c714863ed9729ad1d4cccb48ca6e4237da58,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710770128202047530,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f0dfdeb1-f567-41df-98c3-7987f0fd7b2b,},Annotations:map[string]string{io.kubernetes.container.hash: 909a6a7e,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:478da5c49960ae3d9ce8ceebfef95d983383433a511bacf4b880e14255fcf23c,PodSandboxId:3f372f18c0800c7cf582878db05ab3229c1abda392a8445ba0b71cd3bb79ea06,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710770126180946114,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2pp8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 912b3f56-3df6-485f-a01a-60801b867b86,},Annotations:map[string]string{io.kubernetes.container.hash: dc0ca493,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessa
gePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19bd4c6331e90782afb37b345b8e00d754b69c3b3d838b6da08111b9ea14a5cc,PodSandboxId:6083b00f89dc2e3e8d73bc820422bb6be8042b49e1eb358b9c90a8b70469a590,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710770126301795954,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xdcht,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf264558-6c11-44c9-82d6-ea23aea43dc9,},Annotations:map[string]string{io.kubernetes.container.hash: b0cf4d2d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tc
p\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:caf55fa946a658cdc020920c0be13e1ce323a9ab0292a9b403fd7059000c70e1,PodSandboxId:f032f63a719f8348105bb201a8b835af4542fe3e8587eb3012a775367c461378,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710770126116394809,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-j5qxm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 164d2cc3-0891-4fcd-81bd-
34d7cf0c691c,},Annotations:map[string]string{io.kubernetes.container.hash: f164053c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94d2d764cf683c43937ed6f1a2a4bc29ed977ee4e55873ded3c1a90fc325a68e,PodSandboxId:0c4384bffb72e76b865b7d57a32f42eaa40e53c876b3b4f3532a009ffcde0ae6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:171077010655550461
2,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-569210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eaeb8888551fdf1fa66251dad57f99eb,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14f3c8b45c0b31e1a68ecd382756153908da62307b99de3ac00e94a9f0592120,PodSandboxId:b5b2d5706af19ec3b6793f4101d3c0ce85e939385bf55146c9e55fe8c32b97ed,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:17107701064
71259756,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-569210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: be94838dec3ae56e7ccef51c225c25dd,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:75d4752fdf387dcccc84aca83aec73463459220ac32fdb3510bd94ae21684775,PodSandboxId:ac791bedc626582dbf0e787f2f5b5fbf9626704820c08067bf84d08856c3f972,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,Creat
edAt:1710770106475618604,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-569210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ef18e5c2f20506f583d8e1ef75e4966,},Annotations:map[string]string{io.kubernetes.container.hash: b0cc1ab0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c396e8dd7d523dfd35c94244a7251f38ed0cad19ac6899846665673c43865d80,PodSandboxId:399f3c1da2a2e151138217c49ae862113fbc32c2bdeeb0d4afd579c2aee17257,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710770
106456697248,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-569210,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8b7b4629155c46ec82f88394148a4486,},Annotations:map[string]string{io.kubernetes.container.hash: 50fc8f6b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=15a832f1-3755-4796-935b-87959914285b name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:11:03 default-k8s-diff-port-569210 crio[695]: time="2024-03-18 14:11:03.318619096Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=40ca1a32-38c4-4ec9-a167-daf6211e157a name=/runtime.v1.RuntimeService/Status
	Mar 18 14:11:03 default-k8s-diff-port-569210 crio[695]: time="2024-03-18 14:11:03.318699452Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=40ca1a32-38c4-4ec9-a167-daf6211e157a name=/runtime.v1.RuntimeService/Status
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9dcf47324a868       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   887781373b9c6       storage-provisioner
	19bd4c6331e90       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   15 minutes ago      Running             coredns                   0                   6083b00f89dc2       coredns-5dd5756b68-xdcht
	478da5c49960a       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   15 minutes ago      Running             kube-proxy                0                   3f372f18c0800       kube-proxy-2pp8z
	caf55fa946a65       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   15 minutes ago      Running             coredns                   0                   f032f63a719f8       coredns-5dd5756b68-j5qxm
	94d2d764cf683       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   15 minutes ago      Running             kube-scheduler            2                   0c4384bffb72e       kube-scheduler-default-k8s-diff-port-569210
	75d4752fdf387       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   15 minutes ago      Running             kube-apiserver            2                   ac791bedc6265       kube-apiserver-default-k8s-diff-port-569210
	14f3c8b45c0b3       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   15 minutes ago      Running             kube-controller-manager   2                   b5b2d5706af19       kube-controller-manager-default-k8s-diff-port-569210
	c396e8dd7d523       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   15 minutes ago      Running             etcd                      2                   399f3c1da2a2e       etcd-default-k8s-diff-port-569210
	
	
	==> coredns [19bd4c6331e90782afb37b345b8e00d754b69c3b3d838b6da08111b9ea14a5cc] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> coredns [caf55fa946a658cdc020920c0be13e1ce323a9ab0292a9b403fd7059000c70e1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-569210
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-569210
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	                    minikube.k8s.io/name=default-k8s-diff-port-569210
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T13_55_13_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 13:55:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-569210
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 14:11:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 14:10:51 +0000   Mon, 18 Mar 2024 13:55:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 14:10:51 +0000   Mon, 18 Mar 2024 13:55:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 14:10:51 +0000   Mon, 18 Mar 2024 13:55:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 14:10:51 +0000   Mon, 18 Mar 2024 13:55:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.3
	  Hostname:    default-k8s-diff-port-569210
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 452594090f9f4e72aa58a6b8f1d38292
	  System UUID:                45259409-0f9f-4e72-aa58-a6b8f1d38292
	  Boot ID:                    81ff9704-4e6c-45fb-831e-9145078fe898
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-j5qxm                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-5dd5756b68-xdcht                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-default-k8s-diff-port-569210                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-default-k8s-diff-port-569210             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-569210    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-2pp8z                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-default-k8s-diff-port-569210             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-57f55c9bc5-ng9ww                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node default-k8s-diff-port-569210 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node default-k8s-diff-port-569210 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node default-k8s-diff-port-569210 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             15m   kubelet          Node default-k8s-diff-port-569210 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                15m   kubelet          Node default-k8s-diff-port-569210 status is now: NodeReady
	  Normal  RegisteredNode           15m   node-controller  Node default-k8s-diff-port-569210 event: Registered Node default-k8s-diff-port-569210 in Controller
	
	
	==> dmesg <==
	[  +5.048996] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.583982] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.733421] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Mar18 13:50] systemd-fstab-generator[613]: Ignoring "noauto" option for root device
	[  +0.064007] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.077519] systemd-fstab-generator[625]: Ignoring "noauto" option for root device
	[  +0.199814] systemd-fstab-generator[639]: Ignoring "noauto" option for root device
	[  +0.161254] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.296299] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[  +5.561320] systemd-fstab-generator[777]: Ignoring "noauto" option for root device
	[  +0.067077] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.367037] systemd-fstab-generator[910]: Ignoring "noauto" option for root device
	[  +4.556984] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.052086] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.796387] kauditd_printk_skb: 2 callbacks suppressed
	[Mar18 13:55] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.589350] systemd-fstab-generator[3427]: Ignoring "noauto" option for root device
	[  +4.530372] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.765669] systemd-fstab-generator[3752]: Ignoring "noauto" option for root device
	[ +12.441698] systemd-fstab-generator[3942]: Ignoring "noauto" option for root device
	[  +0.131416] kauditd_printk_skb: 14 callbacks suppressed
	[Mar18 13:56] kauditd_printk_skb: 78 callbacks suppressed
	[Mar18 14:10] hrtimer: interrupt took 3344712 ns
	
	
	==> etcd [c396e8dd7d523dfd35c94244a7251f38ed0cad19ac6899846665673c43865d80] <==
	{"level":"info","ts":"2024-03-18T14:09:20.995951Z","caller":"traceutil/trace.go:171","msg":"trace[1894209512] linearizableReadLoop","detail":"{readStateIndex:1307; appliedIndex:1306; }","duration":"290.090371ms","start":"2024-03-18T14:09:20.705807Z","end":"2024-03-18T14:09:20.995897Z","steps":["trace[1894209512] 'read index received'  (duration: 289.950023ms)","trace[1894209512] 'applied index is now lower than readState.Index'  (duration: 139.602µs)"],"step_count":2}
	{"level":"info","ts":"2024-03-18T14:09:20.996755Z","caller":"traceutil/trace.go:171","msg":"trace[1999509117] transaction","detail":"{read_only:false; response_revision:1124; number_of_response:1; }","duration":"418.258594ms","start":"2024-03-18T14:09:20.578324Z","end":"2024-03-18T14:09:20.996583Z","steps":["trace[1999509117] 'process raft request'  (duration: 417.480468ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T14:09:20.996781Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"290.867224ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-18T14:09:20.996987Z","caller":"traceutil/trace.go:171","msg":"trace[1202470942] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1124; }","duration":"291.175791ms","start":"2024-03-18T14:09:20.705785Z","end":"2024-03-18T14:09:20.996961Z","steps":["trace[1202470942] 'agreement among raft nodes before linearized reading'  (duration: 290.837751ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T14:09:20.997616Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T14:09:20.578301Z","time spent":"418.549592ms","remote":"127.0.0.1:59138","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":601,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-569210\" mod_revision:1116 > success:<request_put:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-569210\" value_size:532 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-569210\" > >"}
	{"level":"warn","ts":"2024-03-18T14:09:21.315826Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"104.906989ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.61.3\" ","response":"range_response_count:1 size:131"}
	{"level":"info","ts":"2024-03-18T14:09:21.315962Z","caller":"traceutil/trace.go:171","msg":"trace[885452381] range","detail":"{range_begin:/registry/masterleases/192.168.61.3; range_end:; response_count:1; response_revision:1124; }","duration":"105.058299ms","start":"2024-03-18T14:09:21.210891Z","end":"2024-03-18T14:09:21.315949Z","steps":["trace[885452381] 'range keys from in-memory index tree'  (duration: 104.801489ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T14:09:21.316536Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.644357ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1129"}
	{"level":"info","ts":"2024-03-18T14:09:21.31662Z","caller":"traceutil/trace.go:171","msg":"trace[503719383] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1124; }","duration":"125.734559ms","start":"2024-03-18T14:09:21.190873Z","end":"2024-03-18T14:09:21.316608Z","steps":["trace[503719383] 'range keys from in-memory index tree'  (duration: 125.540032ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T14:09:21.448793Z","caller":"traceutil/trace.go:171","msg":"trace[44113247] transaction","detail":"{read_only:false; response_revision:1125; number_of_response:1; }","duration":"125.009663ms","start":"2024-03-18T14:09:21.32376Z","end":"2024-03-18T14:09:21.448769Z","steps":["trace[44113247] 'process raft request'  (duration: 124.866146ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T14:09:21.738652Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.97482ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1711242865594722277 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.61.3\" mod_revision:1118 > success:<request_put:<key:\"/registry/masterleases/192.168.61.3\" value_size:65 lease:1711242865594722275 >> failure:<request_range:<key:\"/registry/masterleases/192.168.61.3\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-18T14:09:21.739106Z","caller":"traceutil/trace.go:171","msg":"trace[375500780] transaction","detail":"{read_only:false; response_revision:1126; number_of_response:1; }","duration":"266.096119ms","start":"2024-03-18T14:09:21.472962Z","end":"2024-03-18T14:09:21.739058Z","steps":["trace[375500780] 'process raft request'  (duration: 137.203148ms)","trace[375500780] 'compare'  (duration: 127.698472ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-18T14:09:57.787922Z","caller":"traceutil/trace.go:171","msg":"trace[1679852243] transaction","detail":"{read_only:false; response_revision:1153; number_of_response:1; }","duration":"102.473267ms","start":"2024-03-18T14:09:57.685417Z","end":"2024-03-18T14:09:57.78789Z","steps":["trace[1679852243] 'process raft request'  (duration: 102.054385ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T14:10:07.595349Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":918}
	{"level":"info","ts":"2024-03-18T14:10:07.597487Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":918,"took":"1.794227ms","hash":3800281035}
	{"level":"info","ts":"2024-03-18T14:10:07.597556Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3800281035,"revision":918,"compact-revision":675}
	{"level":"warn","ts":"2024-03-18T14:10:28.978728Z","caller":"wal/wal.go:805","msg":"slow fdatasync","took":"1.018575866s","expected-duration":"1s"}
	{"level":"info","ts":"2024-03-18T14:10:28.979736Z","caller":"traceutil/trace.go:171","msg":"trace[824890194] transaction","detail":"{read_only:false; response_revision:1178; number_of_response:1; }","duration":"1.019706914s","start":"2024-03-18T14:10:27.959989Z","end":"2024-03-18T14:10:28.979696Z","steps":["trace[824890194] 'process raft request'  (duration: 1.019552474s)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T14:10:28.980111Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T14:10:27.959974Z","time spent":"1.019847357s","remote":"127.0.0.1:59020","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1177 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-03-18T14:10:28.981358Z","caller":"traceutil/trace.go:171","msg":"trace[566498989] linearizableReadLoop","detail":"{readStateIndex:1376; appliedIndex:1376; }","duration":"279.63927ms","start":"2024-03-18T14:10:28.701709Z","end":"2024-03-18T14:10:28.981348Z","steps":["trace[566498989] 'read index received'  (duration: 279.635853ms)","trace[566498989] 'applied index is now lower than readState.Index'  (duration: 2.264µs)"],"step_count":2}
	{"level":"warn","ts":"2024-03-18T14:10:28.981465Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"279.767885ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-18T14:10:28.981574Z","caller":"traceutil/trace.go:171","msg":"trace[386352507] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1178; }","duration":"279.883147ms","start":"2024-03-18T14:10:28.701678Z","end":"2024-03-18T14:10:28.981561Z","steps":["trace[386352507] 'agreement among raft nodes before linearized reading'  (duration: 279.733183ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T14:10:29.238855Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"132.650657ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1711242865594722600 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-hhaf2onoeyizq5yuqwbdbq2hxi\" mod_revision:1171 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-hhaf2onoeyizq5yuqwbdbq2hxi\" value_size:620 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-hhaf2onoeyizq5yuqwbdbq2hxi\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-18T14:10:29.238979Z","caller":"traceutil/trace.go:171","msg":"trace[1333136979] transaction","detail":"{read_only:false; response_revision:1179; number_of_response:1; }","duration":"419.040983ms","start":"2024-03-18T14:10:28.819918Z","end":"2024-03-18T14:10:29.238959Z","steps":["trace[1333136979] 'process raft request'  (duration: 286.19913ms)","trace[1333136979] 'compare'  (duration: 132.450497ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-18T14:10:29.239062Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T14:10:28.819896Z","time spent":"419.122649ms","remote":"127.0.0.1:59138","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":693,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-hhaf2onoeyizq5yuqwbdbq2hxi\" mod_revision:1171 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-hhaf2onoeyizq5yuqwbdbq2hxi\" value_size:620 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-hhaf2onoeyizq5yuqwbdbq2hxi\" > >"}
	
	
	==> kernel <==
	 14:11:03 up 21 min,  0 users,  load average: 0.24, 0.33, 0.30
	Linux default-k8s-diff-port-569210 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [75d4752fdf387dcccc84aca83aec73463459220ac32fdb3510bd94ae21684775] <==
	E0318 14:08:10.576601       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 14:08:10.576654       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0318 14:09:09.453411       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0318 14:09:21.741111       1 trace.go:236] Trace[1592689342]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.61.3,type:*v1.Endpoints,resource:apiServerIPInfo (18-Mar-2024 14:09:21.210) (total time: 530ms):
	Trace[1592689342]: ---"initial value restored" 114ms (14:09:21.324)
	Trace[1592689342]: ---"Transaction prepared" 148ms (14:09:21.472)
	Trace[1592689342]: ---"Txn call completed" 267ms (14:09:21.740)
	Trace[1592689342]: [530.159755ms] [530.159755ms] END
	I0318 14:10:09.453740       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0318 14:10:09.578371       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:10:09.578524       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 14:10:09.578870       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0318 14:10:10.578824       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:10:10.578904       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0318 14:10:10.578926       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 14:10:10.579030       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:10:10.579151       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 14:10:10.580118       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0318 14:10:28.983320       1 trace.go:236] Trace[1379499678]: "Update" accept:application/json, */*,audit-id:314197f3-fbfe-42ff-9b64-8ff838820874,client:192.168.61.3,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (18-Mar-2024 14:10:27.958) (total time: 1024ms):
	Trace[1379499678]: ["GuaranteedUpdate etcd3" audit-id:314197f3-fbfe-42ff-9b64-8ff838820874,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 1024ms (14:10:27.958)
	Trace[1379499678]:  ---"Txn call completed" 1022ms (14:10:28.982)]
	Trace[1379499678]: [1.024557518s] [1.024557518s] END
	
	
	==> kube-controller-manager [14f3c8b45c0b31e1a68ecd382756153908da62307b99de3ac00e94a9f0592120] <==
	I0318 14:05:25.110828       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:05:54.654069       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:05:55.120482       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:06:24.660507       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:06:25.131014       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0318 14:06:36.893622       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="246.292µs"
	I0318 14:06:47.884162       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="90.044µs"
	E0318 14:06:54.667106       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:06:55.139098       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:07:24.673994       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:07:25.148943       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:07:54.680294       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:07:55.158591       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:08:24.687370       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:08:25.168827       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:08:54.698319       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:08:55.178392       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:09:24.708080       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:09:25.188632       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:09:54.714521       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:09:55.200122       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:10:24.721609       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:10:25.210558       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:10:54.732804       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:10:55.220545       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [478da5c49960ae3d9ce8ceebfef95d983383433a511bacf4b880e14255fcf23c] <==
	I0318 13:55:26.905082       1 server_others.go:69] "Using iptables proxy"
	I0318 13:55:26.942887       1 node.go:141] Successfully retrieved node IP: 192.168.61.3
	I0318 13:55:27.115653       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 13:55:27.115704       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 13:55:27.142655       1 server_others.go:152] "Using iptables Proxier"
	I0318 13:55:27.144075       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 13:55:27.144326       1 server.go:846] "Version info" version="v1.28.4"
	I0318 13:55:27.144339       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:55:27.158732       1 config.go:315] "Starting node config controller"
	I0318 13:55:27.158766       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 13:55:27.162325       1 config.go:188] "Starting service config controller"
	I0318 13:55:27.162415       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 13:55:27.162438       1 config.go:97] "Starting endpoint slice config controller"
	I0318 13:55:27.162442       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 13:55:27.259250       1 shared_informer.go:318] Caches are synced for node config
	I0318 13:55:27.263490       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 13:55:27.263549       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [94d2d764cf683c43937ed6f1a2a4bc29ed977ee4e55873ded3c1a90fc325a68e] <==
	W0318 13:55:09.623938       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0318 13:55:09.624056       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0318 13:55:09.624295       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0318 13:55:09.626801       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0318 13:55:09.626004       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0318 13:55:09.627136       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0318 13:55:09.626058       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0318 13:55:09.627387       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0318 13:55:09.626141       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0318 13:55:09.627477       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0318 13:55:09.626597       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0318 13:55:09.627695       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0318 13:55:10.596785       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0318 13:55:10.596884       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0318 13:55:10.656032       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0318 13:55:10.656162       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0318 13:55:10.747503       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0318 13:55:10.747558       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0318 13:55:10.777289       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0318 13:55:10.777350       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0318 13:55:10.783163       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0318 13:55:10.783266       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0318 13:55:10.797573       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0318 13:55:10.797628       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0318 13:55:12.601416       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 18 14:08:12 default-k8s-diff-port-569210 kubelet[3758]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 14:08:17 default-k8s-diff-port-569210 kubelet[3758]: E0318 14:08:17.865323    3758 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ng9ww" podUID="4c8209dc-b6ba-427d-ba32-0da4993b0902"
	Mar 18 14:08:28 default-k8s-diff-port-569210 kubelet[3758]: E0318 14:08:28.864037    3758 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ng9ww" podUID="4c8209dc-b6ba-427d-ba32-0da4993b0902"
	Mar 18 14:08:41 default-k8s-diff-port-569210 kubelet[3758]: E0318 14:08:41.864523    3758 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ng9ww" podUID="4c8209dc-b6ba-427d-ba32-0da4993b0902"
	Mar 18 14:08:53 default-k8s-diff-port-569210 kubelet[3758]: E0318 14:08:53.865029    3758 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ng9ww" podUID="4c8209dc-b6ba-427d-ba32-0da4993b0902"
	Mar 18 14:09:06 default-k8s-diff-port-569210 kubelet[3758]: E0318 14:09:06.863801    3758 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ng9ww" podUID="4c8209dc-b6ba-427d-ba32-0da4993b0902"
	Mar 18 14:09:12 default-k8s-diff-port-569210 kubelet[3758]: E0318 14:09:12.914459    3758 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 14:09:12 default-k8s-diff-port-569210 kubelet[3758]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 14:09:12 default-k8s-diff-port-569210 kubelet[3758]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 14:09:12 default-k8s-diff-port-569210 kubelet[3758]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 14:09:12 default-k8s-diff-port-569210 kubelet[3758]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 14:09:19 default-k8s-diff-port-569210 kubelet[3758]: E0318 14:09:19.864692    3758 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ng9ww" podUID="4c8209dc-b6ba-427d-ba32-0da4993b0902"
	Mar 18 14:09:32 default-k8s-diff-port-569210 kubelet[3758]: E0318 14:09:32.867763    3758 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ng9ww" podUID="4c8209dc-b6ba-427d-ba32-0da4993b0902"
	Mar 18 14:09:43 default-k8s-diff-port-569210 kubelet[3758]: E0318 14:09:43.863694    3758 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ng9ww" podUID="4c8209dc-b6ba-427d-ba32-0da4993b0902"
	Mar 18 14:09:58 default-k8s-diff-port-569210 kubelet[3758]: E0318 14:09:58.864006    3758 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ng9ww" podUID="4c8209dc-b6ba-427d-ba32-0da4993b0902"
	Mar 18 14:10:10 default-k8s-diff-port-569210 kubelet[3758]: E0318 14:10:10.866025    3758 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ng9ww" podUID="4c8209dc-b6ba-427d-ba32-0da4993b0902"
	Mar 18 14:10:12 default-k8s-diff-port-569210 kubelet[3758]: E0318 14:10:12.915828    3758 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 14:10:12 default-k8s-diff-port-569210 kubelet[3758]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 14:10:12 default-k8s-diff-port-569210 kubelet[3758]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 14:10:12 default-k8s-diff-port-569210 kubelet[3758]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 14:10:12 default-k8s-diff-port-569210 kubelet[3758]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 14:10:21 default-k8s-diff-port-569210 kubelet[3758]: E0318 14:10:21.864806    3758 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ng9ww" podUID="4c8209dc-b6ba-427d-ba32-0da4993b0902"
	Mar 18 14:10:33 default-k8s-diff-port-569210 kubelet[3758]: E0318 14:10:33.864546    3758 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ng9ww" podUID="4c8209dc-b6ba-427d-ba32-0da4993b0902"
	Mar 18 14:10:47 default-k8s-diff-port-569210 kubelet[3758]: E0318 14:10:47.864245    3758 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ng9ww" podUID="4c8209dc-b6ba-427d-ba32-0da4993b0902"
	Mar 18 14:11:02 default-k8s-diff-port-569210 kubelet[3758]: E0318 14:11:02.864021    3758 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ng9ww" podUID="4c8209dc-b6ba-427d-ba32-0da4993b0902"
	
	
	==> storage-provisioner [9dcf47324a868c557381ea72f8c3bc0dce56b1bab36def329bfcff91a5c25df6] <==
	I0318 13:55:28.320506       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0318 13:55:28.336971       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0318 13:55:28.337009       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0318 13:55:28.356231       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0318 13:55:28.356418       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-569210_2f961dda-9106-4ac5-ba06-b638d34747c6!
	I0318 13:55:28.358832       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1ceb429c-3e85-449d-9d24-79a90659fe08", APIVersion:"v1", ResourceVersion:"421", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-569210_2f961dda-9106-4ac5-ba06-b638d34747c6 became leader
	I0318 13:55:28.458973       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-569210_2f961dda-9106-4ac5-ba06-b638d34747c6!
	

                                                
                                                
-- /stdout --
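Note: the controller-manager's repeated "stale GroupVersion discovery: metrics.k8s.io/v1beta1" errors and the kubelet's ImagePullBackOff entries above describe the same condition: the test intentionally points the metrics-server addon at the unreachable fake.domain registry, so the pod never starts and the metrics APIService stays unavailable. A minimal sketch of how this could be confirmed by hand against this profile, assuming it is still running and that the APIService carries its usual name v1beta1.metrics.k8s.io; these commands are illustrative and are not part of the test:

	kubectl --context default-k8s-diff-port-569210 get apiservice v1beta1.metrics.k8s.io
	kubectl --context default-k8s-diff-port-569210 -n kube-system describe pod metrics-server-57f55c9bc5-ng9ww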
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-569210 -n default-k8s-diff-port-569210
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-569210 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-ng9ww
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-569210 describe pod metrics-server-57f55c9bc5-ng9ww
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-569210 describe pod metrics-server-57f55c9bc5-ng9ww: exit status 1 (62.055559ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-ng9ww" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-569210 describe pod metrics-server-57f55c9bc5-ng9ww: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (391.73s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (343.49s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-173036 -n embed-certs-173036
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-03-18 14:10:39.577540863 +0000 UTC m=+6916.964455137
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-173036 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-173036 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.637µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-173036 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
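The failing check above is a 9m wait for pods labelled k8s-app=kubernetes-dashboard followed by a describe of deploy/dashboard-metrics-scraper. A rough manual equivalent, assuming the embed-certs-173036 profile is still up (the timeout value is illustrative, not the test's own):

	kubectl --context embed-certs-173036 -n kubernetes-dashboard \
	  wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=540s
	kubectl --context embed-certs-173036 -n kubernetes-dashboard \
	  describe deploy/dashboard-metrics-scraper | grep echoserver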
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-173036 -n embed-certs-173036
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-173036 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-173036 logs -n 25: (1.589617297s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                     | default-k8s-diff-port-569210 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:42 UTC |
	|         | default-k8s-diff-port-569210                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-173036            | embed-certs-173036           | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-173036                                  | embed-certs-173036           | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-537236             | no-preload-537236            | jenkins | v1.32.0 | 18 Mar 24 13:42 UTC | 18 Mar 24 13:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-537236                                   | no-preload-537236            | jenkins | v1.32.0 | 18 Mar 24 13:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-569210  | default-k8s-diff-port-569210 | jenkins | v1.32.0 | 18 Mar 24 13:43 UTC | 18 Mar 24 13:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-569210 | jenkins | v1.32.0 | 18 Mar 24 13:43 UTC |                     |
	|         | default-k8s-diff-port-569210                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-909137        | old-k8s-version-909137       | jenkins | v1.32.0 | 18 Mar 24 13:43 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-173036                 | embed-certs-173036           | jenkins | v1.32.0 | 18 Mar 24 13:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-173036                                  | embed-certs-173036           | jenkins | v1.32.0 | 18 Mar 24 13:44 UTC | 18 Mar 24 13:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-537236                  | no-preload-537236            | jenkins | v1.32.0 | 18 Mar 24 13:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-537236                                   | no-preload-537236            | jenkins | v1.32.0 | 18 Mar 24 13:44 UTC | 18 Mar 24 13:55 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-909137                              | old-k8s-version-909137       | jenkins | v1.32.0 | 18 Mar 24 13:45 UTC | 18 Mar 24 13:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-909137             | old-k8s-version-909137       | jenkins | v1.32.0 | 18 Mar 24 13:45 UTC | 18 Mar 24 13:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-909137                              | old-k8s-version-909137       | jenkins | v1.32.0 | 18 Mar 24 13:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-569210       | default-k8s-diff-port-569210 | jenkins | v1.32.0 | 18 Mar 24 13:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-569210 | jenkins | v1.32.0 | 18 Mar 24 13:45 UTC | 18 Mar 24 13:55 UTC |
	|         | default-k8s-diff-port-569210                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-909137                              | old-k8s-version-909137       | jenkins | v1.32.0 | 18 Mar 24 14:08 UTC | 18 Mar 24 14:08 UTC |
	| start   | -p newest-cni-572909 --memory=2200 --alsologtostderr   | newest-cni-572909            | jenkins | v1.32.0 | 18 Mar 24 14:08 UTC | 18 Mar 24 14:09 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-537236                                   | no-preload-537236            | jenkins | v1.32.0 | 18 Mar 24 14:09 UTC | 18 Mar 24 14:09 UTC |
	| start   | -p auto-990886 --memory=3072                           | auto-990886                  | jenkins | v1.32.0 | 18 Mar 24 14:09 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-572909             | newest-cni-572909            | jenkins | v1.32.0 | 18 Mar 24 14:09 UTC | 18 Mar 24 14:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-572909                                   | newest-cni-572909            | jenkins | v1.32.0 | 18 Mar 24 14:09 UTC | 18 Mar 24 14:10 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-572909                  | newest-cni-572909            | jenkins | v1.32.0 | 18 Mar 24 14:10 UTC | 18 Mar 24 14:10 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-572909 --memory=2200 --alsologtostderr   | newest-cni-572909            | jenkins | v1.32.0 | 18 Mar 24 14:10 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 14:10:00
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 14:10:00.922089 1163548 out.go:291] Setting OutFile to fd 1 ...
	I0318 14:10:00.922224 1163548 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 14:10:00.922237 1163548 out.go:304] Setting ErrFile to fd 2...
	I0318 14:10:00.922244 1163548 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 14:10:00.922505 1163548 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
	I0318 14:10:00.923070 1163548 out.go:298] Setting JSON to false
	I0318 14:10:00.924109 1163548 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":21148,"bootTime":1710749853,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 14:10:00.924531 1163548 start.go:139] virtualization: kvm guest
	I0318 14:10:00.943642 1163548 out.go:177] * [newest-cni-572909] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 14:10:00.945052 1163548 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 14:10:00.946300 1163548 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 14:10:00.945094 1163548 notify.go:220] Checking for updates...
	I0318 14:10:00.948769 1163548 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 14:10:00.950087 1163548 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18429-1106816/.minikube
	I0318 14:10:00.951481 1163548 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 14:10:00.952731 1163548 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 14:10:00.954540 1163548 config.go:182] Loaded profile config "newest-cni-572909": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 14:10:00.954988 1163548 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:10:00.955029 1163548 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:10:00.971667 1163548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42973
	I0318 14:10:00.972127 1163548 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:10:00.972732 1163548 main.go:141] libmachine: Using API Version  1
	I0318 14:10:00.972763 1163548 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:10:00.973195 1163548 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:10:00.973424 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .DriverName
	I0318 14:10:00.973691 1163548 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 14:10:00.974008 1163548 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:10:00.974045 1163548 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:10:00.989321 1163548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46713
	I0318 14:10:00.989892 1163548 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:10:00.990503 1163548 main.go:141] libmachine: Using API Version  1
	I0318 14:10:00.990539 1163548 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:10:00.990886 1163548 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:10:00.991085 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .DriverName
	I0318 14:10:01.029883 1163548 out.go:177] * Using the kvm2 driver based on existing profile
	I0318 14:10:01.031390 1163548 start.go:297] selected driver: kvm2
	I0318 14:10:01.031414 1163548 start.go:901] validating driver "kvm2" against &{Name:newest-cni-572909 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.29.0-rc.2 ClusterName:newest-cni-572909 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.13 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods
:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:10:01.031590 1163548 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 14:10:01.032493 1163548 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 14:10:01.032568 1163548 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18429-1106816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 14:10:01.048646 1163548 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 14:10:01.049078 1163548 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0318 14:10:01.049126 1163548 cni.go:84] Creating CNI manager for ""
	I0318 14:10:01.049139 1163548 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:10:01.049203 1163548 start.go:340] cluster config:
	{Name:newest-cni-572909 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-572909 Namespace:default APIS
erverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.13 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress
: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:10:01.049346 1163548 iso.go:125] acquiring lock: {Name:mke5f9989ad60de6f54f25c411af7da9f3932a4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 14:10:01.051406 1163548 out.go:177] * Starting "newest-cni-572909" primary control-plane node in "newest-cni-572909" cluster
	I0318 14:10:01.052989 1163548 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0318 14:10:01.053024 1163548 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0318 14:10:01.053031 1163548 cache.go:56] Caching tarball of preloaded images
	I0318 14:10:01.053111 1163548 preload.go:173] Found /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 14:10:01.053124 1163548 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on crio
	I0318 14:10:01.053220 1163548 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/newest-cni-572909/config.json ...
	I0318 14:10:01.053406 1163548 start.go:360] acquireMachinesLock for newest-cni-572909: {Name:mk0b1a2e71faf079d0c16c4e1393bdff17be3dfd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 14:10:01.053445 1163548 start.go:364] duration metric: took 21.193µs to acquireMachinesLock for "newest-cni-572909"
	I0318 14:10:01.053460 1163548 start.go:96] Skipping create...Using existing machine configuration
	I0318 14:10:01.053470 1163548 fix.go:54] fixHost starting: 
	I0318 14:10:01.053805 1163548 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:10:01.053841 1163548 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:10:01.069553 1163548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33101
	I0318 14:10:01.070042 1163548 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:10:01.070555 1163548 main.go:141] libmachine: Using API Version  1
	I0318 14:10:01.070579 1163548 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:10:01.070893 1163548 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:10:01.071137 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .DriverName
	I0318 14:10:01.071326 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetState
	I0318 14:10:01.073081 1163548 fix.go:112] recreateIfNeeded on newest-cni-572909: state=Stopped err=<nil>
	I0318 14:10:01.073110 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .DriverName
	W0318 14:10:01.073286 1163548 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 14:10:01.075292 1163548 out.go:177] * Restarting existing kvm2 VM for "newest-cni-572909" ...
	I0318 14:09:57.638335 1163134 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.134055432s)
	I0318 14:09:57.638369 1163134 crio.go:451] duration metric: took 3.134169852s to extract the tarball
	I0318 14:09:57.638387 1163134 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 14:09:57.681529 1163134 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:09:57.735858 1163134 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 14:09:57.735888 1163134 cache_images.go:84] Images are preloaded, skipping loading
	I0318 14:09:57.735898 1163134 kubeadm.go:928] updating node { 192.168.39.123 8443 v1.28.4 crio true true} ...
	I0318 14:09:57.736036 1163134 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-990886 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:auto-990886 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 14:09:57.736133 1163134 ssh_runner.go:195] Run: crio config
	I0318 14:09:57.791198 1163134 cni.go:84] Creating CNI manager for ""
	I0318 14:09:57.791222 1163134 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:09:57.791235 1163134 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 14:09:57.791257 1163134 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.123 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-990886 NodeName:auto-990886 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuber
netes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 14:09:57.791428 1163134 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.123
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-990886"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.123
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.123"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 14:09:57.791510 1163134 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 14:09:57.804466 1163134 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 14:09:57.804569 1163134 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 14:09:57.816287 1163134 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0318 14:09:57.838182 1163134 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 14:09:57.859090 1163134 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0318 14:09:57.879878 1163134 ssh_runner.go:195] Run: grep 192.168.39.123	control-plane.minikube.internal$ /etc/hosts
	I0318 14:09:57.884913 1163134 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 14:09:57.899538 1163134 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:09:58.030357 1163134 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 14:09:58.058539 1163134 certs.go:68] Setting up /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/auto-990886 for IP: 192.168.39.123
	I0318 14:09:58.058574 1163134 certs.go:194] generating shared ca certs ...
	I0318 14:09:58.058612 1163134 certs.go:226] acquiring lock for ca certs: {Name:mka24dbce78b8549c3bcfe7842863cce2dcda207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:09:58.058828 1163134 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key
	I0318 14:09:58.058910 1163134 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key
	I0318 14:09:58.058928 1163134 certs.go:256] generating profile certs ...
	I0318 14:09:58.059005 1163134 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/auto-990886/client.key
	I0318 14:09:58.059025 1163134 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/auto-990886/client.crt with IP's: []
	I0318 14:09:58.379835 1163134 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/auto-990886/client.crt ...
	I0318 14:09:58.379874 1163134 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/auto-990886/client.crt: {Name:mk4a51a1345b1a9fc31a968ce455a0815ddb1a9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:09:58.380048 1163134 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/auto-990886/client.key ...
	I0318 14:09:58.380061 1163134 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/auto-990886/client.key: {Name:mk71723b2b83fb515c615e7d53d62eff8e235072 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:09:58.380152 1163134 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/auto-990886/apiserver.key.0be95064
	I0318 14:09:58.380169 1163134 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/auto-990886/apiserver.crt.0be95064 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.123]
	I0318 14:09:58.629600 1163134 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/auto-990886/apiserver.crt.0be95064 ...
	I0318 14:09:58.629635 1163134 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/auto-990886/apiserver.crt.0be95064: {Name:mkace35035697e092c4e61cbddc69e8b953c38e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:09:58.629833 1163134 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/auto-990886/apiserver.key.0be95064 ...
	I0318 14:09:58.629857 1163134 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/auto-990886/apiserver.key.0be95064: {Name:mk28a1dfb36dcb1fd92dc380ce8865bdb7c729a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:09:58.629965 1163134 certs.go:381] copying /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/auto-990886/apiserver.crt.0be95064 -> /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/auto-990886/apiserver.crt
	I0318 14:09:58.630077 1163134 certs.go:385] copying /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/auto-990886/apiserver.key.0be95064 -> /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/auto-990886/apiserver.key
	I0318 14:09:58.630167 1163134 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/auto-990886/proxy-client.key
	I0318 14:09:58.630190 1163134 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/auto-990886/proxy-client.crt with IP's: []
	I0318 14:09:58.753215 1163134 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/auto-990886/proxy-client.crt ...
	I0318 14:09:58.753245 1163134 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/auto-990886/proxy-client.crt: {Name:mkbf46cb043a7f89b3065fed079875e0d92ee785 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:09:58.753408 1163134 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/auto-990886/proxy-client.key ...
	I0318 14:09:58.753419 1163134 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/auto-990886/proxy-client.key: {Name:mkc5b32d4f77204dbcb2680ffdfaf1d68716e15a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:09:58.753584 1163134 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem (1338 bytes)
	W0318 14:09:58.753619 1163134 certs.go:480] ignoring /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136_empty.pem, impossibly tiny 0 bytes
	I0318 14:09:58.753629 1163134 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 14:09:58.753648 1163134 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem (1078 bytes)
	I0318 14:09:58.753670 1163134 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem (1123 bytes)
	I0318 14:09:58.753691 1163134 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem (1679 bytes)
	I0318 14:09:58.753726 1163134 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 14:09:58.754351 1163134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 14:09:58.786645 1163134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 14:09:58.813385 1163134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 14:09:58.840205 1163134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 14:09:58.868432 1163134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/auto-990886/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I0318 14:09:58.900020 1163134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/auto-990886/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 14:09:58.930257 1163134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/auto-990886/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 14:09:58.961346 1163134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/auto-990886/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 14:09:58.991115 1163134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem --> /usr/share/ca-certificates/1114136.pem (1338 bytes)
	I0318 14:09:59.020729 1163134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /usr/share/ca-certificates/11141362.pem (1708 bytes)
	I0318 14:09:59.052603 1163134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 14:09:59.083210 1163134 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 14:09:59.104011 1163134 ssh_runner.go:195] Run: openssl version
	I0318 14:09:59.112105 1163134 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 14:09:59.127918 1163134 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:09:59.134708 1163134 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:09:59.134779 1163134 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:09:59.143030 1163134 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 14:09:59.175650 1163134 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1114136.pem && ln -fs /usr/share/ca-certificates/1114136.pem /etc/ssl/certs/1114136.pem"
	I0318 14:09:59.190700 1163134 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1114136.pem
	I0318 14:09:59.197533 1163134 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:26 /usr/share/ca-certificates/1114136.pem
	I0318 14:09:59.197609 1163134 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1114136.pem
	I0318 14:09:59.205116 1163134 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1114136.pem /etc/ssl/certs/51391683.0"
	I0318 14:09:59.219422 1163134 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11141362.pem && ln -fs /usr/share/ca-certificates/11141362.pem /etc/ssl/certs/11141362.pem"
	I0318 14:09:59.233931 1163134 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11141362.pem
	I0318 14:09:59.239211 1163134 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:26 /usr/share/ca-certificates/11141362.pem
	I0318 14:09:59.239279 1163134 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11141362.pem
	I0318 14:09:59.245821 1163134 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11141362.pem /etc/ssl/certs/3ec20f2e.0"
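The block above mirrors how CA certificates are normally installed on a Linux host: each PEM is linked under /usr/share/ca-certificates, then symlinked as <subject-hash>.0 in /etc/ssl/certs so OpenSSL can look it up by hash. A small sketch of that last step, assuming only that the openssl binary is available (it wraps the same "openssl x509 -hash" call shown in the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA asks openssl for the certificate's subject hash and exposes the
// cert to the system trust store as /etc/ssl/certs/<hash>.0.
func installCA(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // mirrors the force flag of "ln -fs"
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}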
	I0318 14:09:59.259654 1163134 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 14:09:59.264831 1163134 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0318 14:09:59.264908 1163134 kubeadm.go:391] StartCluster: {Name:auto-990886 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Clu
sterName:auto-990886 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.123 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:09:59.265017 1163134 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 14:09:59.265079 1163134 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:09:59.305007 1163134 cri.go:89] found id: ""
	I0318 14:09:59.305094 1163134 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0318 14:09:59.317993 1163134 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:09:59.331348 1163134 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:09:59.343338 1163134 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:09:59.343364 1163134 kubeadm.go:156] found existing configuration files:
	
	I0318 14:09:59.343422 1163134 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:09:59.354797 1163134 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:09:59.354870 1163134 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:09:59.366763 1163134 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:09:59.378064 1163134 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:09:59.378137 1163134 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:09:59.391616 1163134 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:09:59.405573 1163134 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:09:59.405653 1163134 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:09:59.419808 1163134 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:09:59.433187 1163134 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:09:59.433257 1163134 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 14:09:59.446681 1163134 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 14:09:59.677796 1163134 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 14:10:01.076602 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .Start
	I0318 14:10:01.076771 1163548 main.go:141] libmachine: (newest-cni-572909) Ensuring networks are active...
	I0318 14:10:01.077614 1163548 main.go:141] libmachine: (newest-cni-572909) Ensuring network default is active
	I0318 14:10:01.077928 1163548 main.go:141] libmachine: (newest-cni-572909) Ensuring network mk-newest-cni-572909 is active
	I0318 14:10:01.078340 1163548 main.go:141] libmachine: (newest-cni-572909) Getting domain xml...
	I0318 14:10:01.079103 1163548 main.go:141] libmachine: (newest-cni-572909) Creating domain...
	I0318 14:10:02.401728 1163548 main.go:141] libmachine: (newest-cni-572909) Waiting to get IP...
	I0318 14:10:02.402585 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:02.403011 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | unable to find current IP address of domain newest-cni-572909 in network mk-newest-cni-572909
	I0318 14:10:02.403077 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | I0318 14:10:02.402979 1163583 retry.go:31] will retry after 214.300941ms: waiting for machine to come up
	I0318 14:10:02.619603 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:02.620163 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | unable to find current IP address of domain newest-cni-572909 in network mk-newest-cni-572909
	I0318 14:10:02.620193 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | I0318 14:10:02.620127 1163583 retry.go:31] will retry after 239.060098ms: waiting for machine to come up
	I0318 14:10:02.860705 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:02.861197 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | unable to find current IP address of domain newest-cni-572909 in network mk-newest-cni-572909
	I0318 14:10:02.861237 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | I0318 14:10:02.861138 1163583 retry.go:31] will retry after 341.948724ms: waiting for machine to come up
	I0318 14:10:03.204741 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:03.205248 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | unable to find current IP address of domain newest-cni-572909 in network mk-newest-cni-572909
	I0318 14:10:03.205279 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | I0318 14:10:03.205199 1163583 retry.go:31] will retry after 399.316609ms: waiting for machine to come up
	I0318 14:10:03.606797 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:03.607375 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | unable to find current IP address of domain newest-cni-572909 in network mk-newest-cni-572909
	I0318 14:10:03.607400 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | I0318 14:10:03.607321 1163583 retry.go:31] will retry after 698.484539ms: waiting for machine to come up
	I0318 14:10:04.307936 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:04.308626 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | unable to find current IP address of domain newest-cni-572909 in network mk-newest-cni-572909
	I0318 14:10:04.308656 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | I0318 14:10:04.308572 1163583 retry.go:31] will retry after 844.943006ms: waiting for machine to come up
	I0318 14:10:05.155788 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:05.156401 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | unable to find current IP address of domain newest-cni-572909 in network mk-newest-cni-572909
	I0318 14:10:05.156437 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | I0318 14:10:05.156335 1163583 retry.go:31] will retry after 866.169726ms: waiting for machine to come up
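The "will retry after ...: waiting for machine to come up" lines above are the kvm2 driver polling libvirt for a DHCP lease, sleeping a little longer between attempts each time. A generic sketch of that wait pattern (the getIP callback is a placeholder, not libmachine's actual lookup):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls a lookup function until it returns an address or the
// deadline passes, sleeping slightly longer (plus jitter) after each miss.
func waitForIP(getIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := getIP(); err == nil {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay / 2)))
		time.Sleep(delay + jitter)
		delay += delay / 2 // back off a bit more on every attempt
	}
	return "", errors.New("timed out waiting for machine to come up")
}

func main() {
	// Placeholder lookup that never succeeds, just to exercise the loop.
	ip, err := waitForIP(func() (string, error) { return "", errors.New("no DHCP lease yet") }, 2*time.Second)
	fmt.Println(ip, err)
}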
	I0318 14:10:10.623218 1163134 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 14:10:10.623314 1163134 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 14:10:10.623447 1163134 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 14:10:10.623581 1163134 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 14:10:10.623718 1163134 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 14:10:10.623831 1163134 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 14:10:10.625592 1163134 out.go:204]   - Generating certificates and keys ...
	I0318 14:10:10.625710 1163134 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 14:10:10.625834 1163134 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 14:10:10.625951 1163134 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0318 14:10:10.626034 1163134 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0318 14:10:10.626127 1163134 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0318 14:10:10.626200 1163134 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0318 14:10:10.626275 1163134 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0318 14:10:10.626462 1163134 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [auto-990886 localhost] and IPs [192.168.39.123 127.0.0.1 ::1]
	I0318 14:10:10.626547 1163134 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0318 14:10:10.626725 1163134 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [auto-990886 localhost] and IPs [192.168.39.123 127.0.0.1 ::1]
	I0318 14:10:10.626825 1163134 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0318 14:10:10.626911 1163134 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0318 14:10:10.626972 1163134 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0318 14:10:10.627057 1163134 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 14:10:10.627137 1163134 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 14:10:10.627217 1163134 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 14:10:10.627334 1163134 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 14:10:10.627428 1163134 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 14:10:10.627550 1163134 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 14:10:10.627653 1163134 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 14:10:10.629310 1163134 out.go:204]   - Booting up control plane ...
	I0318 14:10:10.629436 1163134 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 14:10:10.629531 1163134 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 14:10:10.629617 1163134 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 14:10:10.629747 1163134 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 14:10:10.629866 1163134 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 14:10:10.629924 1163134 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 14:10:10.630115 1163134 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 14:10:10.630212 1163134 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.004010 seconds
	I0318 14:10:10.630345 1163134 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 14:10:10.630506 1163134 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 14:10:10.630581 1163134 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 14:10:10.630794 1163134 kubeadm.go:309] [mark-control-plane] Marking the node auto-990886 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 14:10:10.630866 1163134 kubeadm.go:309] [bootstrap-token] Using token: hpq0xf.2oy0bgk86fofnbsa
	I0318 14:10:10.632413 1163134 out.go:204]   - Configuring RBAC rules ...
	I0318 14:10:10.632562 1163134 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 14:10:10.632670 1163134 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 14:10:10.632870 1163134 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 14:10:10.633060 1163134 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 14:10:10.633222 1163134 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 14:10:10.633355 1163134 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 14:10:10.633531 1163134 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 14:10:10.633586 1163134 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 14:10:10.633657 1163134 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 14:10:10.633668 1163134 kubeadm.go:309] 
	I0318 14:10:10.633757 1163134 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 14:10:10.633766 1163134 kubeadm.go:309] 
	I0318 14:10:10.633878 1163134 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 14:10:10.633901 1163134 kubeadm.go:309] 
	I0318 14:10:10.633937 1163134 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 14:10:10.634019 1163134 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 14:10:10.634109 1163134 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 14:10:10.634127 1163134 kubeadm.go:309] 
	I0318 14:10:10.634181 1163134 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 14:10:10.634191 1163134 kubeadm.go:309] 
	I0318 14:10:10.634266 1163134 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 14:10:10.634284 1163134 kubeadm.go:309] 
	I0318 14:10:10.634361 1163134 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 14:10:10.634461 1163134 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 14:10:10.634558 1163134 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 14:10:10.634569 1163134 kubeadm.go:309] 
	I0318 14:10:10.634681 1163134 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 14:10:10.634796 1163134 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 14:10:10.634809 1163134 kubeadm.go:309] 
	I0318 14:10:10.634910 1163134 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token hpq0xf.2oy0bgk86fofnbsa \
	I0318 14:10:10.635042 1163134 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a1344f0f3711f58fc1cd7b9626e9cee7b8515c38b8838e5f1a0d7979c7202ddf \
	I0318 14:10:10.635074 1163134 kubeadm.go:309] 	--control-plane 
	I0318 14:10:10.635083 1163134 kubeadm.go:309] 
	I0318 14:10:10.635185 1163134 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 14:10:10.635198 1163134 kubeadm.go:309] 
	I0318 14:10:10.635296 1163134 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token hpq0xf.2oy0bgk86fofnbsa \
	I0318 14:10:10.635447 1163134 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a1344f0f3711f58fc1cd7b9626e9cee7b8515c38b8838e5f1a0d7979c7202ddf 
	I0318 14:10:10.635462 1163134 cni.go:84] Creating CNI manager for ""
	I0318 14:10:10.635471 1163134 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:10:10.637150 1163134 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 14:10:06.023981 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:06.024584 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | unable to find current IP address of domain newest-cni-572909 in network mk-newest-cni-572909
	I0318 14:10:06.024618 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | I0318 14:10:06.024517 1163583 retry.go:31] will retry after 1.381865185s: waiting for machine to come up
	I0318 14:10:07.408656 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:07.409289 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | unable to find current IP address of domain newest-cni-572909 in network mk-newest-cni-572909
	I0318 14:10:07.409330 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | I0318 14:10:07.409223 1163583 retry.go:31] will retry after 1.819899582s: waiting for machine to come up
	I0318 14:10:09.230766 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:09.231374 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | unable to find current IP address of domain newest-cni-572909 in network mk-newest-cni-572909
	I0318 14:10:09.231404 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | I0318 14:10:09.231339 1163583 retry.go:31] will retry after 1.986231824s: waiting for machine to come up
	I0318 14:10:10.638525 1163134 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 14:10:10.687033 1163134 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
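"Configuring bridge CNI" amounts to writing a conflist for the standard bridge and portmap plugins into /etc/cni/net.d (the 457-byte file copied above). A sketch of what such a file can look like; the field values are assumptions rather than the exact bytes minikube generates, except the subnet, which matches the podSubnet from the kubeadm config earlier:

package main

import (
	"encoding/json"
	"os"
)

func main() {
	// Illustrative bridge + portmap conflist; writing the file needs root.
	conflist := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []any{
			map[string]any{
				"type":      "bridge",
				"bridge":    "bridge",
				"isGateway": true,
				"ipMasq":    true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16", // podSubnet from the kubeadm config above
				},
			},
			map[string]any{
				"type":         "portmap",
				"capabilities": map[string]bool{"portMappings": true},
			},
		},
	}
	data, err := json.MarshalIndent(conflist, "", "  ")
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", data, 0o644); err != nil {
		panic(err)
	}
}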
	I0318 14:10:10.774411 1163134 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 14:10:10.774487 1163134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:10:10.774535 1163134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-990886 minikube.k8s.io/updated_at=2024_03_18T14_10_10_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a minikube.k8s.io/name=auto-990886 minikube.k8s.io/primary=true
	I0318 14:10:11.126403 1163134 ops.go:34] apiserver oom_adj: -16
	I0318 14:10:11.126559 1163134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:10:11.219602 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:11.220044 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | unable to find current IP address of domain newest-cni-572909 in network mk-newest-cni-572909
	I0318 14:10:11.220107 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | I0318 14:10:11.220010 1163583 retry.go:31] will retry after 2.11844215s: waiting for machine to come up
	I0318 14:10:13.341387 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:13.341891 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | unable to find current IP address of domain newest-cni-572909 in network mk-newest-cni-572909
	I0318 14:10:13.341921 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | I0318 14:10:13.341840 1163583 retry.go:31] will retry after 2.49389698s: waiting for machine to come up
	I0318 14:10:15.837753 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:15.838221 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | unable to find current IP address of domain newest-cni-572909 in network mk-newest-cni-572909
	I0318 14:10:15.838262 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | I0318 14:10:15.838158 1163583 retry.go:31] will retry after 3.996375459s: waiting for machine to come up
	I0318 14:10:11.627303 1163134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:10:12.127242 1163134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:10:12.627370 1163134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:10:13.126880 1163134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:10:13.626689 1163134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:10:14.127649 1163134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:10:14.626835 1163134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:10:15.127009 1163134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:10:15.627355 1163134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:10:16.126678 1163134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:10:19.839016 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:19.839575 1163548 main.go:141] libmachine: (newest-cni-572909) Found IP for machine: 192.168.72.13
	I0318 14:10:19.839597 1163548 main.go:141] libmachine: (newest-cni-572909) Reserving static IP address...
	I0318 14:10:19.839629 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has current primary IP address 192.168.72.13 and MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:19.840101 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | found host DHCP lease matching {name: "newest-cni-572909", mac: "52:54:00:a2:ca:ad", ip: "192.168.72.13"} in network mk-newest-cni-572909: {Iface:virbr4 ExpiryTime:2024-03-18 15:10:14 +0000 UTC Type:0 Mac:52:54:00:a2:ca:ad Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:newest-cni-572909 Clientid:01:52:54:00:a2:ca:ad}
	I0318 14:10:19.840135 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | skip adding static IP to network mk-newest-cni-572909 - found existing host DHCP lease matching {name: "newest-cni-572909", mac: "52:54:00:a2:ca:ad", ip: "192.168.72.13"}
	I0318 14:10:19.840150 1163548 main.go:141] libmachine: (newest-cni-572909) Reserved static IP address: 192.168.72.13
	I0318 14:10:19.840166 1163548 main.go:141] libmachine: (newest-cni-572909) Waiting for SSH to be available...
	I0318 14:10:19.840181 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | Getting to WaitForSSH function...
	I0318 14:10:19.842304 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:19.842685 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ca:ad", ip: ""} in network mk-newest-cni-572909: {Iface:virbr4 ExpiryTime:2024-03-18 15:10:14 +0000 UTC Type:0 Mac:52:54:00:a2:ca:ad Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:newest-cni-572909 Clientid:01:52:54:00:a2:ca:ad}
	I0318 14:10:19.842721 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined IP address 192.168.72.13 and MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:19.842812 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | Using SSH client type: external
	I0318 14:10:19.842846 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | Using SSH private key: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/newest-cni-572909/id_rsa (-rw-------)
	I0318 14:10:19.842871 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.13 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/newest-cni-572909/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 14:10:19.842880 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | About to run SSH command:
	I0318 14:10:19.842897 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | exit 0
	I0318 14:10:19.976660 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | SSH cmd err, output: <nil>: 
	I0318 14:10:19.977075 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetConfigRaw
	I0318 14:10:19.977751 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetIP
	I0318 14:10:19.980416 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:19.980781 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ca:ad", ip: ""} in network mk-newest-cni-572909: {Iface:virbr4 ExpiryTime:2024-03-18 15:10:14 +0000 UTC Type:0 Mac:52:54:00:a2:ca:ad Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:newest-cni-572909 Clientid:01:52:54:00:a2:ca:ad}
	I0318 14:10:19.980827 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined IP address 192.168.72.13 and MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:19.981009 1163548 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/newest-cni-572909/config.json ...
	I0318 14:10:19.981229 1163548 machine.go:94] provisionDockerMachine start ...
	I0318 14:10:19.981249 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .DriverName
	I0318 14:10:19.981483 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHHostname
	I0318 14:10:19.983893 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:19.984307 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ca:ad", ip: ""} in network mk-newest-cni-572909: {Iface:virbr4 ExpiryTime:2024-03-18 15:10:14 +0000 UTC Type:0 Mac:52:54:00:a2:ca:ad Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:newest-cni-572909 Clientid:01:52:54:00:a2:ca:ad}
	I0318 14:10:19.984354 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined IP address 192.168.72.13 and MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:19.984491 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHPort
	I0318 14:10:19.984642 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHKeyPath
	I0318 14:10:19.984780 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHKeyPath
	I0318 14:10:19.984948 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHUsername
	I0318 14:10:19.985128 1163548 main.go:141] libmachine: Using SSH client type: native
	I0318 14:10:19.985365 1163548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.13 22 <nil> <nil>}
	I0318 14:10:19.985379 1163548 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 14:10:20.097525 1163548 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 14:10:20.097562 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetMachineName
	I0318 14:10:20.097850 1163548 buildroot.go:166] provisioning hostname "newest-cni-572909"
	I0318 14:10:20.097884 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetMachineName
	I0318 14:10:20.098082 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHHostname
	I0318 14:10:20.101040 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:20.101365 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ca:ad", ip: ""} in network mk-newest-cni-572909: {Iface:virbr4 ExpiryTime:2024-03-18 15:10:14 +0000 UTC Type:0 Mac:52:54:00:a2:ca:ad Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:newest-cni-572909 Clientid:01:52:54:00:a2:ca:ad}
	I0318 14:10:20.101397 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined IP address 192.168.72.13 and MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:20.101558 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHPort
	I0318 14:10:20.101776 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHKeyPath
	I0318 14:10:20.101960 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHKeyPath
	I0318 14:10:20.102106 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHUsername
	I0318 14:10:20.102280 1163548 main.go:141] libmachine: Using SSH client type: native
	I0318 14:10:20.102441 1163548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.13 22 <nil> <nil>}
	I0318 14:10:20.102462 1163548 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-572909 && echo "newest-cni-572909" | sudo tee /etc/hostname
	I0318 14:10:20.234734 1163548 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-572909
	
	I0318 14:10:20.234772 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHHostname
	I0318 14:10:20.237735 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:20.238093 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ca:ad", ip: ""} in network mk-newest-cni-572909: {Iface:virbr4 ExpiryTime:2024-03-18 15:10:14 +0000 UTC Type:0 Mac:52:54:00:a2:ca:ad Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:newest-cni-572909 Clientid:01:52:54:00:a2:ca:ad}
	I0318 14:10:20.238181 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined IP address 192.168.72.13 and MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:20.238347 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHPort
	I0318 14:10:20.238614 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHKeyPath
	I0318 14:10:20.238796 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHKeyPath
	I0318 14:10:20.238926 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHUsername
	I0318 14:10:20.239104 1163548 main.go:141] libmachine: Using SSH client type: native
	I0318 14:10:20.239343 1163548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.13 22 <nil> <nil>}
	I0318 14:10:20.239367 1163548 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-572909' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-572909/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-572909' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 14:10:20.368301 1163548 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 14:10:20.368378 1163548 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18429-1106816/.minikube CaCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18429-1106816/.minikube}
	I0318 14:10:20.368447 1163548 buildroot.go:174] setting up certificates
	I0318 14:10:20.368462 1163548 provision.go:84] configureAuth start
	I0318 14:10:20.368484 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetMachineName
	I0318 14:10:20.368840 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetIP
	I0318 14:10:20.371935 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:20.372341 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ca:ad", ip: ""} in network mk-newest-cni-572909: {Iface:virbr4 ExpiryTime:2024-03-18 15:10:14 +0000 UTC Type:0 Mac:52:54:00:a2:ca:ad Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:newest-cni-572909 Clientid:01:52:54:00:a2:ca:ad}
	I0318 14:10:20.372379 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined IP address 192.168.72.13 and MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:20.372545 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHHostname
	I0318 14:10:20.374958 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:20.375261 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ca:ad", ip: ""} in network mk-newest-cni-572909: {Iface:virbr4 ExpiryTime:2024-03-18 15:10:14 +0000 UTC Type:0 Mac:52:54:00:a2:ca:ad Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:newest-cni-572909 Clientid:01:52:54:00:a2:ca:ad}
	I0318 14:10:20.375284 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined IP address 192.168.72.13 and MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:20.375405 1163548 provision.go:143] copyHostCerts
	I0318 14:10:20.375474 1163548 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem, removing ...
	I0318 14:10:20.375485 1163548 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 14:10:20.375555 1163548 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem (1078 bytes)
	I0318 14:10:20.375712 1163548 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem, removing ...
	I0318 14:10:20.375724 1163548 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 14:10:20.375746 1163548 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem (1123 bytes)
	I0318 14:10:20.375817 1163548 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem, removing ...
	I0318 14:10:20.375828 1163548 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 14:10:20.375845 1163548 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem (1679 bytes)
	I0318 14:10:20.375917 1163548 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem org=jenkins.newest-cni-572909 san=[127.0.0.1 192.168.72.13 localhost minikube newest-cni-572909]
	I0318 14:10:20.495684 1163548 provision.go:177] copyRemoteCerts
	I0318 14:10:20.495760 1163548 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 14:10:20.495789 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHHostname
	I0318 14:10:20.498744 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:20.499098 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ca:ad", ip: ""} in network mk-newest-cni-572909: {Iface:virbr4 ExpiryTime:2024-03-18 15:10:14 +0000 UTC Type:0 Mac:52:54:00:a2:ca:ad Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:newest-cni-572909 Clientid:01:52:54:00:a2:ca:ad}
	I0318 14:10:20.499121 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined IP address 192.168.72.13 and MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:20.499282 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHPort
	I0318 14:10:20.499473 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHKeyPath
	I0318 14:10:20.499636 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHUsername
	I0318 14:10:20.499792 1163548 sshutil.go:53] new ssh client: &{IP:192.168.72.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/newest-cni-572909/id_rsa Username:docker}
	I0318 14:10:20.587990 1163548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 14:10:20.616676 1163548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0318 14:10:20.648647 1163548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 14:10:20.682600 1163548 provision.go:87] duration metric: took 314.12081ms to configureAuth
	I0318 14:10:20.682632 1163548 buildroot.go:189] setting minikube options for container-runtime
	I0318 14:10:20.682849 1163548 config.go:182] Loaded profile config "newest-cni-572909": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 14:10:20.682973 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHHostname
	I0318 14:10:20.685925 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:20.686426 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ca:ad", ip: ""} in network mk-newest-cni-572909: {Iface:virbr4 ExpiryTime:2024-03-18 15:10:14 +0000 UTC Type:0 Mac:52:54:00:a2:ca:ad Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:newest-cni-572909 Clientid:01:52:54:00:a2:ca:ad}
	I0318 14:10:20.686486 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined IP address 192.168.72.13 and MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:20.686657 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHPort
	I0318 14:10:20.686884 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHKeyPath
	I0318 14:10:20.687078 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHKeyPath
	I0318 14:10:20.687240 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHUsername
	I0318 14:10:20.687448 1163548 main.go:141] libmachine: Using SSH client type: native
	I0318 14:10:20.687664 1163548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.13 22 <nil> <nil>}
	I0318 14:10:20.687684 1163548 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 14:10:16.627389 1163134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:10:17.127550 1163134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:10:17.627126 1163134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:10:18.126889 1163134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:10:18.626820 1163134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:10:19.126997 1163134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:10:19.627010 1163134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:10:20.127414 1163134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:10:20.627433 1163134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:10:21.127094 1163134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:10:20.989266 1163548 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 14:10:20.989297 1163548 machine.go:97] duration metric: took 1.008054457s to provisionDockerMachine
	I0318 14:10:20.989310 1163548 start.go:293] postStartSetup for "newest-cni-572909" (driver="kvm2")
	I0318 14:10:20.989323 1163548 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 14:10:20.989344 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .DriverName
	I0318 14:10:20.989739 1163548 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 14:10:20.989779 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHHostname
	I0318 14:10:20.992491 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:20.993002 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ca:ad", ip: ""} in network mk-newest-cni-572909: {Iface:virbr4 ExpiryTime:2024-03-18 15:10:14 +0000 UTC Type:0 Mac:52:54:00:a2:ca:ad Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:newest-cni-572909 Clientid:01:52:54:00:a2:ca:ad}
	I0318 14:10:20.993040 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined IP address 192.168.72.13 and MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:20.993167 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHPort
	I0318 14:10:20.993391 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHKeyPath
	I0318 14:10:20.993571 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHUsername
	I0318 14:10:20.993765 1163548 sshutil.go:53] new ssh client: &{IP:192.168.72.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/newest-cni-572909/id_rsa Username:docker}
	I0318 14:10:21.083180 1163548 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 14:10:21.088103 1163548 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 14:10:21.088128 1163548 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/addons for local assets ...
	I0318 14:10:21.088187 1163548 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/files for local assets ...
	I0318 14:10:21.088262 1163548 filesync.go:149] local asset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> 11141362.pem in /etc/ssl/certs
	I0318 14:10:21.088376 1163548 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 14:10:21.101138 1163548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 14:10:21.129924 1163548 start.go:296] duration metric: took 140.601676ms for postStartSetup
	I0318 14:10:21.129984 1163548 fix.go:56] duration metric: took 20.076494737s for fixHost
	I0318 14:10:21.130015 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHHostname
	I0318 14:10:21.133278 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:21.133712 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ca:ad", ip: ""} in network mk-newest-cni-572909: {Iface:virbr4 ExpiryTime:2024-03-18 15:10:14 +0000 UTC Type:0 Mac:52:54:00:a2:ca:ad Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:newest-cni-572909 Clientid:01:52:54:00:a2:ca:ad}
	I0318 14:10:21.133745 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined IP address 192.168.72.13 and MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:21.133935 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHPort
	I0318 14:10:21.134201 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHKeyPath
	I0318 14:10:21.134413 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHKeyPath
	I0318 14:10:21.134599 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHUsername
	I0318 14:10:21.134799 1163548 main.go:141] libmachine: Using SSH client type: native
	I0318 14:10:21.135013 1163548 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.13 22 <nil> <nil>}
	I0318 14:10:21.135028 1163548 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0318 14:10:21.250202 1163548 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710771021.225463311
	
	I0318 14:10:21.250235 1163548 fix.go:216] guest clock: 1710771021.225463311
	I0318 14:10:21.250246 1163548 fix.go:229] Guest: 2024-03-18 14:10:21.225463311 +0000 UTC Remote: 2024-03-18 14:10:21.12999126 +0000 UTC m=+20.255578157 (delta=95.472051ms)
	I0318 14:10:21.250273 1163548 fix.go:200] guest clock delta is within tolerance: 95.472051ms
	I0318 14:10:21.250280 1163548 start.go:83] releasing machines lock for "newest-cni-572909", held for 20.196825447s
	I0318 14:10:21.250306 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .DriverName
	I0318 14:10:21.250585 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetIP
	I0318 14:10:21.253815 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:21.254225 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ca:ad", ip: ""} in network mk-newest-cni-572909: {Iface:virbr4 ExpiryTime:2024-03-18 15:10:14 +0000 UTC Type:0 Mac:52:54:00:a2:ca:ad Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:newest-cni-572909 Clientid:01:52:54:00:a2:ca:ad}
	I0318 14:10:21.254250 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined IP address 192.168.72.13 and MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:21.254413 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .DriverName
	I0318 14:10:21.255081 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .DriverName
	I0318 14:10:21.255286 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .DriverName
	I0318 14:10:21.255432 1163548 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 14:10:21.255483 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHHostname
	I0318 14:10:21.255628 1163548 ssh_runner.go:195] Run: cat /version.json
	I0318 14:10:21.255649 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHHostname
	I0318 14:10:21.258418 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:21.258721 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:21.258807 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ca:ad", ip: ""} in network mk-newest-cni-572909: {Iface:virbr4 ExpiryTime:2024-03-18 15:10:14 +0000 UTC Type:0 Mac:52:54:00:a2:ca:ad Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:newest-cni-572909 Clientid:01:52:54:00:a2:ca:ad}
	I0318 14:10:21.258835 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined IP address 192.168.72.13 and MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:21.259043 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHPort
	I0318 14:10:21.259140 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ca:ad", ip: ""} in network mk-newest-cni-572909: {Iface:virbr4 ExpiryTime:2024-03-18 15:10:14 +0000 UTC Type:0 Mac:52:54:00:a2:ca:ad Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:newest-cni-572909 Clientid:01:52:54:00:a2:ca:ad}
	I0318 14:10:21.259186 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined IP address 192.168.72.13 and MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:21.259241 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHKeyPath
	I0318 14:10:21.259434 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHPort
	I0318 14:10:21.259471 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHUsername
	I0318 14:10:21.259612 1163548 sshutil.go:53] new ssh client: &{IP:192.168.72.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/newest-cni-572909/id_rsa Username:docker}
	I0318 14:10:21.259634 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHKeyPath
	I0318 14:10:21.259825 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHUsername
	I0318 14:10:21.259956 1163548 sshutil.go:53] new ssh client: &{IP:192.168.72.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/newest-cni-572909/id_rsa Username:docker}
	I0318 14:10:21.350839 1163548 ssh_runner.go:195] Run: systemctl --version
	I0318 14:10:21.371927 1163548 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 14:10:21.527547 1163548 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 14:10:21.535650 1163548 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 14:10:21.535726 1163548 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 14:10:21.554452 1163548 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 14:10:21.554483 1163548 start.go:494] detecting cgroup driver to use...
	I0318 14:10:21.554549 1163548 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 14:10:21.574923 1163548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 14:10:21.590858 1163548 docker.go:217] disabling cri-docker service (if available) ...
	I0318 14:10:21.590934 1163548 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 14:10:21.607224 1163548 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 14:10:21.624095 1163548 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 14:10:21.768250 1163548 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 14:10:21.919183 1163548 docker.go:233] disabling docker service ...
	I0318 14:10:21.919268 1163548 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 14:10:21.937814 1163548 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 14:10:21.954771 1163548 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 14:10:22.121230 1163548 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 14:10:22.274938 1163548 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 14:10:22.292136 1163548 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 14:10:22.314209 1163548 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 14:10:22.314284 1163548 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:10:22.327813 1163548 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 14:10:22.327892 1163548 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:10:22.340555 1163548 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:10:22.353320 1163548 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 14:10:22.365683 1163548 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 14:10:22.378663 1163548 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 14:10:22.390233 1163548 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 14:10:22.390300 1163548 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 14:10:22.406972 1163548 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 14:10:22.419305 1163548 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:10:22.553811 1163548 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 14:10:22.713177 1163548 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 14:10:22.713304 1163548 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 14:10:22.719041 1163548 start.go:562] Will wait 60s for crictl version
	I0318 14:10:22.719118 1163548 ssh_runner.go:195] Run: which crictl
	I0318 14:10:22.723877 1163548 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 14:10:22.782729 1163548 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 14:10:22.782843 1163548 ssh_runner.go:195] Run: crio --version
	I0318 14:10:22.814329 1163548 ssh_runner.go:195] Run: crio --version
	I0318 14:10:22.854818 1163548 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0318 14:10:22.856349 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetIP
	I0318 14:10:22.858896 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:22.859308 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ca:ad", ip: ""} in network mk-newest-cni-572909: {Iface:virbr4 ExpiryTime:2024-03-18 15:10:14 +0000 UTC Type:0 Mac:52:54:00:a2:ca:ad Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:newest-cni-572909 Clientid:01:52:54:00:a2:ca:ad}
	I0318 14:10:22.859357 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined IP address 192.168.72.13 and MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:22.859619 1163548 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0318 14:10:22.864746 1163548 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 14:10:22.880663 1163548 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0318 14:10:21.626684 1163134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:10:22.127624 1163134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:10:22.627313 1163134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:10:23.127464 1163134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:10:23.627128 1163134 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 14:10:23.783005 1163134 kubeadm.go:1107] duration metric: took 13.008590145s to wait for elevateKubeSystemPrivileges
	W0318 14:10:23.783054 1163134 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 14:10:23.783067 1163134 kubeadm.go:393] duration metric: took 24.518167139s to StartCluster
	I0318 14:10:23.783089 1163134 settings.go:142] acquiring lock: {Name:mk2d6b94ee5fa5f1dbbb15ba1d5560c3c0f78110 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:10:23.783185 1163134 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 14:10:23.785703 1163134 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/kubeconfig: {Name:mk9c139f2702214315ee08dd7c5d02f739047458 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:10:23.786023 1163134 start.go:234] Will wait 15m0s for node &{Name: IP:192.168.39.123 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 14:10:23.786060 1163134 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0318 14:10:23.786068 1163134 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 14:10:23.788387 1163134 out.go:177] * Verifying Kubernetes components...
	I0318 14:10:23.786176 1163134 addons.go:69] Setting storage-provisioner=true in profile "auto-990886"
	I0318 14:10:23.786179 1163134 addons.go:69] Setting default-storageclass=true in profile "auto-990886"
	I0318 14:10:23.786296 1163134 config.go:182] Loaded profile config "auto-990886": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 14:10:23.789979 1163134 addons.go:234] Setting addon storage-provisioner=true in "auto-990886"
	I0318 14:10:23.790001 1163134 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:10:23.790010 1163134 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-990886"
	I0318 14:10:23.790031 1163134 host.go:66] Checking if "auto-990886" exists ...
	I0318 14:10:23.790528 1163134 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:10:23.790545 1163134 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:10:23.790571 1163134 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:10:23.790583 1163134 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:10:23.811809 1163134 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36363
	I0318 14:10:23.811874 1163134 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35719
	I0318 14:10:23.812467 1163134 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:10:23.812541 1163134 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:10:23.813085 1163134 main.go:141] libmachine: Using API Version  1
	I0318 14:10:23.813104 1163134 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:10:23.813232 1163134 main.go:141] libmachine: Using API Version  1
	I0318 14:10:23.813250 1163134 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:10:23.813522 1163134 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:10:23.813623 1163134 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:10:23.813736 1163134 main.go:141] libmachine: (auto-990886) Calling .GetState
	I0318 14:10:23.814641 1163134 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:10:23.814681 1163134 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:10:23.817424 1163134 addons.go:234] Setting addon default-storageclass=true in "auto-990886"
	I0318 14:10:23.817466 1163134 host.go:66] Checking if "auto-990886" exists ...
	I0318 14:10:23.817746 1163134 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:10:23.817771 1163134 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:10:23.836227 1163134 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45641
	I0318 14:10:23.837389 1163134 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:10:23.837950 1163134 main.go:141] libmachine: Using API Version  1
	I0318 14:10:23.837972 1163134 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:10:23.838402 1163134 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:10:23.838584 1163134 main.go:141] libmachine: (auto-990886) Calling .GetState
	I0318 14:10:23.840405 1163134 main.go:141] libmachine: (auto-990886) Calling .DriverName
	I0318 14:10:23.842492 1163134 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:10:22.881993 1163548 kubeadm.go:877] updating cluster {Name:newest-cni-572909 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.0-rc.2 ClusterName:newest-cni-572909 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.13 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHos
tTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 14:10:22.882145 1163548 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0318 14:10:22.882226 1163548 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:10:22.925132 1163548 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0318 14:10:22.925225 1163548 ssh_runner.go:195] Run: which lz4
	I0318 14:10:22.930301 1163548 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 14:10:22.936942 1163548 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 14:10:22.936973 1163548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (401853962 bytes)
	I0318 14:10:24.836684 1163548 crio.go:444] duration metric: took 1.906423702s to copy over tarball
	I0318 14:10:24.836775 1163548 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 14:10:23.844021 1163134 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 14:10:23.844041 1163134 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 14:10:23.844063 1163134 main.go:141] libmachine: (auto-990886) Calling .GetSSHHostname
	I0318 14:10:23.845413 1163134 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41787
	I0318 14:10:23.845904 1163134 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:10:23.846516 1163134 main.go:141] libmachine: Using API Version  1
	I0318 14:10:23.846537 1163134 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:10:23.847103 1163134 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:10:23.848157 1163134 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:10:23.848407 1163134 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:10:23.848939 1163134 main.go:141] libmachine: (auto-990886) DBG | domain auto-990886 has defined MAC address 52:54:00:a2:e8:b5 in network mk-auto-990886
	I0318 14:10:23.849384 1163134 main.go:141] libmachine: (auto-990886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:e8:b5", ip: ""} in network mk-auto-990886: {Iface:virbr1 ExpiryTime:2024-03-18 15:09:42 +0000 UTC Type:0 Mac:52:54:00:a2:e8:b5 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:auto-990886 Clientid:01:52:54:00:a2:e8:b5}
	I0318 14:10:23.849459 1163134 main.go:141] libmachine: (auto-990886) DBG | domain auto-990886 has defined IP address 192.168.39.123 and MAC address 52:54:00:a2:e8:b5 in network mk-auto-990886
	I0318 14:10:23.849677 1163134 main.go:141] libmachine: (auto-990886) Calling .GetSSHPort
	I0318 14:10:23.849868 1163134 main.go:141] libmachine: (auto-990886) Calling .GetSSHKeyPath
	I0318 14:10:23.850014 1163134 main.go:141] libmachine: (auto-990886) Calling .GetSSHUsername
	I0318 14:10:23.850167 1163134 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/auto-990886/id_rsa Username:docker}
	I0318 14:10:23.875974 1163134 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36937
	I0318 14:10:23.876537 1163134 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:10:23.877322 1163134 main.go:141] libmachine: Using API Version  1
	I0318 14:10:23.877358 1163134 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:10:23.877807 1163134 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:10:23.878045 1163134 main.go:141] libmachine: (auto-990886) Calling .GetState
	I0318 14:10:23.879848 1163134 main.go:141] libmachine: (auto-990886) Calling .DriverName
	I0318 14:10:23.880141 1163134 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 14:10:23.880159 1163134 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 14:10:23.880181 1163134 main.go:141] libmachine: (auto-990886) Calling .GetSSHHostname
	I0318 14:10:23.883446 1163134 main.go:141] libmachine: (auto-990886) DBG | domain auto-990886 has defined MAC address 52:54:00:a2:e8:b5 in network mk-auto-990886
	I0318 14:10:23.883860 1163134 main.go:141] libmachine: (auto-990886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:e8:b5", ip: ""} in network mk-auto-990886: {Iface:virbr1 ExpiryTime:2024-03-18 15:09:42 +0000 UTC Type:0 Mac:52:54:00:a2:e8:b5 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:auto-990886 Clientid:01:52:54:00:a2:e8:b5}
	I0318 14:10:23.883880 1163134 main.go:141] libmachine: (auto-990886) DBG | domain auto-990886 has defined IP address 192.168.39.123 and MAC address 52:54:00:a2:e8:b5 in network mk-auto-990886
	I0318 14:10:23.884157 1163134 main.go:141] libmachine: (auto-990886) Calling .GetSSHPort
	I0318 14:10:23.884346 1163134 main.go:141] libmachine: (auto-990886) Calling .GetSSHKeyPath
	I0318 14:10:23.884537 1163134 main.go:141] libmachine: (auto-990886) Calling .GetSSHUsername
	I0318 14:10:23.884663 1163134 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/auto-990886/id_rsa Username:docker}
	I0318 14:10:24.050381 1163134 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0318 14:10:24.086196 1163134 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 14:10:24.226894 1163134 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 14:10:24.364643 1163134 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 14:10:26.157298 1163134 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.106866318s)
	I0318 14:10:26.157328 1163134 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.07108959s)
	I0318 14:10:26.157338 1163134 start.go:948] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0318 14:10:26.159134 1163134 node_ready.go:35] waiting up to 15m0s for node "auto-990886" to be "Ready" ...
	I0318 14:10:26.188567 1163134 node_ready.go:49] node "auto-990886" has status "Ready":"True"
	I0318 14:10:26.188597 1163134 node_ready.go:38] duration metric: took 29.435539ms for node "auto-990886" to be "Ready" ...
	I0318 14:10:26.188609 1163134 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 14:10:26.200121 1163134 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-rvgmj" in "kube-system" namespace to be "Ready" ...
	I0318 14:10:26.465323 1163134 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.100616209s)
	I0318 14:10:26.465407 1163134 main.go:141] libmachine: Making call to close driver server
	I0318 14:10:26.465424 1163134 main.go:141] libmachine: (auto-990886) Calling .Close
	I0318 14:10:26.465492 1163134 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.238555138s)
	I0318 14:10:26.465544 1163134 main.go:141] libmachine: Making call to close driver server
	I0318 14:10:26.465560 1163134 main.go:141] libmachine: (auto-990886) Calling .Close
	I0318 14:10:26.465912 1163134 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:10:26.465932 1163134 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:10:26.465940 1163134 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:10:26.465957 1163134 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:10:26.465960 1163134 main.go:141] libmachine: (auto-990886) DBG | Closing plugin on server side
	I0318 14:10:26.465966 1163134 main.go:141] libmachine: Making call to close driver server
	I0318 14:10:26.465979 1163134 main.go:141] libmachine: (auto-990886) Calling .Close
	I0318 14:10:26.465946 1163134 main.go:141] libmachine: Making call to close driver server
	I0318 14:10:26.466013 1163134 main.go:141] libmachine: (auto-990886) Calling .Close
	I0318 14:10:26.466209 1163134 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:10:26.466228 1163134 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:10:26.466297 1163134 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:10:26.466309 1163134 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:10:26.466296 1163134 main.go:141] libmachine: (auto-990886) DBG | Closing plugin on server side
	I0318 14:10:26.487138 1163134 main.go:141] libmachine: Making call to close driver server
	I0318 14:10:26.487164 1163134 main.go:141] libmachine: (auto-990886) Calling .Close
	I0318 14:10:26.487515 1163134 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:10:26.487538 1163134 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:10:26.489446 1163134 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0318 14:10:27.820969 1163548 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.984161509s)
	I0318 14:10:27.820997 1163548 crio.go:451] duration metric: took 2.984278293s to extract the tarball
	I0318 14:10:27.821004 1163548 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 14:10:27.863155 1163548 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 14:10:27.913081 1163548 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 14:10:27.913112 1163548 cache_images.go:84] Images are preloaded, skipping loading
	I0318 14:10:27.913121 1163548 kubeadm.go:928] updating node { 192.168.72.13 8443 v1.29.0-rc.2 crio true true} ...
	I0318 14:10:27.913254 1163548 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-572909 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.13
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-572909 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 14:10:27.913339 1163548 ssh_runner.go:195] Run: crio config
	I0318 14:10:27.978530 1163548 cni.go:84] Creating CNI manager for ""
	I0318 14:10:27.978563 1163548 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:10:27.978580 1163548 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0318 14:10:27.978608 1163548 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.72.13 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-572909 NodeName:newest-cni-572909 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.13"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs
:map[] NodeIP:192.168.72.13 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 14:10:27.978800 1163548 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.13
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-572909"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.13
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.13"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 14:10:27.978885 1163548 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0318 14:10:27.993178 1163548 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 14:10:27.993287 1163548 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 14:10:28.006983 1163548 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (358 bytes)
	I0318 14:10:28.028085 1163548 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0318 14:10:28.048827 1163548 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
	I0318 14:10:28.068903 1163548 ssh_runner.go:195] Run: grep 192.168.72.13	control-plane.minikube.internal$ /etc/hosts
	I0318 14:10:28.074012 1163548 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.13	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 14:10:28.090427 1163548 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:10:28.227314 1163548 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 14:10:28.250562 1163548 certs.go:68] Setting up /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/newest-cni-572909 for IP: 192.168.72.13
	I0318 14:10:28.250595 1163548 certs.go:194] generating shared ca certs ...
	I0318 14:10:28.250618 1163548 certs.go:226] acquiring lock for ca certs: {Name:mka24dbce78b8549c3bcfe7842863cce2dcda207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:10:28.250792 1163548 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key
	I0318 14:10:28.250836 1163548 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key
	I0318 14:10:28.250843 1163548 certs.go:256] generating profile certs ...
	I0318 14:10:28.250925 1163548 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/newest-cni-572909/client.key
	I0318 14:10:28.490785 1163548 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/newest-cni-572909/apiserver.key.3b943828
	I0318 14:10:28.490890 1163548 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/newest-cni-572909/proxy-client.key
	I0318 14:10:28.491048 1163548 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem (1338 bytes)
	W0318 14:10:28.491088 1163548 certs.go:480] ignoring /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136_empty.pem, impossibly tiny 0 bytes
	I0318 14:10:28.491102 1163548 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 14:10:28.491135 1163548 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem (1078 bytes)
	I0318 14:10:28.491171 1163548 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem (1123 bytes)
	I0318 14:10:28.491205 1163548 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem (1679 bytes)
	I0318 14:10:28.491258 1163548 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 14:10:28.492148 1163548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 14:10:28.523705 1163548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 14:10:28.559503 1163548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 14:10:28.589661 1163548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 14:10:28.620373 1163548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/newest-cni-572909/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0318 14:10:28.649425 1163548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/newest-cni-572909/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 14:10:28.680646 1163548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/newest-cni-572909/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 14:10:28.710601 1163548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/newest-cni-572909/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 14:10:28.739183 1163548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 14:10:28.771095 1163548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem --> /usr/share/ca-certificates/1114136.pem (1338 bytes)
	I0318 14:10:28.801204 1163548 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /usr/share/ca-certificates/11141362.pem (1708 bytes)
	I0318 14:10:28.830235 1163548 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 14:10:28.852506 1163548 ssh_runner.go:195] Run: openssl version
	I0318 14:10:28.859927 1163548 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 14:10:28.875187 1163548 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:10:28.881118 1163548 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:10:28.881195 1163548 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 14:10:28.888345 1163548 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 14:10:28.901355 1163548 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1114136.pem && ln -fs /usr/share/ca-certificates/1114136.pem /etc/ssl/certs/1114136.pem"
	I0318 14:10:28.914836 1163548 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1114136.pem
	I0318 14:10:28.920332 1163548 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:26 /usr/share/ca-certificates/1114136.pem
	I0318 14:10:28.920406 1163548 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1114136.pem
	I0318 14:10:28.927198 1163548 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1114136.pem /etc/ssl/certs/51391683.0"
	I0318 14:10:28.941055 1163548 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11141362.pem && ln -fs /usr/share/ca-certificates/11141362.pem /etc/ssl/certs/11141362.pem"
	I0318 14:10:28.954303 1163548 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11141362.pem
	I0318 14:10:28.959767 1163548 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:26 /usr/share/ca-certificates/11141362.pem
	I0318 14:10:28.959845 1163548 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11141362.pem
	I0318 14:10:28.966429 1163548 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11141362.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 14:10:28.979770 1163548 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 14:10:28.985181 1163548 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 14:10:28.992693 1163548 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 14:10:29.001075 1163548 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 14:10:29.008343 1163548 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 14:10:29.015013 1163548 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 14:10:29.021741 1163548 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 14:10:29.028121 1163548 kubeadm.go:391] StartCluster: {Name:newest-cni-572909 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.0-rc.2 ClusterName:newest-cni-572909 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.13 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTi
meout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 14:10:29.028238 1163548 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 14:10:29.028287 1163548 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:10:29.090238 1163548 cri.go:89] found id: ""
	I0318 14:10:29.090338 1163548 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 14:10:29.102803 1163548 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 14:10:29.102830 1163548 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 14:10:29.102854 1163548 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 14:10:29.102935 1163548 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 14:10:29.115035 1163548 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 14:10:29.116579 1163548 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-572909" does not appear in /home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 14:10:29.117671 1163548 kubeconfig.go:62] /home/jenkins/minikube-integration/18429-1106816/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-572909" cluster setting kubeconfig missing "newest-cni-572909" context setting]
	I0318 14:10:29.119208 1163548 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/kubeconfig: {Name:mk9c139f2702214315ee08dd7c5d02f739047458 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:10:29.164741 1163548 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 14:10:29.177314 1163548 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.13
	I0318 14:10:29.177350 1163548 kubeadm.go:1154] stopping kube-system containers ...
	I0318 14:10:29.177363 1163548 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 14:10:29.177421 1163548 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 14:10:29.229379 1163548 cri.go:89] found id: ""
	I0318 14:10:29.229534 1163548 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 14:10:29.251015 1163548 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 14:10:29.264283 1163548 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 14:10:29.264312 1163548 kubeadm.go:156] found existing configuration files:
	
	I0318 14:10:29.264384 1163548 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 14:10:29.276398 1163548 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 14:10:29.276477 1163548 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 14:10:29.290897 1163548 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 14:10:29.304101 1163548 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 14:10:29.304181 1163548 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 14:10:29.316533 1163548 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 14:10:29.328490 1163548 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 14:10:29.328547 1163548 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 14:10:29.340310 1163548 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 14:10:29.351902 1163548 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 14:10:29.351982 1163548 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 14:10:29.364392 1163548 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 14:10:29.376601 1163548 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:10:29.518896 1163548 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:10:30.496164 1163548 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:10:30.738457 1163548 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:10:30.833306 1163548 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:10:26.490899 1163134 addons.go:505] duration metric: took 2.704835496s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0318 14:10:26.662735 1163134 kapi.go:248] "coredns" deployment in "kube-system" namespace and "auto-990886" context rescaled to 1 replicas
	I0318 14:10:29.748109 1163134 pod_ready.go:102] pod "coredns-5dd5756b68-rvgmj" in "kube-system" namespace has status "Ready":"False"
	I0318 14:10:30.957596 1163548 api_server.go:52] waiting for apiserver process to appear ...
	I0318 14:10:30.957709 1163548 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:10:31.458185 1163548 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:10:31.957892 1163548 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:10:31.993525 1163548 api_server.go:72] duration metric: took 1.035928703s to wait for apiserver process to appear ...
	I0318 14:10:31.993566 1163548 api_server.go:88] waiting for apiserver healthz status ...
	I0318 14:10:31.993590 1163548 api_server.go:253] Checking apiserver healthz at https://192.168.72.13:8443/healthz ...
	I0318 14:10:31.994348 1163548 api_server.go:269] stopped: https://192.168.72.13:8443/healthz: Get "https://192.168.72.13:8443/healthz": dial tcp 192.168.72.13:8443: connect: connection refused
	I0318 14:10:32.494045 1163548 api_server.go:253] Checking apiserver healthz at https://192.168.72.13:8443/healthz ...
	I0318 14:10:35.482801 1163548 api_server.go:279] https://192.168.72.13:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 14:10:35.482836 1163548 api_server.go:103] status: https://192.168.72.13:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 14:10:35.482856 1163548 api_server.go:253] Checking apiserver healthz at https://192.168.72.13:8443/healthz ...
	I0318 14:10:35.499513 1163548 api_server.go:279] https://192.168.72.13:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 14:10:35.499546 1163548 api_server.go:103] status: https://192.168.72.13:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 14:10:35.499562 1163548 api_server.go:253] Checking apiserver healthz at https://192.168.72.13:8443/healthz ...
	I0318 14:10:35.604478 1163548 api_server.go:279] https://192.168.72.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:10:35.604521 1163548 api_server.go:103] status: https://192.168.72.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:10:32.208439 1163134 pod_ready.go:102] pod "coredns-5dd5756b68-rvgmj" in "kube-system" namespace has status "Ready":"False"
	I0318 14:10:34.208684 1163134 pod_ready.go:102] pod "coredns-5dd5756b68-rvgmj" in "kube-system" namespace has status "Ready":"False"
	I0318 14:10:36.210824 1163134 pod_ready.go:102] pod "coredns-5dd5756b68-rvgmj" in "kube-system" namespace has status "Ready":"False"
	I0318 14:10:35.993807 1163548 api_server.go:253] Checking apiserver healthz at https://192.168.72.13:8443/healthz ...
	I0318 14:10:36.000257 1163548 api_server.go:279] https://192.168.72.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:10:36.000286 1163548 api_server.go:103] status: https://192.168.72.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:10:36.494279 1163548 api_server.go:253] Checking apiserver healthz at https://192.168.72.13:8443/healthz ...
	I0318 14:10:36.506562 1163548 api_server.go:279] https://192.168.72.13:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 14:10:36.506597 1163548 api_server.go:103] status: https://192.168.72.13:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 14:10:36.994125 1163548 api_server.go:253] Checking apiserver healthz at https://192.168.72.13:8443/healthz ...
	I0318 14:10:36.999281 1163548 api_server.go:279] https://192.168.72.13:8443/healthz returned 200:
	ok
	I0318 14:10:37.007355 1163548 api_server.go:141] control plane version: v1.29.0-rc.2
	I0318 14:10:37.007387 1163548 api_server.go:131] duration metric: took 5.013813584s to wait for apiserver health ...
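
As an aside on the healthz loop above: the probe is unauthenticated, which is why the apiserver first answers 403 for "system:anonymous" (until the RBAC bootstrap poststarthook runs), then 500 while the remaining poststarthooks finish, and finally 200. A minimal Go sketch of such a poller — the endpoint, timeout, and the anonymous TLS-skipping client are assumptions read off the log, not minikube's actual api_server.go implementation:

    // Sketch only: poll https://<ip>:8443/healthz until it returns 200.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		// Anonymous probe, so skip certificate verification (assumption).
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // apiserver reports "ok"
    			}
    			// 403 until RBAC bootstrap roles exist, 500 while poststarthooks run.
    			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out waiting for %s", url)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.72.13:8443/healthz", time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
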
	I0318 14:10:37.007396 1163548 cni.go:84] Creating CNI manager for ""
	I0318 14:10:37.007402 1163548 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 14:10:37.009375 1163548 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 14:10:37.010729 1163548 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 14:10:37.028048 1163548 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
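
For context on the conflist copied above: its exact contents are not shown in the log, but the step implies a bridge CNI configuration of roughly this shape. Everything below is an assumption — the fields follow the standard containernetworking bridge/host-local/portmap schema, and the subnet is taken from the pod-network-cidr (10.42.0.0/16) in the cluster config earlier in the log — written as a small Go sketch that writes the file locally rather than over SSH as minikube does:

    // Sketch only: a plausible /etc/cni/net.d/1-k8s.conflist for the bridge CNI.
    package main

    import "os"

    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.42.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }
    `

    func main() {
    	// Written locally for illustration; minikube copies it into the guest.
    	if err := os.WriteFile("1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
    		panic(err)
    	}
    }
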
	I0318 14:10:37.071965 1163548 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 14:10:37.091841 1163548 system_pods.go:59] 8 kube-system pods found
	I0318 14:10:37.091889 1163548 system_pods.go:61] "coredns-76f75df574-vm2x6" [8fe4db6d-755f-41c8-ab1d-1d1b27809fb4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 14:10:37.091904 1163548 system_pods.go:61] "etcd-newest-cni-572909" [7201b304-d725-47d4-a207-bd33583db48f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 14:10:37.091920 1163548 system_pods.go:61] "kube-apiserver-newest-cni-572909" [657b614b-3961-4fc7-b1e6-c0f080c720ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 14:10:37.091929 1163548 system_pods.go:61] "kube-controller-manager-newest-cni-572909" [d6cfa491-ebeb-430b-abaf-8f36c4cc96e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 14:10:37.091937 1163548 system_pods.go:61] "kube-proxy-jw9cr" [2ba2fb0e-c79b-4e58-943c-946340b45614] Running
	I0318 14:10:37.091945 1163548 system_pods.go:61] "kube-scheduler-newest-cni-572909" [bd3a3126-181c-41a4-aa3f-f74eed37bbab] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 14:10:37.091962 1163548 system_pods.go:61] "metrics-server-57f55c9bc5-rlgtg" [70901719-fdc7-43b1-8ad0-8c4f3fdea727] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:10:37.091976 1163548 system_pods.go:61] "storage-provisioner" [f93d5498-271f-4979-b70e-8f2607f338a7] Running
	I0318 14:10:37.091986 1163548 system_pods.go:74] duration metric: took 19.992508ms to wait for pod list to return data ...
	I0318 14:10:37.091995 1163548 node_conditions.go:102] verifying NodePressure condition ...
	I0318 14:10:37.096856 1163548 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 14:10:37.096888 1163548 node_conditions.go:123] node cpu capacity is 2
	I0318 14:10:37.096900 1163548 node_conditions.go:105] duration metric: took 4.895976ms to run NodePressure ...
	I0318 14:10:37.096920 1163548 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 14:10:37.383945 1163548 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 14:10:37.397300 1163548 ops.go:34] apiserver oom_adj: -16
	I0318 14:10:37.397323 1163548 kubeadm.go:591] duration metric: took 8.29446176s to restartPrimaryControlPlane
	I0318 14:10:37.397334 1163548 kubeadm.go:393] duration metric: took 8.369221923s to StartCluster
	I0318 14:10:37.397356 1163548 settings.go:142] acquiring lock: {Name:mk2d6b94ee5fa5f1dbbb15ba1d5560c3c0f78110 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:10:37.397452 1163548 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 14:10:37.399556 1163548 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/kubeconfig: {Name:mk9c139f2702214315ee08dd7c5d02f739047458 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 14:10:37.399870 1163548 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.13 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 14:10:37.401734 1163548 out.go:177] * Verifying Kubernetes components...
	I0318 14:10:37.399999 1163548 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 14:10:37.400138 1163548 config.go:182] Loaded profile config "newest-cni-572909": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 14:10:37.403112 1163548 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-572909"
	I0318 14:10:37.403125 1163548 addons.go:69] Setting dashboard=true in profile "newest-cni-572909"
	I0318 14:10:37.403174 1163548 addons.go:234] Setting addon dashboard=true in "newest-cni-572909"
	I0318 14:10:37.403176 1163548 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-572909"
	W0318 14:10:37.403186 1163548 addons.go:243] addon dashboard should already be in state true
	W0318 14:10:37.403191 1163548 addons.go:243] addon storage-provisioner should already be in state true
	I0318 14:10:37.403216 1163548 host.go:66] Checking if "newest-cni-572909" exists ...
	I0318 14:10:37.403225 1163548 host.go:66] Checking if "newest-cni-572909" exists ...
	I0318 14:10:37.403113 1163548 addons.go:69] Setting default-storageclass=true in profile "newest-cni-572909"
	I0318 14:10:37.403256 1163548 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-572909"
	I0318 14:10:37.403125 1163548 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 14:10:37.403127 1163548 addons.go:69] Setting metrics-server=true in profile "newest-cni-572909"
	I0318 14:10:37.403535 1163548 addons.go:234] Setting addon metrics-server=true in "newest-cni-572909"
	W0318 14:10:37.403551 1163548 addons.go:243] addon metrics-server should already be in state true
	I0318 14:10:37.403598 1163548 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:10:37.403611 1163548 host.go:66] Checking if "newest-cni-572909" exists ...
	I0318 14:10:37.403629 1163548 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:10:37.403658 1163548 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:10:37.403693 1163548 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:10:37.403647 1163548 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:10:37.403761 1163548 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:10:37.403994 1163548 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:10:37.404034 1163548 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:10:37.422565 1163548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41237
	I0318 14:10:37.424043 1163548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39735
	I0318 14:10:37.424196 1163548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33777
	I0318 14:10:37.424432 1163548 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:10:37.424528 1163548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45253
	I0318 14:10:37.424590 1163548 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:10:37.424662 1163548 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:10:37.424937 1163548 main.go:141] libmachine: Using API Version  1
	I0318 14:10:37.424962 1163548 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:10:37.425094 1163548 main.go:141] libmachine: Using API Version  1
	I0318 14:10:37.425127 1163548 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:10:37.425146 1163548 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:10:37.425224 1163548 main.go:141] libmachine: Using API Version  1
	I0318 14:10:37.425248 1163548 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:10:37.425457 1163548 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:10:37.425541 1163548 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:10:37.425595 1163548 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:10:37.425720 1163548 main.go:141] libmachine: Using API Version  1
	I0318 14:10:37.425735 1163548 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:10:37.426058 1163548 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:10:37.426093 1163548 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:10:37.426189 1163548 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:10:37.426219 1163548 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:10:37.426288 1163548 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:10:37.426303 1163548 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:10:37.426355 1163548 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:10:37.426552 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetState
	I0318 14:10:37.430678 1163548 addons.go:234] Setting addon default-storageclass=true in "newest-cni-572909"
	W0318 14:10:37.430702 1163548 addons.go:243] addon default-storageclass should already be in state true
	I0318 14:10:37.430731 1163548 host.go:66] Checking if "newest-cni-572909" exists ...
	I0318 14:10:37.431127 1163548 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:10:37.431167 1163548 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:10:37.444828 1163548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32881
	I0318 14:10:37.445368 1163548 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:10:37.445926 1163548 main.go:141] libmachine: Using API Version  1
	I0318 14:10:37.445951 1163548 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:10:37.446294 1163548 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:10:37.446510 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetState
	I0318 14:10:37.447101 1163548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38295
	I0318 14:10:37.447722 1163548 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:10:37.448264 1163548 main.go:141] libmachine: Using API Version  1
	I0318 14:10:37.448286 1163548 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:10:37.448957 1163548 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:10:37.449102 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetState
	I0318 14:10:37.449142 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .DriverName
	I0318 14:10:37.451368 1163548 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0318 14:10:37.451103 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .DriverName
	I0318 14:10:37.452213 1163548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35975
	I0318 14:10:37.453055 1163548 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0318 14:10:37.454474 1163548 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0318 14:10:37.454495 1163548 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0318 14:10:37.454516 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHHostname
	I0318 14:10:37.456035 1163548 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 14:10:37.453566 1163548 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:10:37.455566 1163548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44395
	I0318 14:10:37.457409 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:37.457693 1163548 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 14:10:37.457713 1163548 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 14:10:37.457732 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHHostname
	I0318 14:10:37.458024 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHPort
	I0318 14:10:37.458076 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ca:ad", ip: ""} in network mk-newest-cni-572909: {Iface:virbr4 ExpiryTime:2024-03-18 15:10:14 +0000 UTC Type:0 Mac:52:54:00:a2:ca:ad Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:newest-cni-572909 Clientid:01:52:54:00:a2:ca:ad}
	I0318 14:10:37.458091 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined IP address 192.168.72.13 and MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:37.458187 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHKeyPath
	I0318 14:10:37.458490 1163548 main.go:141] libmachine: Using API Version  1
	I0318 14:10:37.458513 1163548 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:10:37.458533 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHUsername
	I0318 14:10:37.458691 1163548 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:10:37.458676 1163548 sshutil.go:53] new ssh client: &{IP:192.168.72.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/newest-cni-572909/id_rsa Username:docker}
	I0318 14:10:37.459160 1163548 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:10:37.459192 1163548 main.go:141] libmachine: Using API Version  1
	I0318 14:10:37.459217 1163548 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:10:37.459645 1163548 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:10:37.459888 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetState
	I0318 14:10:37.459918 1163548 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 14:10:37.459957 1163548 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 14:10:37.460969 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:37.461430 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ca:ad", ip: ""} in network mk-newest-cni-572909: {Iface:virbr4 ExpiryTime:2024-03-18 15:10:14 +0000 UTC Type:0 Mac:52:54:00:a2:ca:ad Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:newest-cni-572909 Clientid:01:52:54:00:a2:ca:ad}
	I0318 14:10:37.461457 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined IP address 192.168.72.13 and MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:37.461854 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHPort
	I0318 14:10:37.461896 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .DriverName
	I0318 14:10:37.462010 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHKeyPath
	I0318 14:10:37.463662 1163548 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 14:10:37.462290 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHUsername
	I0318 14:10:37.465066 1163548 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 14:10:37.465091 1163548 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 14:10:37.465111 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHHostname
	I0318 14:10:37.465162 1163548 sshutil.go:53] new ssh client: &{IP:192.168.72.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/newest-cni-572909/id_rsa Username:docker}
	I0318 14:10:37.467685 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:37.468046 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ca:ad", ip: ""} in network mk-newest-cni-572909: {Iface:virbr4 ExpiryTime:2024-03-18 15:10:14 +0000 UTC Type:0 Mac:52:54:00:a2:ca:ad Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:newest-cni-572909 Clientid:01:52:54:00:a2:ca:ad}
	I0318 14:10:37.468074 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined IP address 192.168.72.13 and MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:37.468196 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHPort
	I0318 14:10:37.468397 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHKeyPath
	I0318 14:10:37.468615 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHUsername
	I0318 14:10:37.468759 1163548 sshutil.go:53] new ssh client: &{IP:192.168.72.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/newest-cni-572909/id_rsa Username:docker}
	I0318 14:10:37.478865 1163548 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35769
	I0318 14:10:37.479311 1163548 main.go:141] libmachine: () Calling .GetVersion
	I0318 14:10:37.479991 1163548 main.go:141] libmachine: Using API Version  1
	I0318 14:10:37.480022 1163548 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 14:10:37.480623 1163548 main.go:141] libmachine: () Calling .GetMachineName
	I0318 14:10:37.480832 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetState
	I0318 14:10:37.482266 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .DriverName
	I0318 14:10:37.482584 1163548 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 14:10:37.482604 1163548 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 14:10:37.482622 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHHostname
	I0318 14:10:37.486056 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:37.486413 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a2:ca:ad", ip: ""} in network mk-newest-cni-572909: {Iface:virbr4 ExpiryTime:2024-03-18 15:10:14 +0000 UTC Type:0 Mac:52:54:00:a2:ca:ad Iaid: IPaddr:192.168.72.13 Prefix:24 Hostname:newest-cni-572909 Clientid:01:52:54:00:a2:ca:ad}
	I0318 14:10:37.486451 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | domain newest-cni-572909 has defined IP address 192.168.72.13 and MAC address 52:54:00:a2:ca:ad in network mk-newest-cni-572909
	I0318 14:10:37.486671 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHPort
	I0318 14:10:37.486831 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHKeyPath
	I0318 14:10:37.486959 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .GetSSHUsername
	I0318 14:10:37.487070 1163548 sshutil.go:53] new ssh client: &{IP:192.168.72.13 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/newest-cni-572909/id_rsa Username:docker}
	I0318 14:10:37.644763 1163548 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 14:10:37.663876 1163548 api_server.go:52] waiting for apiserver process to appear ...
	I0318 14:10:37.663980 1163548 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 14:10:37.679938 1163548 api_server.go:72] duration metric: took 280.024395ms to wait for apiserver process to appear ...
	I0318 14:10:37.679966 1163548 api_server.go:88] waiting for apiserver healthz status ...
	I0318 14:10:37.679988 1163548 api_server.go:253] Checking apiserver healthz at https://192.168.72.13:8443/healthz ...
	I0318 14:10:37.686040 1163548 api_server.go:279] https://192.168.72.13:8443/healthz returned 200:
	ok
	I0318 14:10:37.691114 1163548 api_server.go:141] control plane version: v1.29.0-rc.2
	I0318 14:10:37.691144 1163548 api_server.go:131] duration metric: took 11.169372ms to wait for apiserver health ...
	I0318 14:10:37.691155 1163548 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 14:10:37.698384 1163548 system_pods.go:59] 8 kube-system pods found
	I0318 14:10:37.698417 1163548 system_pods.go:61] "coredns-76f75df574-vm2x6" [8fe4db6d-755f-41c8-ab1d-1d1b27809fb4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 14:10:37.698427 1163548 system_pods.go:61] "etcd-newest-cni-572909" [7201b304-d725-47d4-a207-bd33583db48f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 14:10:37.698440 1163548 system_pods.go:61] "kube-apiserver-newest-cni-572909" [657b614b-3961-4fc7-b1e6-c0f080c720ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 14:10:37.698449 1163548 system_pods.go:61] "kube-controller-manager-newest-cni-572909" [d6cfa491-ebeb-430b-abaf-8f36c4cc96e4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 14:10:37.698455 1163548 system_pods.go:61] "kube-proxy-jw9cr" [2ba2fb0e-c79b-4e58-943c-946340b45614] Running
	I0318 14:10:37.698463 1163548 system_pods.go:61] "kube-scheduler-newest-cni-572909" [bd3a3126-181c-41a4-aa3f-f74eed37bbab] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 14:10:37.698475 1163548 system_pods.go:61] "metrics-server-57f55c9bc5-rlgtg" [70901719-fdc7-43b1-8ad0-8c4f3fdea727] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 14:10:37.698481 1163548 system_pods.go:61] "storage-provisioner" [f93d5498-271f-4979-b70e-8f2607f338a7] Running
	I0318 14:10:37.698492 1163548 system_pods.go:74] duration metric: took 7.328929ms to wait for pod list to return data ...
	I0318 14:10:37.698501 1163548 default_sa.go:34] waiting for default service account to be created ...
	I0318 14:10:37.704637 1163548 default_sa.go:45] found service account: "default"
	I0318 14:10:37.704669 1163548 default_sa.go:55] duration metric: took 6.157397ms for default service account to be created ...
	I0318 14:10:37.704683 1163548 kubeadm.go:576] duration metric: took 304.774063ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0318 14:10:37.704707 1163548 node_conditions.go:102] verifying NodePressure condition ...
	I0318 14:10:37.708632 1163548 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 14:10:37.708658 1163548 node_conditions.go:123] node cpu capacity is 2
	I0318 14:10:37.708672 1163548 node_conditions.go:105] duration metric: took 3.958829ms to run NodePressure ...
	I0318 14:10:37.708687 1163548 start.go:240] waiting for startup goroutines ...
	I0318 14:10:37.759965 1163548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 14:10:37.765014 1163548 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 14:10:37.765035 1163548 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 14:10:37.793731 1163548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 14:10:37.827376 1163548 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 14:10:37.827408 1163548 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 14:10:37.899641 1163548 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0318 14:10:37.899679 1163548 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0318 14:10:37.923778 1163548 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 14:10:37.923811 1163548 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 14:10:37.972208 1163548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 14:10:37.974130 1163548 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0318 14:10:37.974157 1163548 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0318 14:10:38.099846 1163548 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0318 14:10:38.099880 1163548 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0318 14:10:38.173899 1163548 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0318 14:10:38.173926 1163548 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0318 14:10:38.235721 1163548 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0318 14:10:38.235750 1163548 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0318 14:10:38.276859 1163548 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0318 14:10:38.276892 1163548 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0318 14:10:38.304706 1163548 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0318 14:10:38.304733 1163548 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0318 14:10:38.352970 1163548 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0318 14:10:38.352999 1163548 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0318 14:10:38.385615 1163548 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0318 14:10:38.385646 1163548 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0318 14:10:38.437716 1163548 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0318 14:10:39.505554 1163548 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.74554448s)
	I0318 14:10:39.505620 1163548 main.go:141] libmachine: Making call to close driver server
	I0318 14:10:39.505619 1163548 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.711853904s)
	I0318 14:10:39.505655 1163548 main.go:141] libmachine: Making call to close driver server
	I0318 14:10:39.505671 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .Close
	I0318 14:10:39.505633 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .Close
	I0318 14:10:39.505736 1163548 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.533489565s)
	I0318 14:10:39.505785 1163548 main.go:141] libmachine: Making call to close driver server
	I0318 14:10:39.505800 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .Close
	I0318 14:10:39.507613 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | Closing plugin on server side
	I0318 14:10:39.507615 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | Closing plugin on server side
	I0318 14:10:39.507618 1163548 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:10:39.507637 1163548 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:10:39.507641 1163548 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:10:39.507646 1163548 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:10:39.507649 1163548 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:10:39.507652 1163548 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:10:39.507656 1163548 main.go:141] libmachine: Making call to close driver server
	I0318 14:10:39.507659 1163548 main.go:141] libmachine: Making call to close driver server
	I0318 14:10:39.507665 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .Close
	I0318 14:10:39.507668 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .Close
	I0318 14:10:39.507620 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | Closing plugin on server side
	I0318 14:10:39.507660 1163548 main.go:141] libmachine: Making call to close driver server
	I0318 14:10:39.507739 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .Close
	I0318 14:10:39.509673 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | Closing plugin on server side
	I0318 14:10:39.509673 1163548 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:10:39.509696 1163548 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:10:39.509703 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | Closing plugin on server side
	I0318 14:10:39.509711 1163548 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:10:39.509713 1163548 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:10:39.509730 1163548 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:10:39.509730 1163548 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:10:39.509739 1163548 addons.go:470] Verifying addon metrics-server=true in "newest-cni-572909"
	I0318 14:10:39.509744 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | Closing plugin on server side
	I0318 14:10:39.517924 1163548 main.go:141] libmachine: Making call to close driver server
	I0318 14:10:39.517955 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .Close
	I0318 14:10:39.518249 1163548 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:10:39.518279 1163548 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:10:39.518282 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | Closing plugin on server side
	I0318 14:10:39.960906 1163548 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.523135193s)
	I0318 14:10:39.960962 1163548 main.go:141] libmachine: Making call to close driver server
	I0318 14:10:39.960976 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .Close
	I0318 14:10:39.961337 1163548 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:10:39.961355 1163548 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:10:39.961365 1163548 main.go:141] libmachine: Making call to close driver server
	I0318 14:10:39.961374 1163548 main.go:141] libmachine: (newest-cni-572909) Calling .Close
	I0318 14:10:39.961741 1163548 main.go:141] libmachine: (newest-cni-572909) DBG | Closing plugin on server side
	I0318 14:10:39.961783 1163548 main.go:141] libmachine: Successfully made call to close driver server
	I0318 14:10:39.961803 1163548 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 14:10:39.963412 1163548 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-572909 addons enable metrics-server
	
	I0318 14:10:39.964660 1163548 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	
	
	==> CRI-O <==
	Mar 18 14:10:40 embed-certs-173036 crio[703]: time="2024-03-18 14:10:40.428112000Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710771040428084785,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f7f142ff-2277-4133-a9b1-5c9fe881508e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:10:40 embed-certs-173036 crio[703]: time="2024-03-18 14:10:40.429125649Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=15d485e7-dca6-4980-a922-00fc600101fa name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:10:40 embed-certs-173036 crio[703]: time="2024-03-18 14:10:40.429203393Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=15d485e7-dca6-4980-a922-00fc600101fa name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:10:40 embed-certs-173036 crio[703]: time="2024-03-18 14:10:40.429389133Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e02b6a06cd9de57f8956e47d537c1911e3b686e4fe2f89b4cc0c330ec1395f50,PodSandboxId:8598ab81f3a8427b711e7a1eb9665291041e829e7396d2cead720e12bc10d1b1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710770152507618853,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a37883b5-9db5-467e-9b91-40f6ea69c18e,},Annotations:map[string]string{io.kubernetes.container.hash: 95705045,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5173e72f5aab4110f03ee50da1976455c04643d418d54191acae86add65becd3,PodSandboxId:23e47667ab7843fb87da468633568353cbb824230b0d84cb4dd962b3abb2b486,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710770150779901140,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-p6dw8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c03d9bbe-1493-44a4-be19-1e387ff6eaef,},Annotations:map[string]string{io.kubernetes.container.hash: a8b3ec08,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4097ab56fa719a27b89134e102ccb754f1f194d20353b3108c29a5582927375,PodSandboxId:6317fb6e686a84ddf5476c0c417b40126f5bf2c096ad0eb7a725f6f8aa5a68ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710770150676922684,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ft594,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
6e6863a-0b5e-434e-b13c-d33e9ed15007,},Annotations:map[string]string{io.kubernetes.container.hash: 44abca6c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:022a13544265e467f4f462adc1005aa84022e97ff5734ca437819670e1af1bda,PodSandboxId:6726d05ea7e5c674d0fb21521976183b86f315ea995da3d20a353c1939ca0b95,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt
:1710770150053954817,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lp9mc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d2d1ef6-fb3b-4910-9e70-401dfa0c47e0,},Annotations:map[string]string{io.kubernetes.container.hash: 260715c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6cc0bc6c9c31812445f19787fe9f10103cd041c632e5f3945c5fe4cb587d6e1,PodSandboxId:f065d53892d2215a73430064e91abed3fa14787a99d4bfab559b65f20111bade,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710770130629900843,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-173036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d43b88f75cc44c2f6b3982f84506c72,},Annotations:map[string]string{io.kubernetes.container.hash: 6a7d40b9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a7d442d079ffad19b9385885b4c5a7d9eedf130c90054839c5f08a863691d34,PodSandboxId:98404eeb33987d5af87d7be090feb2210fa93f68ce1621c8cf80a44bb678eccb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710770130575394174,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-173036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1fe59f7fd07c3ccedb94350e669b24c,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53c62f000548b89c9e7e7de7d9dc07271f9eef1a594a4a31410f4f8c52db6e80,PodSandboxId:4e716be2db37fa0e6e908365f559d89f328efdb578f03d419e35d47262b7f700,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710770130500607549,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-173036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58c3964d6ce26299f6adbb6721a7ed34,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9be9273c191dc4513928989b1472edb1147c687cd9e23e33206970ae0d9feb9,PodSandboxId:e080a747a34c0158245305db6f72fc50802b05b35b1f20830d7f758acecdb974,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710770130491047482,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-173036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 884409b4f61232bbd76d8c1825cec4d1,},Annotations:map[string]string{io.kubernetes.container.hash: 248f3412,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=15d485e7-dca6-4980-a922-00fc600101fa name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:10:40 embed-certs-173036 crio[703]: time="2024-03-18 14:10:40.486398109Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ff021fa3-7382-41f9-aa6c-39125c964865 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:10:40 embed-certs-173036 crio[703]: time="2024-03-18 14:10:40.486594342Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ff021fa3-7382-41f9-aa6c-39125c964865 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:10:40 embed-certs-173036 crio[703]: time="2024-03-18 14:10:40.488004334Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=14f45ad6-5472-4a9c-82c9-7802687b8678 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:10:40 embed-certs-173036 crio[703]: time="2024-03-18 14:10:40.488489185Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710771040488465663,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=14f45ad6-5472-4a9c-82c9-7802687b8678 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:10:40 embed-certs-173036 crio[703]: time="2024-03-18 14:10:40.489037860Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=efe6f5fb-4348-4448-a6fe-6d8c87eeefef name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:10:40 embed-certs-173036 crio[703]: time="2024-03-18 14:10:40.489117619Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=efe6f5fb-4348-4448-a6fe-6d8c87eeefef name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:10:40 embed-certs-173036 crio[703]: time="2024-03-18 14:10:40.489449684Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e02b6a06cd9de57f8956e47d537c1911e3b686e4fe2f89b4cc0c330ec1395f50,PodSandboxId:8598ab81f3a8427b711e7a1eb9665291041e829e7396d2cead720e12bc10d1b1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710770152507618853,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a37883b5-9db5-467e-9b91-40f6ea69c18e,},Annotations:map[string]string{io.kubernetes.container.hash: 95705045,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5173e72f5aab4110f03ee50da1976455c04643d418d54191acae86add65becd3,PodSandboxId:23e47667ab7843fb87da468633568353cbb824230b0d84cb4dd962b3abb2b486,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710770150779901140,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-p6dw8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c03d9bbe-1493-44a4-be19-1e387ff6eaef,},Annotations:map[string]string{io.kubernetes.container.hash: a8b3ec08,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4097ab56fa719a27b89134e102ccb754f1f194d20353b3108c29a5582927375,PodSandboxId:6317fb6e686a84ddf5476c0c417b40126f5bf2c096ad0eb7a725f6f8aa5a68ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710770150676922684,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ft594,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
6e6863a-0b5e-434e-b13c-d33e9ed15007,},Annotations:map[string]string{io.kubernetes.container.hash: 44abca6c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:022a13544265e467f4f462adc1005aa84022e97ff5734ca437819670e1af1bda,PodSandboxId:6726d05ea7e5c674d0fb21521976183b86f315ea995da3d20a353c1939ca0b95,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt
:1710770150053954817,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lp9mc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d2d1ef6-fb3b-4910-9e70-401dfa0c47e0,},Annotations:map[string]string{io.kubernetes.container.hash: 260715c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6cc0bc6c9c31812445f19787fe9f10103cd041c632e5f3945c5fe4cb587d6e1,PodSandboxId:f065d53892d2215a73430064e91abed3fa14787a99d4bfab559b65f20111bade,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710770130629900843,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-173036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d43b88f75cc44c2f6b3982f84506c72,},Annotations:map[string]string{io.kubernetes.container.hash: 6a7d40b9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a7d442d079ffad19b9385885b4c5a7d9eedf130c90054839c5f08a863691d34,PodSandboxId:98404eeb33987d5af87d7be090feb2210fa93f68ce1621c8cf80a44bb678eccb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710770130575394174,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-173036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1fe59f7fd07c3ccedb94350e669b24c,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53c62f000548b89c9e7e7de7d9dc07271f9eef1a594a4a31410f4f8c52db6e80,PodSandboxId:4e716be2db37fa0e6e908365f559d89f328efdb578f03d419e35d47262b7f700,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710770130500607549,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-173036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58c3964d6ce26299f6adbb6721a7ed34,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9be9273c191dc4513928989b1472edb1147c687cd9e23e33206970ae0d9feb9,PodSandboxId:e080a747a34c0158245305db6f72fc50802b05b35b1f20830d7f758acecdb974,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710770130491047482,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-173036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 884409b4f61232bbd76d8c1825cec4d1,},Annotations:map[string]string{io.kubernetes.container.hash: 248f3412,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=efe6f5fb-4348-4448-a6fe-6d8c87eeefef name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:10:40 embed-certs-173036 crio[703]: time="2024-03-18 14:10:40.534724212Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fbb5bd02-0943-43c0-aa4f-afff297272ce name=/runtime.v1.RuntimeService/Version
	Mar 18 14:10:40 embed-certs-173036 crio[703]: time="2024-03-18 14:10:40.534828956Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fbb5bd02-0943-43c0-aa4f-afff297272ce name=/runtime.v1.RuntimeService/Version
	Mar 18 14:10:40 embed-certs-173036 crio[703]: time="2024-03-18 14:10:40.536412250Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c9192966-92dd-4f5b-b5de-049f04e9df51 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:10:40 embed-certs-173036 crio[703]: time="2024-03-18 14:10:40.537231804Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710771040537199250,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c9192966-92dd-4f5b-b5de-049f04e9df51 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:10:40 embed-certs-173036 crio[703]: time="2024-03-18 14:10:40.537894410Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=17170f53-741b-432f-891a-d0c2fd14e51b name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:10:40 embed-certs-173036 crio[703]: time="2024-03-18 14:10:40.538112742Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=17170f53-741b-432f-891a-d0c2fd14e51b name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:10:40 embed-certs-173036 crio[703]: time="2024-03-18 14:10:40.538331255Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e02b6a06cd9de57f8956e47d537c1911e3b686e4fe2f89b4cc0c330ec1395f50,PodSandboxId:8598ab81f3a8427b711e7a1eb9665291041e829e7396d2cead720e12bc10d1b1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710770152507618853,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a37883b5-9db5-467e-9b91-40f6ea69c18e,},Annotations:map[string]string{io.kubernetes.container.hash: 95705045,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5173e72f5aab4110f03ee50da1976455c04643d418d54191acae86add65becd3,PodSandboxId:23e47667ab7843fb87da468633568353cbb824230b0d84cb4dd962b3abb2b486,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710770150779901140,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-p6dw8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c03d9bbe-1493-44a4-be19-1e387ff6eaef,},Annotations:map[string]string{io.kubernetes.container.hash: a8b3ec08,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4097ab56fa719a27b89134e102ccb754f1f194d20353b3108c29a5582927375,PodSandboxId:6317fb6e686a84ddf5476c0c417b40126f5bf2c096ad0eb7a725f6f8aa5a68ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710770150676922684,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ft594,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
6e6863a-0b5e-434e-b13c-d33e9ed15007,},Annotations:map[string]string{io.kubernetes.container.hash: 44abca6c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:022a13544265e467f4f462adc1005aa84022e97ff5734ca437819670e1af1bda,PodSandboxId:6726d05ea7e5c674d0fb21521976183b86f315ea995da3d20a353c1939ca0b95,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt
:1710770150053954817,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lp9mc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d2d1ef6-fb3b-4910-9e70-401dfa0c47e0,},Annotations:map[string]string{io.kubernetes.container.hash: 260715c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6cc0bc6c9c31812445f19787fe9f10103cd041c632e5f3945c5fe4cb587d6e1,PodSandboxId:f065d53892d2215a73430064e91abed3fa14787a99d4bfab559b65f20111bade,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710770130629900843,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-173036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d43b88f75cc44c2f6b3982f84506c72,},Annotations:map[string]string{io.kubernetes.container.hash: 6a7d40b9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a7d442d079ffad19b9385885b4c5a7d9eedf130c90054839c5f08a863691d34,PodSandboxId:98404eeb33987d5af87d7be090feb2210fa93f68ce1621c8cf80a44bb678eccb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710770130575394174,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-173036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1fe59f7fd07c3ccedb94350e669b24c,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53c62f000548b89c9e7e7de7d9dc07271f9eef1a594a4a31410f4f8c52db6e80,PodSandboxId:4e716be2db37fa0e6e908365f559d89f328efdb578f03d419e35d47262b7f700,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710770130500607549,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-173036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58c3964d6ce26299f6adbb6721a7ed34,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9be9273c191dc4513928989b1472edb1147c687cd9e23e33206970ae0d9feb9,PodSandboxId:e080a747a34c0158245305db6f72fc50802b05b35b1f20830d7f758acecdb974,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710770130491047482,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-173036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 884409b4f61232bbd76d8c1825cec4d1,},Annotations:map[string]string{io.kubernetes.container.hash: 248f3412,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=17170f53-741b-432f-891a-d0c2fd14e51b name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:10:40 embed-certs-173036 crio[703]: time="2024-03-18 14:10:40.576916401Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ce9beedf-ba58-4091-a753-71eed38f2006 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:10:40 embed-certs-173036 crio[703]: time="2024-03-18 14:10:40.577025309Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ce9beedf-ba58-4091-a753-71eed38f2006 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:10:40 embed-certs-173036 crio[703]: time="2024-03-18 14:10:40.578669221Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fa2ab969-073b-43ca-adcc-b27cf27561b7 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:10:40 embed-certs-173036 crio[703]: time="2024-03-18 14:10:40.579134737Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710771040579110903,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fa2ab969-073b-43ca-adcc-b27cf27561b7 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:10:40 embed-certs-173036 crio[703]: time="2024-03-18 14:10:40.579642194Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=81a8ea30-a521-4d52-9494-488352f9ead1 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:10:40 embed-certs-173036 crio[703]: time="2024-03-18 14:10:40.579722084Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=81a8ea30-a521-4d52-9494-488352f9ead1 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:10:40 embed-certs-173036 crio[703]: time="2024-03-18 14:10:40.580123023Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e02b6a06cd9de57f8956e47d537c1911e3b686e4fe2f89b4cc0c330ec1395f50,PodSandboxId:8598ab81f3a8427b711e7a1eb9665291041e829e7396d2cead720e12bc10d1b1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710770152507618853,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a37883b5-9db5-467e-9b91-40f6ea69c18e,},Annotations:map[string]string{io.kubernetes.container.hash: 95705045,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5173e72f5aab4110f03ee50da1976455c04643d418d54191acae86add65becd3,PodSandboxId:23e47667ab7843fb87da468633568353cbb824230b0d84cb4dd962b3abb2b486,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710770150779901140,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-p6dw8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c03d9bbe-1493-44a4-be19-1e387ff6eaef,},Annotations:map[string]string{io.kubernetes.container.hash: a8b3ec08,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4097ab56fa719a27b89134e102ccb754f1f194d20353b3108c29a5582927375,PodSandboxId:6317fb6e686a84ddf5476c0c417b40126f5bf2c096ad0eb7a725f6f8aa5a68ab,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710770150676922684,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ft594,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4
6e6863a-0b5e-434e-b13c-d33e9ed15007,},Annotations:map[string]string{io.kubernetes.container.hash: 44abca6c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:022a13544265e467f4f462adc1005aa84022e97ff5734ca437819670e1af1bda,PodSandboxId:6726d05ea7e5c674d0fb21521976183b86f315ea995da3d20a353c1939ca0b95,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt
:1710770150053954817,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-lp9mc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4d2d1ef6-fb3b-4910-9e70-401dfa0c47e0,},Annotations:map[string]string{io.kubernetes.container.hash: 260715c5,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6cc0bc6c9c31812445f19787fe9f10103cd041c632e5f3945c5fe4cb587d6e1,PodSandboxId:f065d53892d2215a73430064e91abed3fa14787a99d4bfab559b65f20111bade,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710770130629900843,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-173036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d43b88f75cc44c2f6b3982f84506c72,},Annotations:map[string]string{io.kubernetes.container.hash: 6a7d40b9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6a7d442d079ffad19b9385885b4c5a7d9eedf130c90054839c5f08a863691d34,PodSandboxId:98404eeb33987d5af87d7be090feb2210fa93f68ce1621c8cf80a44bb678eccb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710770130575394174,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-173036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1fe59f7fd07c3ccedb94350e669b24c,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53c62f000548b89c9e7e7de7d9dc07271f9eef1a594a4a31410f4f8c52db6e80,PodSandboxId:4e716be2db37fa0e6e908365f559d89f328efdb578f03d419e35d47262b7f700,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710770130500607549,Labels:map[string]string{io.kubernetes.container.name
: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-173036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 58c3964d6ce26299f6adbb6721a7ed34,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f9be9273c191dc4513928989b1472edb1147c687cd9e23e33206970ae0d9feb9,PodSandboxId:e080a747a34c0158245305db6f72fc50802b05b35b1f20830d7f758acecdb974,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710770130491047482,Labels:map[string]string{io.kubernetes.containe
r.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-173036,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 884409b4f61232bbd76d8c1825cec4d1,},Annotations:map[string]string{io.kubernetes.container.hash: 248f3412,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=81a8ea30-a521-4d52-9494-488352f9ead1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e02b6a06cd9de       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   8598ab81f3a84       storage-provisioner
	5173e72f5aab4       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   14 minutes ago      Running             coredns                   0                   23e47667ab784       coredns-5dd5756b68-p6dw8
	e4097ab56fa71       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   14 minutes ago      Running             coredns                   0                   6317fb6e686a8       coredns-5dd5756b68-ft594
	022a13544265e       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   14 minutes ago      Running             kube-proxy                0                   6726d05ea7e5c       kube-proxy-lp9mc
	b6cc0bc6c9c31       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   15 minutes ago      Running             etcd                      2                   f065d53892d22       etcd-embed-certs-173036
	6a7d442d079ff       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   15 minutes ago      Running             kube-scheduler            2                   98404eeb33987       kube-scheduler-embed-certs-173036
	53c62f000548b       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   15 minutes ago      Running             kube-controller-manager   2                   4e716be2db37f       kube-controller-manager-embed-certs-173036
	f9be9273c191d       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   15 minutes ago      Running             kube-apiserver            2                   e080a747a34c0       kube-apiserver-embed-certs-173036
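	
	For reference, a minimal sketch of how a listing like the one above could be reproduced against the node's CRI-O runtime (assuming the embed-certs-173036 guest is still up; crictl is the CRI CLI available inside the minikube VM):
	
		minikube -p embed-certs-173036 ssh -- sudo crictl ps -a
		minikube -p embed-certs-173036 ssh -- sudo crictl pods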
	
	
	==> coredns [5173e72f5aab4110f03ee50da1976455c04643d418d54191acae86add65becd3] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> coredns [e4097ab56fa719a27b89134e102ccb754f1f194d20353b3108c29a5582927375] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> describe nodes <==
	Name:               embed-certs-173036
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-173036
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a
	                    minikube.k8s.io/name=embed-certs-173036
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_18T13_55_37_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 18 Mar 2024 13:55:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-173036
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 18 Mar 2024 14:10:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 18 Mar 2024 14:06:10 +0000   Mon, 18 Mar 2024 13:55:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 18 Mar 2024 14:06:10 +0000   Mon, 18 Mar 2024 13:55:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 18 Mar 2024 14:06:10 +0000   Mon, 18 Mar 2024 13:55:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 18 Mar 2024 14:06:10 +0000   Mon, 18 Mar 2024 13:55:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.191
	  Hostname:    embed-certs-173036
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 ab4d497731f64ae3afe9166b2b2e858b
	  System UUID:                ab4d4977-31f6-4ae3-afe9-166b2b2e858b
	  Boot ID:                    9b16ec79-d866-4ab9-9745-eea184e72bf3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-ft594                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-5dd5756b68-p6dw8                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-embed-certs-173036                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-embed-certs-173036             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-embed-certs-173036    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-lp9mc                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-embed-certs-173036             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-57f55c9bc5-vzv79               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node embed-certs-173036 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node embed-certs-173036 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node embed-certs-173036 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             15m   kubelet          Node embed-certs-173036 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                14m   kubelet          Node embed-certs-173036 status is now: NodeReady
	  Normal  RegisteredNode           14m   node-controller  Node embed-certs-173036 event: Registered Node embed-certs-173036 in Controller
	
	
	==> dmesg <==
	[  +0.059560] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.050095] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.869450] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.643233] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.762153] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.316194] systemd-fstab-generator[626]: Ignoring "noauto" option for root device
	[  +0.056970] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066687] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.186881] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.160086] systemd-fstab-generator[665]: Ignoring "noauto" option for root device
	[  +0.303130] systemd-fstab-generator[690]: Ignoring "noauto" option for root device
	[  +5.700271] systemd-fstab-generator[786]: Ignoring "noauto" option for root device
	[  +0.063914] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.149728] systemd-fstab-generator[909]: Ignoring "noauto" option for root device
	[  +5.642199] kauditd_printk_skb: 97 callbacks suppressed
	[  +6.612247] kauditd_printk_skb: 72 callbacks suppressed
	[Mar18 13:55] kauditd_printk_skb: 3 callbacks suppressed
	[  +1.945332] systemd-fstab-generator[3416]: Ignoring "noauto" option for root device
	[  +7.793308] systemd-fstab-generator[3740]: Ignoring "noauto" option for root device
	[  +0.080373] kauditd_printk_skb: 57 callbacks suppressed
	[ +12.897004] systemd-fstab-generator[3939]: Ignoring "noauto" option for root device
	[  +0.110833] kauditd_printk_skb: 12 callbacks suppressed
	[Mar18 13:56] kauditd_printk_skb: 80 callbacks suppressed
	
	
	==> etcd [b6cc0bc6c9c31812445f19787fe9f10103cd041c632e5f3945c5fe4cb587d6e1] <==
	{"level":"info","ts":"2024-03-18T13:55:31.291996Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-18T13:55:31.292024Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T13:55:31.299105Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.191:2379"}
	{"level":"info","ts":"2024-03-18T13:55:31.303654Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-18T13:55:31.309468Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-18T13:55:31.304649Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-18T13:55:31.34258Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-18T14:05:32.005222Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":675}
	{"level":"info","ts":"2024-03-18T14:05:32.007958Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":675,"took":"2.004072ms","hash":222660506}
	{"level":"info","ts":"2024-03-18T14:05:32.008054Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":222660506,"revision":675,"compact-revision":-1}
	{"level":"info","ts":"2024-03-18T14:09:21.324985Z","caller":"traceutil/trace.go:171","msg":"trace[258228606] transaction","detail":"{read_only:false; response_revision:1105; number_of_response:1; }","duration":"141.501218ms","start":"2024-03-18T14:09:21.183425Z","end":"2024-03-18T14:09:21.324927Z","steps":["trace[258228606] 'process raft request'  (duration: 140.996899ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T14:09:21.876027Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.191341ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7894966628957304950 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1104 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-18T14:09:21.876214Z","caller":"traceutil/trace.go:171","msg":"trace[112016806] linearizableReadLoop","detail":"{readStateIndex:1285; appliedIndex:1284; }","duration":"365.134043ms","start":"2024-03-18T14:09:21.51106Z","end":"2024-03-18T14:09:21.876194Z","steps":["trace[112016806] 'read index received'  (duration: 234.833076ms)","trace[112016806] 'applied index is now lower than readState.Index'  (duration: 130.299427ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-18T14:09:21.876279Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"365.233135ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-18T14:09:21.876345Z","caller":"traceutil/trace.go:171","msg":"trace[2101182150] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1106; }","duration":"365.285239ms","start":"2024-03-18T14:09:21.511036Z","end":"2024-03-18T14:09:21.876321Z","steps":["trace[2101182150] 'agreement among raft nodes before linearized reading'  (duration: 365.206348ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T14:09:21.876413Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T14:09:21.511023Z","time spent":"365.376588ms","remote":"127.0.0.1:44626","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":0,"response size":28,"request content":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" "}
	{"level":"info","ts":"2024-03-18T14:09:21.876368Z","caller":"traceutil/trace.go:171","msg":"trace[1444729847] transaction","detail":"{read_only:false; response_revision:1106; number_of_response:1; }","duration":"545.905982ms","start":"2024-03-18T14:09:21.330396Z","end":"2024-03-18T14:09:21.876302Z","steps":["trace[1444729847] 'process raft request'  (duration: 415.550372ms)","trace[1444729847] 'compare'  (duration: 127.907804ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-18T14:09:21.876719Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-18T14:09:21.330376Z","time spent":"546.26058ms","remote":"127.0.0.1:44602","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1104 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-03-18T14:09:58.276012Z","caller":"traceutil/trace.go:171","msg":"trace[734429679] transaction","detail":"{read_only:false; response_revision:1135; number_of_response:1; }","duration":"179.338441ms","start":"2024-03-18T14:09:58.096625Z","end":"2024-03-18T14:09:58.275963Z","steps":["trace[734429679] 'process raft request'  (duration: 179.223503ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T14:10:28.605327Z","caller":"traceutil/trace.go:171","msg":"trace[131338744] transaction","detail":"{read_only:false; response_revision:1159; number_of_response:1; }","duration":"130.712248ms","start":"2024-03-18T14:10:28.474582Z","end":"2024-03-18T14:10:28.605294Z","steps":["trace[131338744] 'process raft request'  (duration: 130.1629ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-18T14:10:28.86079Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"195.929769ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/prioritylevelconfigurations/\" range_end:\"/registry/prioritylevelconfigurations0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-03-18T14:10:28.860912Z","caller":"traceutil/trace.go:171","msg":"trace[367334971] range","detail":"{range_begin:/registry/prioritylevelconfigurations/; range_end:/registry/prioritylevelconfigurations0; response_count:0; response_revision:1159; }","duration":"196.529818ms","start":"2024-03-18T14:10:28.664355Z","end":"2024-03-18T14:10:28.860885Z","steps":["trace[367334971] 'count revisions from in-memory index tree'  (duration: 195.852022ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-18T14:10:32.012381Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":919}
	{"level":"info","ts":"2024-03-18T14:10:32.015084Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":919,"took":"2.082847ms","hash":1877803735}
	{"level":"info","ts":"2024-03-18T14:10:32.015173Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1877803735,"revision":919,"compact-revision":675}
	
	
	==> kernel <==
	 14:10:41 up 20 min,  0 users,  load average: 0.47, 0.16, 0.12
	Linux embed-certs-173036 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [f9be9273c191dc4513928989b1472edb1147c687cd9e23e33206970ae0d9feb9] <==
	I0318 14:08:33.605148       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0318 14:08:34.743376       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:08:34.743499       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0318 14:08:34.743657       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0318 14:08:34.744993       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:08:34.745158       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 14:08:34.745205       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0318 14:09:21.878917       1 trace.go:236] Trace[1517510529]: "Update" accept:application/json, */*,audit-id:9f12f3ab-4fb2-4bf7-aaa2-007957817a6b,client:192.168.50.191,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (18-Mar-2024 14:09:21.328) (total time: 550ms):
	Trace[1517510529]: ["GuaranteedUpdate etcd3" audit-id:9f12f3ab-4fb2-4bf7-aaa2-007957817a6b,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 550ms (14:09:21.328)
	Trace[1517510529]:  ---"Txn call completed" 548ms (14:09:21.878)]
	Trace[1517510529]: [550.355581ms] [550.355581ms] END
	I0318 14:09:33.605450       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0318 14:10:33.605164       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0318 14:10:33.748382       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:10:33.748689       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 14:10:33.749134       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0318 14:10:34.749637       1 handler_proxy.go:93] no RequestInfo found in the context
	W0318 14:10:34.749650       1 handler_proxy.go:93] no RequestInfo found in the context
	E0318 14:10:34.749877       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0318 14:10:34.749890       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0318 14:10:34.749934       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0318 14:10:34.751994       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [53c62f000548b89c9e7e7de7d9dc07271f9eef1a594a4a31410f4f8c52db6e80] <==
	I0318 14:04:49.295611       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:05:18.800667       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:05:19.304171       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:05:48.807242       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:05:49.314209       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:06:18.812761       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:06:19.325039       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:06:48.822691       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:06:49.335197       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0318 14:07:07.050413       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="408.273µs"
	E0318 14:07:18.829626       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:07:19.344138       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0318 14:07:20.041623       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="158.342µs"
	E0318 14:07:48.835981       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:07:49.352728       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:08:18.842775       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:08:19.363369       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:08:48.852362       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:08:49.375193       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:09:18.860389       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:09:19.388090       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:09:48.867613       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:09:49.403748       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0318 14:10:18.873944       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0318 14:10:19.413496       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [022a13544265e467f4f462adc1005aa84022e97ff5734ca437819670e1af1bda] <==
	I0318 13:55:51.064135       1 server_others.go:69] "Using iptables proxy"
	I0318 13:55:51.424719       1 node.go:141] Successfully retrieved node IP: 192.168.50.191
	I0318 13:55:51.606939       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0318 13:55:51.606994       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0318 13:55:51.610882       1 server_others.go:152] "Using iptables Proxier"
	I0318 13:55:51.612260       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0318 13:55:51.612474       1 server.go:846] "Version info" version="v1.28.4"
	I0318 13:55:51.612610       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0318 13:55:51.616069       1 config.go:315] "Starting node config controller"
	I0318 13:55:51.617201       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0318 13:55:51.623684       1 config.go:188] "Starting service config controller"
	I0318 13:55:51.623694       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0318 13:55:51.623710       1 config.go:97] "Starting endpoint slice config controller"
	I0318 13:55:51.623713       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0318 13:55:51.717744       1 shared_informer.go:318] Caches are synced for node config
	I0318 13:55:51.723828       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0318 13:55:51.723863       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [6a7d442d079ffad19b9385885b4c5a7d9eedf130c90054839c5f08a863691d34] <==
	W0318 13:55:33.755810       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0318 13:55:33.755818       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0318 13:55:34.578006       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0318 13:55:34.578151       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0318 13:55:34.670886       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0318 13:55:34.670938       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0318 13:55:34.711898       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0318 13:55:34.711955       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0318 13:55:34.735975       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0318 13:55:34.736104       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0318 13:55:34.822295       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0318 13:55:34.822384       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0318 13:55:34.902487       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0318 13:55:34.902626       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0318 13:55:34.910155       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0318 13:55:34.910300       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0318 13:55:34.962110       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0318 13:55:34.962241       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0318 13:55:35.007697       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0318 13:55:35.007811       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0318 13:55:35.038237       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0318 13:55:35.038617       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0318 13:55:35.259434       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0318 13:55:35.259639       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0318 13:55:38.144473       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 18 14:08:37 embed-certs-173036 kubelet[3747]: E0318 14:08:37.024609    3747 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vzv79" podUID="1fc71314-b3e7-4113-b254-557ec39eef43"
	Mar 18 14:08:37 embed-certs-173036 kubelet[3747]: E0318 14:08:37.111420    3747 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 14:08:37 embed-certs-173036 kubelet[3747]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 14:08:37 embed-certs-173036 kubelet[3747]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 14:08:37 embed-certs-173036 kubelet[3747]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 14:08:37 embed-certs-173036 kubelet[3747]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 14:08:48 embed-certs-173036 kubelet[3747]: E0318 14:08:48.023628    3747 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vzv79" podUID="1fc71314-b3e7-4113-b254-557ec39eef43"
	Mar 18 14:08:59 embed-certs-173036 kubelet[3747]: E0318 14:08:59.024037    3747 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vzv79" podUID="1fc71314-b3e7-4113-b254-557ec39eef43"
	Mar 18 14:09:14 embed-certs-173036 kubelet[3747]: E0318 14:09:14.024591    3747 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vzv79" podUID="1fc71314-b3e7-4113-b254-557ec39eef43"
	Mar 18 14:09:26 embed-certs-173036 kubelet[3747]: E0318 14:09:26.025206    3747 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vzv79" podUID="1fc71314-b3e7-4113-b254-557ec39eef43"
	Mar 18 14:09:37 embed-certs-173036 kubelet[3747]: E0318 14:09:37.112996    3747 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 14:09:37 embed-certs-173036 kubelet[3747]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 14:09:37 embed-certs-173036 kubelet[3747]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 14:09:37 embed-certs-173036 kubelet[3747]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 14:09:37 embed-certs-173036 kubelet[3747]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 18 14:09:40 embed-certs-173036 kubelet[3747]: E0318 14:09:40.023711    3747 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vzv79" podUID="1fc71314-b3e7-4113-b254-557ec39eef43"
	Mar 18 14:09:52 embed-certs-173036 kubelet[3747]: E0318 14:09:52.023319    3747 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vzv79" podUID="1fc71314-b3e7-4113-b254-557ec39eef43"
	Mar 18 14:10:06 embed-certs-173036 kubelet[3747]: E0318 14:10:06.023839    3747 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vzv79" podUID="1fc71314-b3e7-4113-b254-557ec39eef43"
	Mar 18 14:10:20 embed-certs-173036 kubelet[3747]: E0318 14:10:20.024275    3747 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vzv79" podUID="1fc71314-b3e7-4113-b254-557ec39eef43"
	Mar 18 14:10:31 embed-certs-173036 kubelet[3747]: E0318 14:10:31.028283    3747 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vzv79" podUID="1fc71314-b3e7-4113-b254-557ec39eef43"
	Mar 18 14:10:37 embed-certs-173036 kubelet[3747]: E0318 14:10:37.112657    3747 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 18 14:10:37 embed-certs-173036 kubelet[3747]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 18 14:10:37 embed-certs-173036 kubelet[3747]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 18 14:10:37 embed-certs-173036 kubelet[3747]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 18 14:10:37 embed-certs-173036 kubelet[3747]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [e02b6a06cd9de57f8956e47d537c1911e3b686e4fe2f89b4cc0c330ec1395f50] <==
	I0318 13:55:52.654203       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0318 13:55:52.667874       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0318 13:55:52.668004       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0318 13:55:52.679133       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0318 13:55:52.679787       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"91f8549e-c7c4-4c23-8b09-71ff2f50ff8e", APIVersion:"v1", ResourceVersion:"416", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-173036_4b411dd9-13be-42e9-9d2c-c400698f3785 became leader
	I0318 13:55:52.679906       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-173036_4b411dd9-13be-42e9-9d2c-c400698f3785!
	I0318 13:55:52.780705       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-173036_4b411dd9-13be-42e9-9d2c-c400698f3785!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-173036 -n embed-certs-173036
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-173036 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-vzv79
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-173036 describe pod metrics-server-57f55c9bc5-vzv79
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-173036 describe pod metrics-server-57f55c9bc5-vzv79: exit status 1 (75.837527ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-vzv79" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-173036 describe pod metrics-server-57f55c9bc5-vzv79: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (343.49s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (105.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.135:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.135:8443: connect: connection refused
[the identical WARNING above was logged 29 more times while the apiserver at 192.168.72.135:8443 continued to refuse connections]
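The connection-refused polling above is the test helper repeatedly listing pods in the kubernetes-dashboard namespace with the k8s-app=kubernetes-dashboard label selector against the profile's apiserver. As an illustration only (not part of the recorded run), a roughly equivalent manual check with the same context, namespace, and selector would be:

	kubectl --context old-k8s-version-909137 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

While the apiserver at 192.168.72.135:8443 is down, this fails with the same "connection refused" error seen in the WARNING lines.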
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-909137 -n old-k8s-version-909137
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-909137 -n old-k8s-version-909137: exit status 2 (255.368199ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-909137" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-909137 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-909137 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.203µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-909137 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
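Since the status check above reports the apiserver as Stopped, a hedged triage step (not something the suite runs) is to look for the kube-apiserver container directly on the node over SSH; this assumes crictl is available inside the VM, which is normally the case for CRI-O based profiles:

	out/minikube-linux-amd64 -p old-k8s-version-909137 ssh "sudo crictl ps -a | grep kube-apiserver"

No running kube-apiserver container in that output would be consistent with the "Stopped" state and the connection-refused poll failures recorded above.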
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-909137 -n old-k8s-version-909137
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-909137 -n old-k8s-version-909137: exit status 2 (249.244619ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-909137 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-909137 logs -n 25: (1.519485668s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p old-k8s-version-909137                              | old-k8s-version-909137       | jenkins | v1.32.0 | 18 Mar 24 13:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-599578                           | kubernetes-upgrade-599578    | jenkins | v1.32.0 | 18 Mar 24 13:39 UTC | 18 Mar 24 13:39 UTC |
	| start   | -p no-preload-537236                                   | no-preload-537236            | jenkins | v1.32.0 | 18 Mar 24 13:39 UTC | 18 Mar 24 13:41 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p cert-expiration-537883                              | cert-expiration-537883       | jenkins | v1.32.0 | 18 Mar 24 13:40 UTC | 18 Mar 24 13:41 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p pause-760389                                        | pause-760389                 | jenkins | v1.32.0 | 18 Mar 24 13:40 UTC | 18 Mar 24 13:40 UTC |
	| start   | -p embed-certs-173036                                  | embed-certs-173036           | jenkins | v1.32.0 | 18 Mar 24 13:40 UTC | 18 Mar 24 13:41 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-537883                              | cert-expiration-537883       | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	| delete  | -p                                                     | disable-driver-mounts-173866 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	|         | disable-driver-mounts-173866                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-569210 | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:42 UTC |
	|         | default-k8s-diff-port-569210                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-173036            | embed-certs-173036           | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC | 18 Mar 24 13:41 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-173036                                  | embed-certs-173036           | jenkins | v1.32.0 | 18 Mar 24 13:41 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-537236             | no-preload-537236            | jenkins | v1.32.0 | 18 Mar 24 13:42 UTC | 18 Mar 24 13:42 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-537236                                   | no-preload-537236            | jenkins | v1.32.0 | 18 Mar 24 13:42 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-569210  | default-k8s-diff-port-569210 | jenkins | v1.32.0 | 18 Mar 24 13:43 UTC | 18 Mar 24 13:43 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-569210 | jenkins | v1.32.0 | 18 Mar 24 13:43 UTC |                     |
	|         | default-k8s-diff-port-569210                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-909137        | old-k8s-version-909137       | jenkins | v1.32.0 | 18 Mar 24 13:43 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-173036                 | embed-certs-173036           | jenkins | v1.32.0 | 18 Mar 24 13:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-173036                                  | embed-certs-173036           | jenkins | v1.32.0 | 18 Mar 24 13:44 UTC | 18 Mar 24 13:55 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-537236                  | no-preload-537236            | jenkins | v1.32.0 | 18 Mar 24 13:44 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-537236                                   | no-preload-537236            | jenkins | v1.32.0 | 18 Mar 24 13:44 UTC | 18 Mar 24 13:55 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-909137                              | old-k8s-version-909137       | jenkins | v1.32.0 | 18 Mar 24 13:45 UTC | 18 Mar 24 13:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-909137             | old-k8s-version-909137       | jenkins | v1.32.0 | 18 Mar 24 13:45 UTC | 18 Mar 24 13:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-909137                              | old-k8s-version-909137       | jenkins | v1.32.0 | 18 Mar 24 13:45 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-569210       | default-k8s-diff-port-569210 | jenkins | v1.32.0 | 18 Mar 24 13:45 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-569210 | jenkins | v1.32.0 | 18 Mar 24 13:45 UTC | 18 Mar 24 13:55 UTC |
	|         | default-k8s-diff-port-569210                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 13:45:41
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 13:45:41.667747 1157887 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:45:41.667937 1157887 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:45:41.667952 1157887 out.go:304] Setting ErrFile to fd 2...
	I0318 13:45:41.667958 1157887 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:45:41.668616 1157887 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
	I0318 13:45:41.669251 1157887 out.go:298] Setting JSON to false
	I0318 13:45:41.670283 1157887 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":19689,"bootTime":1710749853,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 13:45:41.670349 1157887 start.go:139] virtualization: kvm guest
	I0318 13:45:41.672702 1157887 out.go:177] * [default-k8s-diff-port-569210] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 13:45:41.674325 1157887 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 13:45:41.674336 1157887 notify.go:220] Checking for updates...
	I0318 13:45:41.675874 1157887 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 13:45:41.677543 1157887 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 13:45:41.679053 1157887 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18429-1106816/.minikube
	I0318 13:45:41.680344 1157887 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 13:45:41.681702 1157887 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 13:45:41.683304 1157887 config.go:182] Loaded profile config "default-k8s-diff-port-569210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:45:41.683743 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:45:41.683792 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:45:41.698719 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44147
	I0318 13:45:41.699154 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:45:41.699657 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:45:41.699676 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:45:41.699995 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:45:41.700168 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:45:41.700488 1157887 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 13:45:41.700763 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:45:41.700803 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:45:41.715824 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44459
	I0318 13:45:41.716270 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:45:41.716688 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:45:41.716708 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:45:41.717004 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:45:41.717185 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:45:41.747564 1157887 out.go:177] * Using the kvm2 driver based on existing profile
	I0318 13:45:41.748930 1157887 start.go:297] selected driver: kvm2
	I0318 13:45:41.748944 1157887 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-569210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-569210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.3 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:45:41.749059 1157887 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 13:45:41.749725 1157887 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:45:41.749819 1157887 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18429-1106816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 13:45:41.764225 1157887 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 13:45:41.764607 1157887 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:45:41.764679 1157887 cni.go:84] Creating CNI manager for ""
	I0318 13:45:41.764692 1157887 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:45:41.764727 1157887 start.go:340] cluster config:
	{Name:default-k8s-diff-port-569210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-569210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.3 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:45:41.764824 1157887 iso.go:125] acquiring lock: {Name:mke5f9989ad60de6f54f25c411af7da9f3932a4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 13:45:41.766561 1157887 out.go:177] * Starting "default-k8s-diff-port-569210" primary control-plane node in "default-k8s-diff-port-569210" cluster
	I0318 13:45:40.044635 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:45:41.767747 1157887 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 13:45:41.767779 1157887 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0318 13:45:41.767799 1157887 cache.go:56] Caching tarball of preloaded images
	I0318 13:45:41.767876 1157887 preload.go:173] Found /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0318 13:45:41.767887 1157887 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0318 13:45:41.767986 1157887 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/config.json ...
	I0318 13:45:41.768151 1157887 start.go:360] acquireMachinesLock for default-k8s-diff-port-569210: {Name:mk0b1a2e71faf079d0c16c4e1393bdff17be3dfd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:45:46.124607 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:45:49.196561 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:45:55.276657 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:45:58.348606 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:04.428632 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:07.500592 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:13.584558 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:16.652578 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:22.732573 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:25.804745 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:31.884579 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:34.956708 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:41.036614 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:44.108576 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:50.188610 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:53.260646 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:46:59.340724 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:02.412698 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:08.492603 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:11.564634 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:17.644618 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:20.716642 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:26.796585 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:29.868690 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:35.948613 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:39.020607 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:45.104563 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:48.172547 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:54.252608 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:47:57.324659 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:03.404600 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:06.476647 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:12.556609 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:15.628640 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:21.708597 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:24.780572 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:30.860662 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:33.932528 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:40.012616 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:43.084569 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:49.164622 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:52.236652 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:48:58.316619 1157263 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.191:22: connect: no route to host
	I0318 13:49:01.321139 1157416 start.go:364] duration metric: took 4m21.279664055s to acquireMachinesLock for "no-preload-537236"
	I0318 13:49:01.321252 1157416 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:49:01.321260 1157416 fix.go:54] fixHost starting: 
	I0318 13:49:01.321627 1157416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:49:01.321658 1157416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:49:01.337337 1157416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39431
	I0318 13:49:01.337793 1157416 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:49:01.338235 1157416 main.go:141] libmachine: Using API Version  1
	I0318 13:49:01.338262 1157416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:49:01.338703 1157416 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:49:01.338892 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:49:01.339025 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetState
	I0318 13:49:01.340630 1157416 fix.go:112] recreateIfNeeded on no-preload-537236: state=Stopped err=<nil>
	I0318 13:49:01.340653 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	W0318 13:49:01.340785 1157416 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:49:01.342565 1157416 out.go:177] * Restarting existing kvm2 VM for "no-preload-537236" ...
	I0318 13:49:01.318340 1157263 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:49:01.318378 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetMachineName
	I0318 13:49:01.318795 1157263 buildroot.go:166] provisioning hostname "embed-certs-173036"
	I0318 13:49:01.318829 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetMachineName
	I0318 13:49:01.319041 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:49:01.321007 1157263 machine.go:97] duration metric: took 4m37.382603693s to provisionDockerMachine
	I0318 13:49:01.321051 1157263 fix.go:56] duration metric: took 4m37.403420427s for fixHost
	I0318 13:49:01.321064 1157263 start.go:83] releasing machines lock for "embed-certs-173036", held for 4m37.403446357s
	W0318 13:49:01.321088 1157263 start.go:713] error starting host: provision: host is not running
	W0318 13:49:01.321225 1157263 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0318 13:49:01.321242 1157263 start.go:728] Will try again in 5 seconds ...
	I0318 13:49:01.343844 1157416 main.go:141] libmachine: (no-preload-537236) Calling .Start
	I0318 13:49:01.344003 1157416 main.go:141] libmachine: (no-preload-537236) Ensuring networks are active...
	I0318 13:49:01.344698 1157416 main.go:141] libmachine: (no-preload-537236) Ensuring network default is active
	I0318 13:49:01.345062 1157416 main.go:141] libmachine: (no-preload-537236) Ensuring network mk-no-preload-537236 is active
	I0318 13:49:01.345378 1157416 main.go:141] libmachine: (no-preload-537236) Getting domain xml...
	I0318 13:49:01.346073 1157416 main.go:141] libmachine: (no-preload-537236) Creating domain...
	I0318 13:49:02.522163 1157416 main.go:141] libmachine: (no-preload-537236) Waiting to get IP...
	I0318 13:49:02.522935 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:02.523347 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:02.523420 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:02.523327 1158392 retry.go:31] will retry after 276.248352ms: waiting for machine to come up
	I0318 13:49:02.800962 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:02.801439 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:02.801472 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:02.801381 1158392 retry.go:31] will retry after 318.94167ms: waiting for machine to come up
	I0318 13:49:03.121895 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:03.122276 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:03.122298 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:03.122254 1158392 retry.go:31] will retry after 353.742872ms: waiting for machine to come up
	I0318 13:49:03.477885 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:03.478401 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:03.478439 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:03.478360 1158392 retry.go:31] will retry after 481.537084ms: waiting for machine to come up
	I0318 13:49:03.960991 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:03.961432 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:03.961505 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:03.961416 1158392 retry.go:31] will retry after 647.244695ms: waiting for machine to come up
	I0318 13:49:04.610150 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:04.610563 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:04.610604 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:04.610512 1158392 retry.go:31] will retry after 577.22264ms: waiting for machine to come up
	I0318 13:49:06.321404 1157263 start.go:360] acquireMachinesLock for embed-certs-173036: {Name:mk0b1a2e71faf079d0c16c4e1393bdff17be3dfd Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0318 13:49:05.189300 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:05.189688 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:05.189722 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:05.189635 1158392 retry.go:31] will retry after 1.064347528s: waiting for machine to come up
	I0318 13:49:06.255734 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:06.256071 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:06.256103 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:06.256016 1158392 retry.go:31] will retry after 1.359025709s: waiting for machine to come up
	I0318 13:49:07.616847 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:07.617313 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:07.617338 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:07.617265 1158392 retry.go:31] will retry after 1.844112s: waiting for machine to come up
	I0318 13:49:09.464239 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:09.464761 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:09.464788 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:09.464703 1158392 retry.go:31] will retry after 1.984375986s: waiting for machine to come up
	I0318 13:49:11.450609 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:11.451100 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:11.451153 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:11.451037 1158392 retry.go:31] will retry after 1.944733714s: waiting for machine to come up
	I0318 13:49:13.397815 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:13.398238 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:13.398265 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:13.398190 1158392 retry.go:31] will retry after 2.44494826s: waiting for machine to come up
	I0318 13:49:15.845711 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:15.846169 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:15.846212 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:15.846128 1158392 retry.go:31] will retry after 2.760857339s: waiting for machine to come up
	I0318 13:49:18.609516 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:18.609917 1157416 main.go:141] libmachine: (no-preload-537236) DBG | unable to find current IP address of domain no-preload-537236 in network mk-no-preload-537236
	I0318 13:49:18.609942 1157416 main.go:141] libmachine: (no-preload-537236) DBG | I0318 13:49:18.609872 1158392 retry.go:31] will retry after 3.501792324s: waiting for machine to come up
	I0318 13:49:23.501689 1157708 start.go:364] duration metric: took 4m10.403284517s to acquireMachinesLock for "old-k8s-version-909137"
	I0318 13:49:23.501769 1157708 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:49:23.501783 1157708 fix.go:54] fixHost starting: 
	I0318 13:49:23.502238 1157708 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:49:23.502279 1157708 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:49:23.520223 1157708 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41799
	I0318 13:49:23.520696 1157708 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:49:23.521273 1157708 main.go:141] libmachine: Using API Version  1
	I0318 13:49:23.521304 1157708 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:49:23.521693 1157708 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:49:23.521934 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:49:23.522089 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetState
	I0318 13:49:23.523696 1157708 fix.go:112] recreateIfNeeded on old-k8s-version-909137: state=Stopped err=<nil>
	I0318 13:49:23.523738 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	W0318 13:49:23.523894 1157708 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:49:23.526253 1157708 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-909137" ...
	I0318 13:49:22.113291 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.113733 1157416 main.go:141] libmachine: (no-preload-537236) Found IP for machine: 192.168.39.7
	I0318 13:49:22.113753 1157416 main.go:141] libmachine: (no-preload-537236) Reserving static IP address...
	I0318 13:49:22.113787 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has current primary IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.114159 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "no-preload-537236", mac: "52:54:00:21:a8:12", ip: "192.168.39.7"} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.114179 1157416 main.go:141] libmachine: (no-preload-537236) DBG | skip adding static IP to network mk-no-preload-537236 - found existing host DHCP lease matching {name: "no-preload-537236", mac: "52:54:00:21:a8:12", ip: "192.168.39.7"}
	I0318 13:49:22.114192 1157416 main.go:141] libmachine: (no-preload-537236) Reserved static IP address: 192.168.39.7
	I0318 13:49:22.114201 1157416 main.go:141] libmachine: (no-preload-537236) Waiting for SSH to be available...
	I0318 13:49:22.114208 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Getting to WaitForSSH function...
	I0318 13:49:22.116603 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.116944 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.116971 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.117082 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Using SSH client type: external
	I0318 13:49:22.117153 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Using SSH private key: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa (-rw-------)
	I0318 13:49:22.117192 1157416 main.go:141] libmachine: (no-preload-537236) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.7 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 13:49:22.117212 1157416 main.go:141] libmachine: (no-preload-537236) DBG | About to run SSH command:
	I0318 13:49:22.117236 1157416 main.go:141] libmachine: (no-preload-537236) DBG | exit 0
	I0318 13:49:22.240543 1157416 main.go:141] libmachine: (no-preload-537236) DBG | SSH cmd err, output: <nil>: 
	I0318 13:49:22.240913 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetConfigRaw
	I0318 13:49:22.241611 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetIP
	I0318 13:49:22.244016 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.244273 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.244302 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.244506 1157416 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236/config.json ...
	I0318 13:49:22.244729 1157416 machine.go:94] provisionDockerMachine start ...
	I0318 13:49:22.244750 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:49:22.244947 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:22.246869 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.247160 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.247198 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.247246 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:22.247401 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.247546 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.247722 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:22.247893 1157416 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:22.248160 1157416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0318 13:49:22.248174 1157416 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 13:49:22.353134 1157416 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 13:49:22.353164 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetMachineName
	I0318 13:49:22.353435 1157416 buildroot.go:166] provisioning hostname "no-preload-537236"
	I0318 13:49:22.353463 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetMachineName
	I0318 13:49:22.353636 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:22.356058 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.356463 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.356491 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.356645 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:22.356846 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.356965 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.357068 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:22.357201 1157416 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:22.357415 1157416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0318 13:49:22.357434 1157416 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-537236 && echo "no-preload-537236" | sudo tee /etc/hostname
	I0318 13:49:22.477651 1157416 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-537236
	
	I0318 13:49:22.477692 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:22.480537 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.480876 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.480905 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.481135 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:22.481342 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.481520 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.481676 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:22.481887 1157416 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:22.482066 1157416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0318 13:49:22.482082 1157416 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-537236' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-537236/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-537236' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 13:49:22.599489 1157416 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:49:22.599566 1157416 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18429-1106816/.minikube CaCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18429-1106816/.minikube}
	I0318 13:49:22.599596 1157416 buildroot.go:174] setting up certificates
	I0318 13:49:22.599609 1157416 provision.go:84] configureAuth start
	I0318 13:49:22.599624 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetMachineName
	I0318 13:49:22.599981 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetIP
	I0318 13:49:22.602425 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.602800 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.602831 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.602986 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:22.605036 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.605331 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.605356 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.605500 1157416 provision.go:143] copyHostCerts
	I0318 13:49:22.605589 1157416 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem, removing ...
	I0318 13:49:22.605600 1157416 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 13:49:22.605665 1157416 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem (1078 bytes)
	I0318 13:49:22.605786 1157416 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem, removing ...
	I0318 13:49:22.605795 1157416 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 13:49:22.605820 1157416 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem (1123 bytes)
	I0318 13:49:22.605895 1157416 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem, removing ...
	I0318 13:49:22.605904 1157416 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 13:49:22.605927 1157416 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem (1679 bytes)
	I0318 13:49:22.606003 1157416 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem org=jenkins.no-preload-537236 san=[127.0.0.1 192.168.39.7 localhost minikube no-preload-537236]
	I0318 13:49:22.810156 1157416 provision.go:177] copyRemoteCerts
	I0318 13:49:22.810249 1157416 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 13:49:22.810283 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:22.813018 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.813343 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.813376 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.813557 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:22.813743 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.813890 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:22.814080 1157416 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa Username:docker}
	I0318 13:49:22.898886 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 13:49:22.926296 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0318 13:49:22.953260 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0318 13:49:22.981248 1157416 provision.go:87] duration metric: took 381.624842ms to configureAuth
	I0318 13:49:22.981281 1157416 buildroot.go:189] setting minikube options for container-runtime
	I0318 13:49:22.981459 1157416 config.go:182] Loaded profile config "no-preload-537236": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 13:49:22.981573 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:22.984446 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.984848 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:22.984885 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:22.985061 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:22.985269 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.985405 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:22.985595 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:22.985728 1157416 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:22.985911 1157416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0318 13:49:22.985925 1157416 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 13:49:23.259439 1157416 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 13:49:23.259470 1157416 machine.go:97] duration metric: took 1.014725867s to provisionDockerMachine
	I0318 13:49:23.259483 1157416 start.go:293] postStartSetup for "no-preload-537236" (driver="kvm2")
	I0318 13:49:23.259518 1157416 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 13:49:23.259553 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:49:23.259937 1157416 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 13:49:23.259976 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:23.262875 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.263196 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:23.263228 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.263403 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:23.263684 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:23.263861 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:23.264029 1157416 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa Username:docker}
	I0318 13:49:23.348815 1157416 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 13:49:23.353550 1157416 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 13:49:23.353582 1157416 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/addons for local assets ...
	I0318 13:49:23.353659 1157416 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/files for local assets ...
	I0318 13:49:23.353759 1157416 filesync.go:149] local asset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> 11141362.pem in /etc/ssl/certs
	I0318 13:49:23.353885 1157416 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 13:49:23.364831 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:49:23.391345 1157416 start.go:296] duration metric: took 131.846395ms for postStartSetup
	I0318 13:49:23.391396 1157416 fix.go:56] duration metric: took 22.070135111s for fixHost
	I0318 13:49:23.391423 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:23.394229 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.394543 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:23.394583 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.394685 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:23.394937 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:23.395111 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:23.395266 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:23.395433 1157416 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:23.395619 1157416 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.7 22 <nil> <nil>}
	I0318 13:49:23.395631 1157416 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 13:49:23.501504 1157416 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710769763.449975975
	
	I0318 13:49:23.501532 1157416 fix.go:216] guest clock: 1710769763.449975975
	I0318 13:49:23.501542 1157416 fix.go:229] Guest: 2024-03-18 13:49:23.449975975 +0000 UTC Remote: 2024-03-18 13:49:23.39140181 +0000 UTC m=+283.498114537 (delta=58.574165ms)
	I0318 13:49:23.501564 1157416 fix.go:200] guest clock delta is within tolerance: 58.574165ms
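
The fix.go lines above compare the guest clock against the host clock and accept the restarted machine only if the skew is small. A rough Go sketch of that comparison is below; checkClockDelta and the one-second tolerance are assumptions for illustration, not minikube's actual function or threshold.

	package main

	import (
		"fmt"
		"time"
	)

	// checkClockDelta mirrors the idea behind the fix.go lines above: compute
	// the absolute difference between guest and host time and compare it to a
	// tolerance.
	func checkClockDelta(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		host := time.Now()
		guest := host.Add(58 * time.Millisecond) // a delta similar to the log above
		delta, ok := checkClockDelta(guest, host, time.Second)
		fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, ok)
	}
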
	I0318 13:49:23.501584 1157416 start.go:83] releasing machines lock for "no-preload-537236", held for 22.180386627s
	I0318 13:49:23.501612 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:49:23.501900 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetIP
	I0318 13:49:23.504693 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.505130 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:23.505159 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.505331 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:49:23.505889 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:49:23.506092 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:49:23.506198 1157416 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 13:49:23.506252 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:23.506317 1157416 ssh_runner.go:195] Run: cat /version.json
	I0318 13:49:23.506351 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:49:23.509104 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.509414 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.509446 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:23.509465 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.509625 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:23.509819 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:23.509839 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:23.509853 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:23.510043 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:23.510103 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:49:23.510207 1157416 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa Username:docker}
	I0318 13:49:23.510261 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:49:23.510394 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:49:23.510541 1157416 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa Username:docker}
	I0318 13:49:23.616831 1157416 ssh_runner.go:195] Run: systemctl --version
	I0318 13:49:23.624184 1157416 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 13:49:23.779709 1157416 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 13:49:23.786535 1157416 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 13:49:23.786594 1157416 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 13:49:23.805716 1157416 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 13:49:23.805743 1157416 start.go:494] detecting cgroup driver to use...
	I0318 13:49:23.805850 1157416 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 13:49:23.825572 1157416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:49:23.842762 1157416 docker.go:217] disabling cri-docker service (if available) ...
	I0318 13:49:23.842817 1157416 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 13:49:23.859385 1157416 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 13:49:23.876416 1157416 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 13:49:24.005995 1157416 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 13:49:24.193107 1157416 docker.go:233] disabling docker service ...
	I0318 13:49:24.193173 1157416 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 13:49:24.212825 1157416 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 13:49:24.230448 1157416 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 13:49:24.385445 1157416 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 13:49:24.548640 1157416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 13:49:24.564678 1157416 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:49:24.592528 1157416 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 13:49:24.592601 1157416 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:49:24.604303 1157416 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 13:49:24.604394 1157416 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:49:24.616123 1157416 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:49:24.627956 1157416 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:49:24.639194 1157416 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 13:49:24.650789 1157416 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 13:49:24.661390 1157416 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 13:49:24.661443 1157416 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 13:49:24.677180 1157416 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 13:49:24.687973 1157416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:49:24.827386 1157416 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 13:49:24.978805 1157416 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 13:49:24.978898 1157416 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 13:49:24.985647 1157416 start.go:562] Will wait 60s for crictl version
	I0318 13:49:24.985735 1157416 ssh_runner.go:195] Run: which crictl
	I0318 13:49:24.990325 1157416 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 13:49:25.038948 1157416 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 13:49:25.039020 1157416 ssh_runner.go:195] Run: crio --version
	I0318 13:49:25.068855 1157416 ssh_runner.go:195] Run: crio --version
	I0318 13:49:25.107104 1157416 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0318 13:49:23.527811 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .Start
	I0318 13:49:23.528000 1157708 main.go:141] libmachine: (old-k8s-version-909137) Ensuring networks are active...
	I0318 13:49:23.528714 1157708 main.go:141] libmachine: (old-k8s-version-909137) Ensuring network default is active
	I0318 13:49:23.529036 1157708 main.go:141] libmachine: (old-k8s-version-909137) Ensuring network mk-old-k8s-version-909137 is active
	I0318 13:49:23.529491 1157708 main.go:141] libmachine: (old-k8s-version-909137) Getting domain xml...
	I0318 13:49:23.530324 1157708 main.go:141] libmachine: (old-k8s-version-909137) Creating domain...
	I0318 13:49:24.765648 1157708 main.go:141] libmachine: (old-k8s-version-909137) Waiting to get IP...
	I0318 13:49:24.766664 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:24.767122 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:24.767182 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:24.767081 1158507 retry.go:31] will retry after 250.785143ms: waiting for machine to come up
	I0318 13:49:25.019755 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:25.020238 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:25.020273 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:25.020185 1158507 retry.go:31] will retry after 346.894257ms: waiting for machine to come up
	I0318 13:49:25.368815 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:25.369335 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:25.369372 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:25.369268 1158507 retry.go:31] will retry after 367.316359ms: waiting for machine to come up
	I0318 13:49:25.737835 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:25.738404 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:25.738438 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:25.738337 1158507 retry.go:31] will retry after 479.291041ms: waiting for machine to come up
	I0318 13:49:26.219103 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:26.219568 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:26.219599 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:26.219523 1158507 retry.go:31] will retry after 552.309382ms: waiting for machine to come up
	I0318 13:49:26.773363 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:26.773905 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:26.773935 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:26.773857 1158507 retry.go:31] will retry after 703.087388ms: waiting for machine to come up
	I0318 13:49:27.478730 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:27.479330 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:27.479363 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:27.479270 1158507 retry.go:31] will retry after 1.136606935s: waiting for machine to come up
	I0318 13:49:25.108504 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetIP
	I0318 13:49:25.111416 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:25.111795 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:49:25.111827 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:49:25.112035 1157416 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0318 13:49:25.116688 1157416 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:49:25.131526 1157416 kubeadm.go:877] updating cluster {Name:no-preload-537236 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-537236 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 13:49:25.131663 1157416 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0318 13:49:25.131698 1157416 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:49:25.176340 1157416 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0318 13:49:25.176378 1157416 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 13:49:25.176474 1157416 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:49:25.176487 1157416 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 13:49:25.176524 1157416 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 13:49:25.176537 1157416 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 13:49:25.176592 1157416 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0318 13:49:25.176619 1157416 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 13:49:25.176773 1157416 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0318 13:49:25.176789 1157416 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 13:49:25.178479 1157416 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 13:49:25.178485 1157416 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 13:49:25.178486 1157416 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 13:49:25.178488 1157416 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 13:49:25.178480 1157416 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0318 13:49:25.178479 1157416 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:49:25.178540 1157416 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0318 13:49:25.178911 1157416 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 13:49:25.334172 1157416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 13:49:25.334873 1157416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0318 13:49:25.338330 1157416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 13:49:25.338825 1157416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0318 13:49:25.340192 1157416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 13:49:25.350053 1157416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0318 13:49:25.356621 1157416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 13:49:25.472528 1157416 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0318 13:49:25.472571 1157416 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 13:49:25.472627 1157416 ssh_runner.go:195] Run: which crictl
	I0318 13:49:25.630923 1157416 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0318 13:49:25.630996 1157416 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 13:49:25.631001 1157416 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0318 13:49:25.631042 1157416 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 13:49:25.630933 1157416 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0318 13:49:25.631089 1157416 ssh_runner.go:195] Run: which crictl
	I0318 13:49:25.631102 1157416 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0318 13:49:25.631134 1157416 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0318 13:49:25.631107 1157416 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0318 13:49:25.631169 1157416 ssh_runner.go:195] Run: which crictl
	I0318 13:49:25.631183 1157416 ssh_runner.go:195] Run: which crictl
	I0318 13:49:25.631052 1157416 ssh_runner.go:195] Run: which crictl
	I0318 13:49:25.631199 1157416 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0318 13:49:25.631220 1157416 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 13:49:25.631233 1157416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0318 13:49:25.631264 1157416 ssh_runner.go:195] Run: which crictl
	I0318 13:49:25.642598 1157416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0318 13:49:25.708001 1157416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0318 13:49:25.708026 1157416 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0318 13:49:25.708068 1157416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0318 13:49:25.708003 1157416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0318 13:49:25.708129 1157416 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 13:49:25.708162 1157416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0318 13:49:25.708225 1157416 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0318 13:49:25.708286 1157416 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 13:49:25.790492 1157416 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0318 13:49:25.790623 1157416 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 13:49:25.804436 1157416 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0318 13:49:25.804465 1157416 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 13:49:25.804503 1157416 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0318 13:49:25.804532 1157416 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0318 13:49:25.804583 1157416 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0318 13:49:25.804657 1157416 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0318 13:49:25.804684 1157416 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0318 13:49:25.804720 1157416 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0318 13:49:25.804768 1157416 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0318 13:49:25.804801 1157416 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0318 13:49:25.807681 1157416 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0318 13:49:26.162719 1157416 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:49:27.887846 1157416 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.083277557s)
	I0318 13:49:27.887882 1157416 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0: (2.083274384s)
	I0318 13:49:27.887894 1157416 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0318 13:49:27.887916 1157416 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0318 13:49:27.887927 1157416 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 13:49:27.887944 1157416 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1: (2.083121634s)
	I0318 13:49:27.887971 1157416 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0318 13:49:27.887971 1157416 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.083181595s)
	I0318 13:49:27.887990 1157416 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0318 13:49:27.888003 1157416 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (1.725256044s)
	I0318 13:49:27.888008 1157416 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0318 13:49:27.888040 1157416 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0318 13:49:27.888080 1157416 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:49:27.888114 1157416 ssh_runner.go:195] Run: which crictl
	I0318 13:49:27.893415 1157416 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
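
The cache_images.go lines above decide, image by image, whether the cached tarball has to be loaded: an image is transferred only when the runtime inspection shows it is not already present at the expected hash. A small Go sketch of that decision follows; imageNeedsTransfer and the in-memory map are hypothetical stand-ins for the real "sudo podman image inspect" query.

	package main

	import "fmt"

	// imageNeedsTransfer mirrors the cache_images.go decision in the log above:
	// an image is (re)loaded from the local cache only when the runtime does
	// not already hold it.
	func imageNeedsTransfer(runtimeImages map[string]bool, image string) bool {
		return !runtimeImages[image]
	}

	func main() {
		runtime := map[string]bool{
			"registry.k8s.io/pause:3.9": true, // already present in cri-o
		}
		for _, img := range []string{
			"registry.k8s.io/pause:3.9",
			"registry.k8s.io/etcd:3.5.10-0",
		} {
			if imageNeedsTransfer(runtime, img) {
				fmt.Printf("%q needs transfer: loading from cache\n", img)
			} else {
				fmt.Printf("%q already in runtime: skipping\n", img)
			}
		}
	}
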
	I0318 13:49:28.617273 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:28.617711 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:28.617740 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:28.617665 1158507 retry.go:31] will retry after 947.818334ms: waiting for machine to come up
	I0318 13:49:29.566814 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:29.567157 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:29.567177 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:29.567121 1158507 retry.go:31] will retry after 1.328243934s: waiting for machine to come up
	I0318 13:49:30.897514 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:30.898041 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:30.898068 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:30.897988 1158507 retry.go:31] will retry after 2.213855703s: waiting for machine to come up
	I0318 13:49:30.272393 1157416 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.384351202s)
	I0318 13:49:30.272442 1157416 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0318 13:49:30.272459 1157416 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.379011748s)
	I0318 13:49:30.272477 1157416 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 13:49:30.272508 1157416 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0318 13:49:30.272589 1157416 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0318 13:49:30.272623 1157416 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0318 13:49:32.857821 1157416 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.585192694s)
	I0318 13:49:32.857907 1157416 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.585263486s)
	I0318 13:49:32.857990 1157416 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0318 13:49:32.857918 1157416 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0318 13:49:32.858038 1157416 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0318 13:49:32.858097 1157416 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0318 13:49:33.113781 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:33.114303 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:33.114332 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:33.114245 1158507 retry.go:31] will retry after 2.075415123s: waiting for machine to come up
	I0318 13:49:35.191096 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:35.191631 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:35.191665 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:35.191582 1158507 retry.go:31] will retry after 3.520577528s: waiting for machine to come up
	I0318 13:49:36.677356 1157416 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.8192286s)
	I0318 13:49:36.677398 1157416 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0318 13:49:36.677423 1157416 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0318 13:49:36.677464 1157416 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0318 13:49:38.844843 1157416 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.167353366s)
	I0318 13:49:38.844895 1157416 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0318 13:49:38.844933 1157416 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0318 13:49:38.845020 1157416 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0318 13:49:38.713777 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:38.714129 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | unable to find current IP address of domain old-k8s-version-909137 in network mk-old-k8s-version-909137
	I0318 13:49:38.714242 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | I0318 13:49:38.714143 1158507 retry.go:31] will retry after 3.46520277s: waiting for machine to come up
	I0318 13:49:42.181399 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.181856 1157708 main.go:141] libmachine: (old-k8s-version-909137) Found IP for machine: 192.168.72.135
	I0318 13:49:42.181888 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has current primary IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.181897 1157708 main.go:141] libmachine: (old-k8s-version-909137) Reserving static IP address...
	I0318 13:49:42.182344 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "old-k8s-version-909137", mac: "52:54:00:58:c0:cb", ip: "192.168.72.135"} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.182387 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | skip adding static IP to network mk-old-k8s-version-909137 - found existing host DHCP lease matching {name: "old-k8s-version-909137", mac: "52:54:00:58:c0:cb", ip: "192.168.72.135"}
	I0318 13:49:42.182424 1157708 main.go:141] libmachine: (old-k8s-version-909137) Reserved static IP address: 192.168.72.135
	I0318 13:49:42.182453 1157708 main.go:141] libmachine: (old-k8s-version-909137) Waiting for SSH to be available...
	I0318 13:49:42.182470 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | Getting to WaitForSSH function...
	I0318 13:49:42.184589 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.184958 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.184999 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.185061 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | Using SSH client type: external
	I0318 13:49:42.185120 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | Using SSH private key: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/id_rsa (-rw-------)
	I0318 13:49:42.185162 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.135 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 13:49:42.185189 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | About to run SSH command:
	I0318 13:49:42.185204 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | exit 0
	I0318 13:49:42.312570 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | SSH cmd err, output: <nil>: 
	I0318 13:49:42.313005 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetConfigRaw
	I0318 13:49:42.313693 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetIP
	I0318 13:49:42.316497 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.316931 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.316965 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.317239 1157708 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/config.json ...
	I0318 13:49:42.317442 1157708 machine.go:94] provisionDockerMachine start ...
	I0318 13:49:42.317462 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:49:42.317688 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:42.320076 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.320444 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.320485 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.320655 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:42.320818 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:42.320980 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:42.321093 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:42.321257 1157708 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:42.321510 1157708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.135 22 <nil> <nil>}
	I0318 13:49:42.321528 1157708 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 13:49:42.433138 1157708 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 13:49:42.433186 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetMachineName
	I0318 13:49:42.433524 1157708 buildroot.go:166] provisioning hostname "old-k8s-version-909137"
	I0318 13:49:42.433558 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetMachineName
	I0318 13:49:42.433808 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:42.436869 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.437230 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.437264 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.437506 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:42.437739 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:42.437915 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:42.438092 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:42.438285 1157708 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:42.438513 1157708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.135 22 <nil> <nil>}
	I0318 13:49:42.438534 1157708 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-909137 && echo "old-k8s-version-909137" | sudo tee /etc/hostname
	I0318 13:49:42.560410 1157708 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-909137
	
	I0318 13:49:42.560439 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:42.563304 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.563637 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.563673 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.563837 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:42.564053 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:42.564236 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:42.564377 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:42.564581 1157708 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:42.564802 1157708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.135 22 <nil> <nil>}
	I0318 13:49:42.564820 1157708 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-909137' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-909137/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-909137' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 13:49:42.687138 1157708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:49:42.687173 1157708 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18429-1106816/.minikube CaCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18429-1106816/.minikube}
	I0318 13:49:42.687199 1157708 buildroot.go:174] setting up certificates
	I0318 13:49:42.687211 1157708 provision.go:84] configureAuth start
	I0318 13:49:42.687223 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetMachineName
	I0318 13:49:42.687600 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetIP
	I0318 13:49:42.690738 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.691148 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.691179 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.691316 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:42.693730 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.694070 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.694092 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.694255 1157708 provision.go:143] copyHostCerts
	I0318 13:49:42.694336 1157708 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem, removing ...
	I0318 13:49:42.694350 1157708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 13:49:42.694422 1157708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem (1078 bytes)
	I0318 13:49:42.694597 1157708 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem, removing ...
	I0318 13:49:42.694614 1157708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 13:49:42.694652 1157708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem (1123 bytes)
	I0318 13:49:42.694747 1157708 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem, removing ...
	I0318 13:49:42.694756 1157708 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 13:49:42.694775 1157708 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem (1679 bytes)
	I0318 13:49:42.694823 1157708 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-909137 san=[127.0.0.1 192.168.72.135 localhost minikube old-k8s-version-909137]
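The `provision.go:117` line above shows the server certificate being generated with a SAN list of IPs and hostnames (127.0.0.1, 192.168.72.135, localhost, minikube, old-k8s-version-909137). Below is a minimal sketch of building a certificate carrying that SAN list with Go's crypto/x509; it is self-signed for brevity, whereas minikube signs server.pem with its own CA, so treat it as an illustration of the logged step rather than minikube's actual provisioning code.

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SAN entries copied from the log line above, split into IPs and DNS names.
	ips := []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.135")}
	dns := []string{"localhost", "minikube", "old-k8s-version-909137"}

	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-909137"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
		DNSNames:     dns,
	}
	// Self-signed here for brevity; minikube signs server.pem with its CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```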
	I0318 13:49:42.920182 1157708 provision.go:177] copyRemoteCerts
	I0318 13:49:42.920255 1157708 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 13:49:42.920295 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:42.923074 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.923374 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:42.923408 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:42.923533 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:42.923755 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:42.923957 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:42.924095 1157708 sshutil.go:53] new ssh client: &{IP:192.168.72.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/id_rsa Username:docker}
	I0318 13:49:43.649771 1157887 start.go:364] duration metric: took 4m1.881584436s to acquireMachinesLock for "default-k8s-diff-port-569210"
	I0318 13:49:43.649850 1157887 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:49:43.649868 1157887 fix.go:54] fixHost starting: 
	I0318 13:49:43.650335 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:49:43.650378 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:49:43.668606 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36261
	I0318 13:49:43.669107 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:49:43.669721 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:49:43.669755 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:49:43.670092 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:49:43.670269 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:49:43.670427 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetState
	I0318 13:49:43.671973 1157887 fix.go:112] recreateIfNeeded on default-k8s-diff-port-569210: state=Stopped err=<nil>
	I0318 13:49:43.672021 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	W0318 13:49:43.672150 1157887 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:49:43.673832 1157887 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-569210" ...
	I0318 13:49:40.621208 1157416 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (1.776156882s)
	I0318 13:49:40.621252 1157416 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0318 13:49:40.621281 1157416 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0318 13:49:40.621322 1157416 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0318 13:49:41.582256 1157416 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0318 13:49:41.582316 1157416 cache_images.go:123] Successfully loaded all cached images
	I0318 13:49:41.582324 1157416 cache_images.go:92] duration metric: took 16.405930257s to LoadCachedImages
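The cache_images lines above follow a simple pattern: each image tarball under /var/lib/minikube/images is loaded into the CRI-O image store with `sudo podman load -i <file>`, and the overall time is reported as a duration metric. The sketch below reproduces that loop with os/exec, assuming the tarballs already exist on the node as in this run; it illustrates the logged commands and is not minikube's cache_images implementation.

```go
package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"time"
)

func main() {
	// Tarball paths taken from the log above; assumed to already be on the node.
	images := []string{
		"/var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2",
		"/var/lib/minikube/images/etcd_3.5.10-0",
		"/var/lib/minikube/images/coredns_v1.11.1",
		"/var/lib/minikube/images/storage-provisioner_v5",
	}

	start := time.Now()
	for _, img := range images {
		// Load one cached tarball into the container runtime's image store.
		out, err := exec.Command("sudo", "podman", "load", "-i", img).CombinedOutput()
		if err != nil {
			fmt.Printf("loading %s failed: %v\n%s\n", filepath.Base(img), err, out)
			continue
		}
		fmt.Printf("loaded %s\n", filepath.Base(img))
	}
	fmt.Printf("duration metric: took %s to load cached images\n", time.Since(start))
}
```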
	I0318 13:49:41.582341 1157416 kubeadm.go:928] updating node { 192.168.39.7 8443 v1.29.0-rc.2 crio true true} ...
	I0318 13:49:41.582550 1157416 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-537236 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-537236 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 13:49:41.582663 1157416 ssh_runner.go:195] Run: crio config
	I0318 13:49:41.635043 1157416 cni.go:84] Creating CNI manager for ""
	I0318 13:49:41.635074 1157416 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:49:41.635093 1157416 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 13:49:41.635128 1157416 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.7 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-537236 NodeName:no-preload-537236 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 13:49:41.635322 1157416 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.7
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-537236"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.7
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.7"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 13:49:41.635446 1157416 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0318 13:49:41.647072 1157416 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 13:49:41.647148 1157416 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 13:49:41.657448 1157416 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0318 13:49:41.675819 1157416 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0318 13:49:41.693989 1157416 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
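The multi-document kubeadm configuration shown above is rendered in memory and copied to /var/tmp/minikube/kubeadm.yaml.new (2160 bytes in this run). A hedged sketch of producing a cut-down ClusterConfiguration document with text/template follows; the Params struct and its field names are hypothetical stand-ins, not minikube's bootstrapper types, and only a few of the generated sections are reproduced.

```go
package main

import (
	"os"
	"text/template"
)

// Params is a hypothetical stand-in for the values substituted into the
// generated kubeadm.yaml (cluster name, port, versions, subnets).
type Params struct {
	ClusterName       string
	BindPort          int
	KubernetesVersion string
	PodSubnet         string
	ServiceSubnet     string
}

const clusterConfig = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
controlPlaneEndpoint: control-plane.minikube.internal:{{.BindPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := Params{
		ClusterName:       "no-preload-537236",
		BindPort:          8443,
		KubernetesVersion: "v1.29.0-rc.2",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
	}
	tmpl := template.Must(template.New("kubeadm").Parse(clusterConfig))
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
```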
	I0318 13:49:41.714954 1157416 ssh_runner.go:195] Run: grep 192.168.39.7	control-plane.minikube.internal$ /etc/hosts
	I0318 13:49:41.719161 1157416 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.7	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:49:41.732228 1157416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:49:41.871286 1157416 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:49:41.892827 1157416 certs.go:68] Setting up /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236 for IP: 192.168.39.7
	I0318 13:49:41.892850 1157416 certs.go:194] generating shared ca certs ...
	I0318 13:49:41.892868 1157416 certs.go:226] acquiring lock for ca certs: {Name:mka24dbce78b8549c3bcfe7842863cce2dcda207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:49:41.893054 1157416 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key
	I0318 13:49:41.893110 1157416 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key
	I0318 13:49:41.893125 1157416 certs.go:256] generating profile certs ...
	I0318 13:49:41.893246 1157416 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236/client.key
	I0318 13:49:41.893317 1157416 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236/apiserver.key.844e83a6
	I0318 13:49:41.893366 1157416 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236/proxy-client.key
	I0318 13:49:41.893482 1157416 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem (1338 bytes)
	W0318 13:49:41.893518 1157416 certs.go:480] ignoring /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136_empty.pem, impossibly tiny 0 bytes
	I0318 13:49:41.893528 1157416 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 13:49:41.893552 1157416 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem (1078 bytes)
	I0318 13:49:41.893573 1157416 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem (1123 bytes)
	I0318 13:49:41.893594 1157416 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem (1679 bytes)
	I0318 13:49:41.893628 1157416 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:49:41.894503 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 13:49:41.942278 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 13:49:41.978436 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 13:49:42.007161 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 13:49:42.036410 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0318 13:49:42.073179 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 13:49:42.098201 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 13:49:42.131599 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/no-preload-537236/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 13:49:42.159159 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem --> /usr/share/ca-certificates/1114136.pem (1338 bytes)
	I0318 13:49:42.186290 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /usr/share/ca-certificates/11141362.pem (1708 bytes)
	I0318 13:49:42.214362 1157416 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 13:49:42.241240 1157416 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 13:49:42.260511 1157416 ssh_runner.go:195] Run: openssl version
	I0318 13:49:42.267047 1157416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1114136.pem && ln -fs /usr/share/ca-certificates/1114136.pem /etc/ssl/certs/1114136.pem"
	I0318 13:49:42.278582 1157416 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1114136.pem
	I0318 13:49:42.283566 1157416 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:26 /usr/share/ca-certificates/1114136.pem
	I0318 13:49:42.283609 1157416 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1114136.pem
	I0318 13:49:42.289658 1157416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1114136.pem /etc/ssl/certs/51391683.0"
	I0318 13:49:42.300954 1157416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11141362.pem && ln -fs /usr/share/ca-certificates/11141362.pem /etc/ssl/certs/11141362.pem"
	I0318 13:49:42.312828 1157416 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11141362.pem
	I0318 13:49:42.319182 1157416 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:26 /usr/share/ca-certificates/11141362.pem
	I0318 13:49:42.319251 1157416 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11141362.pem
	I0318 13:49:42.325767 1157416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11141362.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 13:49:42.337544 1157416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 13:49:42.349053 1157416 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:49:42.354197 1157416 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:49:42.354249 1157416 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:49:42.361200 1157416 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 13:49:42.374825 1157416 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:49:42.380098 1157416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 13:49:42.387161 1157416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 13:49:42.393702 1157416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 13:49:42.400193 1157416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 13:49:42.406243 1157416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 13:49:42.412423 1157416 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
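Each of the `openssl x509 ... -checkend 86400` runs above asks whether a certificate expires within the next 24 hours. The same check can be expressed in Go by parsing the PEM block and comparing NotAfter against now+24h; the sketch below does that for one of the paths from the log (hard-coded purely for illustration).

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// Rough equivalent of `openssl x509 -noout -in <cert> -checkend 86400`:
// exit non-zero if the certificate expires within the next 24 hours.
func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("Certificate will expire")
		os.Exit(1)
	}
	fmt.Println("Certificate will not expire")
}
```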
	I0318 13:49:42.418599 1157416 kubeadm.go:391] StartCluster: {Name:no-preload-537236 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-537236 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:49:42.418747 1157416 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 13:49:42.418785 1157416 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:49:42.468980 1157416 cri.go:89] found id: ""
	I0318 13:49:42.469088 1157416 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 13:49:42.481101 1157416 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 13:49:42.481130 1157416 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 13:49:42.481137 1157416 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 13:49:42.481190 1157416 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 13:49:42.493014 1157416 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:49:42.494041 1157416 kubeconfig.go:125] found "no-preload-537236" server: "https://192.168.39.7:8443"
	I0318 13:49:42.496519 1157416 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 13:49:42.507415 1157416 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.7
	I0318 13:49:42.507448 1157416 kubeadm.go:1154] stopping kube-system containers ...
	I0318 13:49:42.507460 1157416 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 13:49:42.507513 1157416 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:49:42.554791 1157416 cri.go:89] found id: ""
	I0318 13:49:42.554859 1157416 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 13:49:42.574054 1157416 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:49:42.584928 1157416 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:49:42.584955 1157416 kubeadm.go:156] found existing configuration files:
	
	I0318 13:49:42.585009 1157416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:49:42.594987 1157416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:49:42.595045 1157416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:49:42.605058 1157416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:49:42.614968 1157416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:49:42.615042 1157416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:49:42.625169 1157416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:49:42.634838 1157416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:49:42.634905 1157416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:49:42.644785 1157416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:49:42.654196 1157416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:49:42.654254 1157416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
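The stale-config cleanup above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes the file when the grep fails (in this run the files are simply missing). A simplified sketch of that loop, assuming it runs with enough privileges to read and delete those files:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: drop it so kubeadm regenerates it.
			fmt.Printf("%q may not reference %s - will remove\n", f, endpoint)
			os.Remove(f)
			continue
		}
		fmt.Printf("%q looks current, keeping\n", f)
	}
}
```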
	I0318 13:49:42.663757 1157416 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:49:42.673956 1157416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:42.792913 1157416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:43.799012 1157416 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.006050828s)
	I0318 13:49:43.799075 1157416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:44.061808 1157416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:44.189349 1157416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:44.329800 1157416 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:49:44.329897 1157416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:44.829990 1157416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
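After the kubeadm init phases, the log shows repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` calls while waiting for the apiserver process to appear. A small polling loop along those lines (the two-minute deadline and 500ms interval are assumptions, not values taken from the log):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// Poll until a kube-apiserver process started by minikube shows up, mirroring
// the repeated pgrep calls in the log above.
func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			fmt.Printf("apiserver process appeared: pid %s", out)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver process to appear")
}
```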
	I0318 13:49:43.007024 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 13:49:43.033952 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0318 13:49:43.060218 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 13:49:43.086087 1157708 provision.go:87] duration metric: took 398.861833ms to configureAuth
	I0318 13:49:43.086116 1157708 buildroot.go:189] setting minikube options for container-runtime
	I0318 13:49:43.086326 1157708 config.go:182] Loaded profile config "old-k8s-version-909137": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0318 13:49:43.086442 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:43.089200 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.089534 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:43.089562 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.089758 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:43.089965 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:43.090134 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:43.090286 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:43.090501 1157708 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:43.090718 1157708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.135 22 <nil> <nil>}
	I0318 13:49:43.090744 1157708 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 13:49:43.401681 1157708 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 13:49:43.401715 1157708 machine.go:97] duration metric: took 1.084258164s to provisionDockerMachine
	I0318 13:49:43.401728 1157708 start.go:293] postStartSetup for "old-k8s-version-909137" (driver="kvm2")
	I0318 13:49:43.401739 1157708 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 13:49:43.401759 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:49:43.402073 1157708 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 13:49:43.402116 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:43.404775 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.405164 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:43.405192 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.405335 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:43.405525 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:43.405740 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:43.405884 1157708 sshutil.go:53] new ssh client: &{IP:192.168.72.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/id_rsa Username:docker}
	I0318 13:49:43.493000 1157708 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 13:49:43.497705 1157708 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 13:49:43.497740 1157708 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/addons for local assets ...
	I0318 13:49:43.497818 1157708 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/files for local assets ...
	I0318 13:49:43.497931 1157708 filesync.go:149] local asset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> 11141362.pem in /etc/ssl/certs
	I0318 13:49:43.498058 1157708 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 13:49:43.509185 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:49:43.535401 1157708 start.go:296] duration metric: took 133.657179ms for postStartSetup
	I0318 13:49:43.535454 1157708 fix.go:56] duration metric: took 20.033670705s for fixHost
	I0318 13:49:43.535482 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:43.538464 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.538964 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:43.538998 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.539178 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:43.539386 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:43.539528 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:43.539702 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:43.539899 1157708 main.go:141] libmachine: Using SSH client type: native
	I0318 13:49:43.540120 1157708 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.135 22 <nil> <nil>}
	I0318 13:49:43.540133 1157708 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 13:49:43.649578 1157708 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710769783.596310102
	
	I0318 13:49:43.649610 1157708 fix.go:216] guest clock: 1710769783.596310102
	I0318 13:49:43.649621 1157708 fix.go:229] Guest: 2024-03-18 13:49:43.596310102 +0000 UTC Remote: 2024-03-18 13:49:43.535459129 +0000 UTC m=+270.592972067 (delta=60.850973ms)
	I0318 13:49:43.649656 1157708 fix.go:200] guest clock delta is within tolerance: 60.850973ms
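The fix.go lines above read the guest clock over SSH (`date +%s.%N`), compare it with the host-side timestamp, and accept the machine when the delta is small (60.850973ms here). A toy reproduction of that comparison, using the values from this run and a hypothetical tolerance constant:

```go
package main

import (
	"fmt"
	"time"
)

// Compare a guest timestamp (as returned by `date +%s.%N` over SSH) against a
// host-side timestamp and decide whether the delta is within tolerance, as in
// the "guest clock delta is within tolerance" line above.
func main() {
	guest := time.Unix(1710769783, 596310102) // parsed from "1710769783.596310102"
	remote := time.Date(2024, 3, 18, 13, 49, 43, 535459129, time.UTC)

	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // hypothetical threshold, not from the log
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}
```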
	I0318 13:49:43.649663 1157708 start.go:83] releasing machines lock for "old-k8s-version-909137", held for 20.147918331s
	I0318 13:49:43.649689 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:49:43.650002 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetIP
	I0318 13:49:43.652712 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.653114 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:43.653148 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.653278 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:49:43.653873 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:49:43.654112 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .DriverName
	I0318 13:49:43.654198 1157708 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 13:49:43.654264 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:43.654333 1157708 ssh_runner.go:195] Run: cat /version.json
	I0318 13:49:43.654369 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHHostname
	I0318 13:49:43.657281 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.657390 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.657741 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:43.657811 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:43.657830 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.657855 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:43.657918 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:43.658016 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:43.658065 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHPort
	I0318 13:49:43.658199 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:43.658245 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHKeyPath
	I0318 13:49:43.658326 1157708 sshutil.go:53] new ssh client: &{IP:192.168.72.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/id_rsa Username:docker}
	I0318 13:49:43.658411 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetSSHUsername
	I0318 13:49:43.658574 1157708 sshutil.go:53] new ssh client: &{IP:192.168.72.135 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/id_rsa Username:docker}
	I0318 13:49:43.737787 1157708 ssh_runner.go:195] Run: systemctl --version
	I0318 13:49:43.769157 1157708 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 13:49:43.920376 1157708 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 13:49:43.928165 1157708 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 13:49:43.928253 1157708 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 13:49:43.946102 1157708 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 13:49:43.946133 1157708 start.go:494] detecting cgroup driver to use...
	I0318 13:49:43.946210 1157708 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 13:49:43.963482 1157708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:49:43.978540 1157708 docker.go:217] disabling cri-docker service (if available) ...
	I0318 13:49:43.978613 1157708 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 13:49:43.999525 1157708 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 13:49:44.021242 1157708 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 13:49:44.198165 1157708 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 13:49:44.363408 1157708 docker.go:233] disabling docker service ...
	I0318 13:49:44.363474 1157708 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 13:49:44.383527 1157708 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 13:49:44.398888 1157708 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 13:49:44.547711 1157708 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 13:49:44.662762 1157708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 13:49:44.678786 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:49:44.702931 1157708 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0318 13:49:44.703004 1157708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:49:44.721453 1157708 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 13:49:44.721519 1157708 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:49:44.739487 1157708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:49:44.757379 1157708 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:49:44.777508 1157708 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 13:49:44.798788 1157708 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 13:49:44.814280 1157708 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 13:49:44.814383 1157708 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 13:49:44.836507 1157708 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 13:49:44.852614 1157708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:49:44.994352 1157708 ssh_runner.go:195] Run: sudo systemctl restart crio
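The preceding crio.go/ssh_runner.go lines rewrite /etc/crio/crio.conf.d/02-crio.conf in place with sed, pinning pause_image to registry.k8s.io/pause:3.2 and cgroup_manager to cgroupfs before restarting CRI-O. A rough Go equivalent of those two overrides, shown only as an illustration of the substitution (the helper name is made up):

package main

import (
	"fmt"
	"regexp"
)

// overrideCrioConf applies the same two line-level replacements the log's sed
// commands perform on the 02-crio.conf drop-in.
func overrideCrioConf(conf string) string {
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.2"`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	return conf
}

func main() {
	in := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
	fmt.Print(overrideCrioConf(in))
}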
	I0318 13:49:45.184815 1157708 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 13:49:45.184907 1157708 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 13:49:45.190649 1157708 start.go:562] Will wait 60s for crictl version
	I0318 13:49:45.190724 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:45.195265 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 13:49:45.242737 1157708 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 13:49:45.242850 1157708 ssh_runner.go:195] Run: crio --version
	I0318 13:49:45.288154 1157708 ssh_runner.go:195] Run: crio --version
	I0318 13:49:45.331441 1157708 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0318 13:49:43.675531 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .Start
	I0318 13:49:43.675763 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Ensuring networks are active...
	I0318 13:49:43.676642 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Ensuring network default is active
	I0318 13:49:43.677014 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Ensuring network mk-default-k8s-diff-port-569210 is active
	I0318 13:49:43.677510 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Getting domain xml...
	I0318 13:49:43.678319 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Creating domain...
	I0318 13:49:45.002977 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting to get IP...
	I0318 13:49:45.003870 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:45.004406 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:45.004499 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:45.004392 1158648 retry.go:31] will retry after 294.950888ms: waiting for machine to come up
	I0318 13:49:45.301264 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:45.301835 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:45.301863 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:45.301747 1158648 retry.go:31] will retry after 291.810051ms: waiting for machine to come up
	I0318 13:49:45.595571 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:45.596720 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:45.596832 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:45.596786 1158648 retry.go:31] will retry after 390.232445ms: waiting for machine to come up
	I0318 13:49:45.988661 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:45.989506 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:45.989534 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:45.989393 1158648 retry.go:31] will retry after 487.148784ms: waiting for machine to come up
	I0318 13:49:46.477982 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:46.478667 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:46.478701 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:46.478600 1158648 retry.go:31] will retry after 474.795485ms: waiting for machine to come up
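The retry.go lines above are a straightforward poll-with-backoff: query libvirt for a DHCP lease, and if the domain has no IP yet, sleep a growing, jittered delay and try again until a timeout. A self-contained sketch of that loop, with lookupIP as a stand-in for the real lease query:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for the libvirt DHCP-lease query seen in the log; it is
// a placeholder, not a real minikube helper.
func lookupIP() (string, error) { return "", errors.New("no lease yet") }

// waitForIP polls until lookupIP succeeds or the timeout expires, sleeping a
// growing, jittered delay between attempts, like the retry.go lines above.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay += delay / 2
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	if _, err := waitForIP(3 * time.Second); err != nil {
		fmt.Println(err)
	}
}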
	I0318 13:49:45.332975 1157708 main.go:141] libmachine: (old-k8s-version-909137) Calling .GetIP
	I0318 13:49:45.336274 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:45.336701 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:c0:cb", ip: ""} in network mk-old-k8s-version-909137: {Iface:virbr4 ExpiryTime:2024-03-18 14:39:00 +0000 UTC Type:0 Mac:52:54:00:58:c0:cb Iaid: IPaddr:192.168.72.135 Prefix:24 Hostname:old-k8s-version-909137 Clientid:01:52:54:00:58:c0:cb}
	I0318 13:49:45.336753 1157708 main.go:141] libmachine: (old-k8s-version-909137) DBG | domain old-k8s-version-909137 has defined IP address 192.168.72.135 and MAC address 52:54:00:58:c0:cb in network mk-old-k8s-version-909137
	I0318 13:49:45.336985 1157708 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0318 13:49:45.343147 1157708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:49:45.361840 1157708 kubeadm.go:877] updating cluster {Name:old-k8s-version-909137 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-909137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.135 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 13:49:45.361982 1157708 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0318 13:49:45.362040 1157708 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:49:45.419490 1157708 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 13:49:45.419587 1157708 ssh_runner.go:195] Run: which lz4
	I0318 13:49:45.424689 1157708 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 13:49:45.431110 1157708 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 13:49:45.431155 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0318 13:49:47.510385 1157708 crio.go:444] duration metric: took 2.085724633s to copy over tarball
	I0318 13:49:47.510483 1157708 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
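The sequence above first checks whether /preloaded.tar.lz4 already exists on the guest, copies the cached preload tarball over SSH when it does not, and then unpacks it into /var with tar and lz4. The sketch below mirrors those steps using plain ssh/scp instead of minikube's internal SSH client; the paths come from the log, the helper itself is illustrative.

package main

import (
	"fmt"
	"os/exec"
)

// ensurePreload copies the cached tarball to the guest if it is missing, then
// extracts it into /var, preserving xattrs and decompressing with lz4.
func ensurePreload(key, host, localTarball string) error {
	// Does the guest already have the tarball?
	if err := exec.Command("ssh", "-i", key, "docker@"+host, "stat", "/preloaded.tar.lz4").Run(); err != nil {
		// Not there (or stat failed): copy it over.
		dst := fmt.Sprintf("docker@%s:/preloaded.tar.lz4", host)
		if err := exec.Command("scp", "-i", key, localTarball, dst).Run(); err != nil {
			return err
		}
	}
	cmd := "sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4"
	return exec.Command("ssh", "-i", key, "docker@"+host, cmd).Run()
}

func main() {
	err := ensurePreload(
		"/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/old-k8s-version-909137/id_rsa",
		"192.168.72.135",
		"preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4",
	)
	fmt.Println("ensurePreload:", err)
}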
	I0318 13:49:45.330925 1157416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:45.364854 1157416 api_server.go:72] duration metric: took 1.035057096s to wait for apiserver process to appear ...
	I0318 13:49:45.364883 1157416 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:49:45.364927 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:49:45.365577 1157416 api_server.go:269] stopped: https://192.168.39.7:8443/healthz: Get "https://192.168.39.7:8443/healthz": dial tcp 192.168.39.7:8443: connect: connection refused
	I0318 13:49:45.865126 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:49:49.135799 1157416 api_server.go:279] https://192.168.39.7:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 13:49:49.135840 1157416 api_server.go:103] status: https://192.168.39.7:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 13:49:49.135862 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:49:49.154112 1157416 api_server.go:279] https://192.168.39.7:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 13:49:49.154142 1157416 api_server.go:103] status: https://192.168.39.7:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 13:49:49.365566 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:49:49.375812 1157416 api_server.go:279] https://192.168.39.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:49:49.375862 1157416 api_server.go:103] status: https://192.168.39.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:49:49.865027 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:49:49.873132 1157416 api_server.go:279] https://192.168.39.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:49:49.873176 1157416 api_server.go:103] status: https://192.168.39.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:49:50.365178 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:49:50.371461 1157416 api_server.go:279] https://192.168.39.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:49:50.371506 1157416 api_server.go:103] status: https://192.168.39.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:49:50.865038 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:49:50.870329 1157416 api_server.go:279] https://192.168.39.7:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:49:50.870383 1157416 api_server.go:103] status: https://192.168.39.7:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:49:51.365030 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:49:51.370284 1157416 api_server.go:279] https://192.168.39.7:8443/healthz returned 200:
	ok
	I0318 13:49:51.379599 1157416 api_server.go:141] control plane version: v1.29.0-rc.2
	I0318 13:49:51.379633 1157416 api_server.go:131] duration metric: took 6.014741397s to wait for apiserver health ...
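The healthz polling above boils down to: keep issuing GET /healthz against the apiserver, treat 403 and 500 responses as "not ready yet", and stop once it returns 200 or a deadline passes. A minimal sketch of that loop (the interval, timeout and insecure TLS setting are assumptions for the example):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the endpoint until it answers 200 OK or the timeout
// expires; non-200 answers (403, 500) are logged and retried.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.7:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}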
	I0318 13:49:51.379645 1157416 cni.go:84] Creating CNI manager for ""
	I0318 13:49:51.379654 1157416 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:49:51.582399 1157416 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 13:49:46.955128 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:46.955620 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:46.955649 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:46.955579 1158648 retry.go:31] will retry after 817.278037ms: waiting for machine to come up
	I0318 13:49:47.774954 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:47.775449 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:47.775480 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:47.775391 1158648 retry.go:31] will retry after 1.032655883s: waiting for machine to come up
	I0318 13:49:48.810156 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:48.810699 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:48.810730 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:48.810644 1158648 retry.go:31] will retry after 1.1441145s: waiting for machine to come up
	I0318 13:49:49.956702 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:49.957179 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:49.957214 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:49.957105 1158648 retry.go:31] will retry after 1.428592019s: waiting for machine to come up
	I0318 13:49:51.387025 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:51.387627 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:51.387660 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:51.387555 1158648 retry.go:31] will retry after 2.266795202s: waiting for machine to come up
	I0318 13:49:50.947045 1157708 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.436514023s)
	I0318 13:49:50.947084 1157708 crio.go:451] duration metric: took 3.436661543s to extract the tarball
	I0318 13:49:50.947095 1157708 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 13:49:51.007406 1157708 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:49:51.048060 1157708 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0318 13:49:51.048091 1157708 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0318 13:49:51.048181 1157708 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:49:51.048228 1157708 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 13:49:51.048287 1157708 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0318 13:49:51.048346 1157708 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0318 13:49:51.048398 1157708 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 13:49:51.048432 1157708 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0318 13:49:51.048232 1157708 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 13:49:51.048183 1157708 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 13:49:51.049960 1157708 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0318 13:49:51.050268 1157708 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:49:51.050288 1157708 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0318 13:49:51.050355 1157708 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 13:49:51.050594 1157708 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 13:49:51.050627 1157708 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0318 13:49:51.050584 1157708 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 13:49:51.051230 1157708 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 13:49:51.219906 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0318 13:49:51.220734 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 13:49:51.235283 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0318 13:49:51.236445 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0318 13:49:51.246700 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0318 13:49:51.251299 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0318 13:49:51.311054 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0318 13:49:51.311292 1157708 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0318 13:49:51.311336 1157708 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0318 13:49:51.311389 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:51.343594 1157708 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0318 13:49:51.343649 1157708 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 13:49:51.343739 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:51.391608 1157708 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0318 13:49:51.391657 1157708 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0318 13:49:51.391706 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:51.448987 1157708 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0318 13:49:51.449029 1157708 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0318 13:49:51.449058 1157708 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0318 13:49:51.449061 1157708 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0318 13:49:51.449088 1157708 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0318 13:49:51.449103 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:51.449035 1157708 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0318 13:49:51.449135 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0318 13:49:51.449178 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:51.449207 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0318 13:49:51.449245 1157708 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0318 13:49:51.449267 1157708 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0318 13:49:51.449317 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:51.449210 1157708 ssh_runner.go:195] Run: which crictl
	I0318 13:49:51.449223 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0318 13:49:51.469614 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0318 13:49:51.469613 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0318 13:49:51.562455 1157708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0318 13:49:51.562506 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0318 13:49:51.564170 1157708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0318 13:49:51.564269 1157708 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0318 13:49:51.578471 1157708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0318 13:49:51.615689 1157708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0318 13:49:51.615708 1157708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0318 13:49:51.657287 1157708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0318 13:49:51.657361 1157708 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0318 13:49:51.956746 1157708 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:49:52.106933 1157708 cache_images.go:92] duration metric: took 1.058823514s to LoadCachedImages
	W0318 13:49:52.107046 1157708 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
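Each "needs transfer" decision above comes from comparing the image ID the runtime reports with the ID expected for the cached image; when they differ or the image is missing, the old image is removed and the cached copy is loaded instead. A small sketch of that check (the helper names are invented; the image and hash are copied from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imageIDInRuntime asks podman for the stored ID of an image, as the
// `podman image inspect --format {{.Id}}` runs above do.
func imageIDInRuntime(image string) (string, error) {
	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
	return strings.TrimSpace(string(out)), err
}

// needsTransfer reports whether the image is absent or has a different ID
// than the cached copy, i.e. whether it must be reloaded from the cache.
func needsTransfer(image, wantID string) bool {
	gotID, err := imageIDInRuntime(image)
	return err != nil || gotID != wantID
}

func main() {
	fmt.Println(needsTransfer("registry.k8s.io/etcd:3.4.13-0",
		"0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934"))
}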
	I0318 13:49:52.107064 1157708 kubeadm.go:928] updating node { 192.168.72.135 8443 v1.20.0 crio true true} ...
	I0318 13:49:52.107259 1157708 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-909137 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.135
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-909137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 13:49:52.107348 1157708 ssh_runner.go:195] Run: crio config
	I0318 13:49:52.163493 1157708 cni.go:84] Creating CNI manager for ""
	I0318 13:49:52.163526 1157708 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:49:52.163546 1157708 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 13:49:52.163572 1157708 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.135 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-909137 NodeName:old-k8s-version-909137 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.135"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.135 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0318 13:49:52.163740 1157708 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.135
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-909137"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.135
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.135"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 13:49:52.163818 1157708 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0318 13:49:52.175668 1157708 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 13:49:52.175740 1157708 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 13:49:52.186745 1157708 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0318 13:49:52.209877 1157708 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 13:49:52.232921 1157708 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
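The kubeadm.go lines above turn the logged parameter set (advertise address, node name, CRI socket, and so on) into the kubeadm config dump shown earlier, which is then written to /var/tmp/minikube/kubeadm.yaml.new. As an illustration only, a fragment of that config could be rendered with text/template like this (field names are simplified for the sketch and are not minikube's real struct):

package main

import (
	"os"
	"text/template"
)

const frag = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	params := struct {
		AdvertiseAddress, CRISocket, NodeName string
		APIServerPort                         int
	}{"192.168.72.135", "/var/run/crio/crio.sock", "old-k8s-version-909137", 8443}
	template.Must(template.New("kubeadm").Parse(frag)).Execute(os.Stdout, params)
}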
	I0318 13:49:52.256571 1157708 ssh_runner.go:195] Run: grep 192.168.72.135	control-plane.minikube.internal$ /etc/hosts
	I0318 13:49:52.262776 1157708 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.135	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:49:52.278435 1157708 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:49:52.422705 1157708 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:49:52.443710 1157708 certs.go:68] Setting up /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137 for IP: 192.168.72.135
	I0318 13:49:52.443740 1157708 certs.go:194] generating shared ca certs ...
	I0318 13:49:52.443760 1157708 certs.go:226] acquiring lock for ca certs: {Name:mka24dbce78b8549c3bcfe7842863cce2dcda207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:49:52.443951 1157708 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key
	I0318 13:49:52.444009 1157708 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key
	I0318 13:49:52.444023 1157708 certs.go:256] generating profile certs ...
	I0318 13:49:52.444155 1157708 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/client.key
	I0318 13:49:52.444239 1157708 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/apiserver.key.e9806bd6
	I0318 13:49:52.444303 1157708 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/proxy-client.key
	I0318 13:49:52.444492 1157708 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem (1338 bytes)
	W0318 13:49:52.444532 1157708 certs.go:480] ignoring /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136_empty.pem, impossibly tiny 0 bytes
	I0318 13:49:52.444548 1157708 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 13:49:52.444585 1157708 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem (1078 bytes)
	I0318 13:49:52.444633 1157708 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem (1123 bytes)
	I0318 13:49:52.444672 1157708 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem (1679 bytes)
	I0318 13:49:52.444729 1157708 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:49:52.445363 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 13:49:52.506720 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 13:49:52.550057 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 13:49:52.586845 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 13:49:52.627933 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0318 13:49:52.681479 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0318 13:49:52.722052 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 13:49:52.755021 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/old-k8s-version-909137/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 13:49:52.782181 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 13:49:52.808269 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem --> /usr/share/ca-certificates/1114136.pem (1338 bytes)
	I0318 13:49:52.835041 1157708 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /usr/share/ca-certificates/11141362.pem (1708 bytes)
	I0318 13:49:52.863776 1157708 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 13:49:52.883579 1157708 ssh_runner.go:195] Run: openssl version
	I0318 13:49:52.889846 1157708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 13:49:52.902288 1157708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:49:52.908241 1157708 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:49:52.908302 1157708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:49:52.915392 1157708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 13:49:52.928374 1157708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1114136.pem && ln -fs /usr/share/ca-certificates/1114136.pem /etc/ssl/certs/1114136.pem"
	I0318 13:49:52.941444 1157708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1114136.pem
	I0318 13:49:52.946463 1157708 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:26 /usr/share/ca-certificates/1114136.pem
	I0318 13:49:52.946514 1157708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1114136.pem
	I0318 13:49:52.953447 1157708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1114136.pem /etc/ssl/certs/51391683.0"
	I0318 13:49:52.966231 1157708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11141362.pem && ln -fs /usr/share/ca-certificates/11141362.pem /etc/ssl/certs/11141362.pem"
	I0318 13:49:52.977986 1157708 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11141362.pem
	I0318 13:49:52.982748 1157708 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:26 /usr/share/ca-certificates/11141362.pem
	I0318 13:49:52.982809 1157708 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11141362.pem
	I0318 13:49:52.988715 1157708 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11141362.pem /etc/ssl/certs/3ec20f2e.0"
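Note: the certs.go/ssh_runner.go block above ends with minikube wiring each shared CA into the guest's trust store: hash the certificate with openssl, then symlink it as /etc/ssl/certs/<hash>.0 so OpenSSL can look it up by subject hash. A minimal standalone sketch of that hash-then-symlink step follows; it is not minikube's actual code, and it assumes openssl is on PATH and that the process may write under /etc/ssl/certs.

// Illustrative only: the hash-then-symlink step shown in the log above,
// reduced to a standalone sketch (not minikube's implementation).
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert computes the OpenSSL subject hash of certPath and creates
// /etc/ssl/certs/<hash>.0 pointing at it, mirroring the
// "openssl x509 -hash -noout -in ..." + "ln -fs ..." pair in the log.
func linkCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // "-f" behaviour: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}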
	I0318 13:49:51.626774 1157416 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 13:49:51.642685 1157416 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 13:49:51.669902 1157416 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 13:49:51.759474 1157416 system_pods.go:59] 8 kube-system pods found
	I0318 13:49:51.759519 1157416 system_pods.go:61] "coredns-76f75df574-kxzfm" [d0aad76d-f135-4d4a-a2f5-117707b4b2f4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 13:49:51.759530 1157416 system_pods.go:61] "etcd-no-preload-537236" [d02ad01c-1b16-4b97-be18-237b1cbfe3aa] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 13:49:51.759539 1157416 system_pods.go:61] "kube-apiserver-no-preload-537236" [00b05050-229b-47f4-9af2-12be1711200a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 13:49:51.759548 1157416 system_pods.go:61] "kube-controller-manager-no-preload-537236" [3e7b86df-4111-4bd9-8925-a22cf12e10ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 13:49:51.759552 1157416 system_pods.go:61] "kube-proxy-5dspp" [adee19a0-eeb6-438f-a55d-30f1e1d87ef6] Running
	I0318 13:49:51.759557 1157416 system_pods.go:61] "kube-scheduler-no-preload-537236" [17628d51-80f5-4985-8ddb-151cab8f8c5d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 13:49:51.759562 1157416 system_pods.go:61] "metrics-server-57f55c9bc5-hhh5m" [282de489-beee-47a9-bd29-5da43cf70146] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:49:51.759565 1157416 system_pods.go:61] "storage-provisioner" [97d3de68-0863-4bba-9cb1-2ce98d791935] Running
	I0318 13:49:51.759578 1157416 system_pods.go:74] duration metric: took 89.654007ms to wait for pod list to return data ...
	I0318 13:49:51.759591 1157416 node_conditions.go:102] verifying NodePressure condition ...
	I0318 13:49:51.764164 1157416 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:49:51.764191 1157416 node_conditions.go:123] node cpu capacity is 2
	I0318 13:49:51.764204 1157416 node_conditions.go:105] duration metric: took 4.607295ms to run NodePressure ...
	I0318 13:49:51.764227 1157416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:52.645812 1157416 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 13:49:52.653573 1157416 kubeadm.go:733] kubelet initialised
	I0318 13:49:52.653602 1157416 kubeadm.go:734] duration metric: took 7.75557ms waiting for restarted kubelet to initialise ...
	I0318 13:49:52.653614 1157416 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:49:52.662179 1157416 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-kxzfm" in "kube-system" namespace to be "Ready" ...
	I0318 13:49:54.678656 1157416 pod_ready.go:102] pod "coredns-76f75df574-kxzfm" in "kube-system" namespace has status "Ready":"False"
	I0318 13:49:53.656476 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:53.656913 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:53.656943 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:53.656870 1158648 retry.go:31] will retry after 2.341702781s: waiting for machine to come up
	I0318 13:49:56.001662 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:56.002163 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:56.002188 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:56.002106 1158648 retry.go:31] will retry after 2.885262489s: waiting for machine to come up
	I0318 13:49:53.000141 1157708 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:49:53.005021 1157708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 13:49:53.011156 1157708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 13:49:53.018329 1157708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 13:49:53.025687 1157708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 13:49:53.032199 1157708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 13:49:53.039048 1157708 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0318 13:49:53.045789 1157708 kubeadm.go:391] StartCluster: {Name:old-k8s-version-909137 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-909137 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.135 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:49:53.045882 1157708 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 13:49:53.045931 1157708 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:49:53.085682 1157708 cri.go:89] found id: ""
	I0318 13:49:53.085788 1157708 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 13:49:53.098063 1157708 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 13:49:53.098091 1157708 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 13:49:53.098098 1157708 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 13:49:53.098153 1157708 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 13:49:53.109692 1157708 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:49:53.110853 1157708 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-909137" does not appear in /home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 13:49:53.111862 1157708 kubeconfig.go:62] /home/jenkins/minikube-integration/18429-1106816/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-909137" cluster setting kubeconfig missing "old-k8s-version-909137" context setting]
	I0318 13:49:53.113334 1157708 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/kubeconfig: {Name:mk9c139f2702214315ee08dd7c5d02f739047458 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:49:53.115135 1157708 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 13:49:53.125910 1157708 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.135
	I0318 13:49:53.125949 1157708 kubeadm.go:1154] stopping kube-system containers ...
	I0318 13:49:53.125965 1157708 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 13:49:53.126029 1157708 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:49:53.172181 1157708 cri.go:89] found id: ""
	I0318 13:49:53.172268 1157708 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 13:49:53.189585 1157708 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:49:53.200744 1157708 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:49:53.200768 1157708 kubeadm.go:156] found existing configuration files:
	
	I0318 13:49:53.200811 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:49:53.211176 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:49:53.211250 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:49:53.221744 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:49:53.231342 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:49:53.231404 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:49:53.242162 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:49:53.252408 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:49:53.252480 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:49:53.262690 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:49:53.272829 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:49:53.272903 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:49:53.283287 1157708 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:49:53.294124 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:53.437482 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:54.297415 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:54.588919 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:54.758204 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:49:54.863030 1157708 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:49:54.863140 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:55.363708 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:55.863301 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:56.364064 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:56.863896 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:57.363240 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:57.863621 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:57.212652 1157416 pod_ready.go:102] pod "coredns-76f75df574-kxzfm" in "kube-system" namespace has status "Ready":"False"
	I0318 13:49:57.669562 1157416 pod_ready.go:92] pod "coredns-76f75df574-kxzfm" in "kube-system" namespace has status "Ready":"True"
	I0318 13:49:57.669584 1157416 pod_ready.go:81] duration metric: took 5.007366512s for pod "coredns-76f75df574-kxzfm" in "kube-system" namespace to be "Ready" ...
	I0318 13:49:57.669597 1157416 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:49:58.176528 1157416 pod_ready.go:92] pod "etcd-no-preload-537236" in "kube-system" namespace has status "Ready":"True"
	I0318 13:49:58.176557 1157416 pod_ready.go:81] duration metric: took 506.95201ms for pod "etcd-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:49:58.176570 1157416 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:49:58.888400 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:49:58.888706 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | unable to find current IP address of domain default-k8s-diff-port-569210 in network mk-default-k8s-diff-port-569210
	I0318 13:49:58.888742 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | I0318 13:49:58.888681 1158648 retry.go:31] will retry after 4.094701536s: waiting for machine to come up
	I0318 13:49:58.363294 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:58.864051 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:59.363586 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:49:59.863802 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:00.363862 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:00.864277 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:01.363381 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:01.864307 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:02.363278 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:02.863315 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
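Note: the repeated pgrep lines above are minikube polling roughly every 500ms for the restarted kube-apiserver process after the kubeadm init phases. A reduced sketch of that wait loop follows; the two-minute timeout is made up for the sketch, since api_server.go's real limit is not visible in this log.

// Sketch of the apiserver wait loop shown above: shell out to pgrep until a
// kube-apiserver process matching the minikube pattern exists or a deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Same probe the log shows: sudo pgrep -xnf kube-apiserver.*minikube.*
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil // pgrep exited 0, so a matching process exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	fmt.Println(waitForAPIServerProcess(2 * time.Minute))
}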
	I0318 13:50:04.309987 1157263 start.go:364] duration metric: took 57.988518292s to acquireMachinesLock for "embed-certs-173036"
	I0318 13:50:04.310046 1157263 start.go:96] Skipping create...Using existing machine configuration
	I0318 13:50:04.310062 1157263 fix.go:54] fixHost starting: 
	I0318 13:50:04.310469 1157263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:50:04.310506 1157263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:50:04.330585 1157263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41957
	I0318 13:50:04.331049 1157263 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:50:04.331648 1157263 main.go:141] libmachine: Using API Version  1
	I0318 13:50:04.331684 1157263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:50:04.332066 1157263 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:50:04.332316 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:50:04.332513 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetState
	I0318 13:50:04.334091 1157263 fix.go:112] recreateIfNeeded on embed-certs-173036: state=Stopped err=<nil>
	I0318 13:50:04.334117 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	W0318 13:50:04.334299 1157263 fix.go:138] unexpected machine state, will restart: <nil>
	I0318 13:50:04.336146 1157263 out.go:177] * Restarting existing kvm2 VM for "embed-certs-173036" ...
	I0318 13:50:00.184168 1157416 pod_ready.go:102] pod "kube-apiserver-no-preload-537236" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:01.183846 1157416 pod_ready.go:92] pod "kube-apiserver-no-preload-537236" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:01.183872 1157416 pod_ready.go:81] duration metric: took 3.007292631s for pod "kube-apiserver-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:01.183884 1157416 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:03.206725 1157416 pod_ready.go:102] pod "kube-controller-manager-no-preload-537236" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:04.691357 1157416 pod_ready.go:92] pod "kube-controller-manager-no-preload-537236" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:04.691391 1157416 pod_ready.go:81] duration metric: took 3.507497259s for pod "kube-controller-manager-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:04.691410 1157416 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-5dspp" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:04.696593 1157416 pod_ready.go:92] pod "kube-proxy-5dspp" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:04.696618 1157416 pod_ready.go:81] duration metric: took 5.198628ms for pod "kube-proxy-5dspp" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:04.696627 1157416 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:04.700977 1157416 pod_ready.go:92] pod "kube-scheduler-no-preload-537236" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:04.700995 1157416 pod_ready.go:81] duration metric: took 4.36095ms for pod "kube-scheduler-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:04.701006 1157416 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace to be "Ready" ...
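Note: the pod_ready.go lines above poll each system-critical pod until its Ready condition reports True ("has status \"Ready\":\"True\""). A sketch of the same check using client-go follows; the kubeconfig path, pod name, poll interval, and timeout are placeholders and not minikube's actual values.

// Sketch of a PodReady wait in the spirit of the pod_ready.go lines above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls the pod until its PodReady condition is True or the timeout expires.
func waitPodReady(c *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					return nil // status "Ready":"True", as in the log
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s never became Ready", ns, name)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitPodReady(client, "kube-system", "etcd-no-preload-537236", 4*time.Minute))
}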
	I0318 13:50:02.985340 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:02.985804 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has current primary IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:02.985818 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Found IP for machine: 192.168.61.3
	I0318 13:50:02.985828 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Reserving static IP address...
	I0318 13:50:02.986233 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-569210", mac: "52:54:00:4d:48:26", ip: "192.168.61.3"} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:02.986292 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | skip adding static IP to network mk-default-k8s-diff-port-569210 - found existing host DHCP lease matching {name: "default-k8s-diff-port-569210", mac: "52:54:00:4d:48:26", ip: "192.168.61.3"}
	I0318 13:50:02.986307 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Reserved static IP address: 192.168.61.3
	I0318 13:50:02.986321 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Waiting for SSH to be available...
	I0318 13:50:02.986337 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | Getting to WaitForSSH function...
	I0318 13:50:02.988609 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:02.988962 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:02.988995 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:02.989209 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | Using SSH client type: external
	I0318 13:50:02.989235 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | Using SSH private key: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa (-rw-------)
	I0318 13:50:02.989272 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.3 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 13:50:02.989293 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | About to run SSH command:
	I0318 13:50:02.989306 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | exit 0
	I0318 13:50:03.112557 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | SSH cmd err, output: <nil>: 
	I0318 13:50:03.112907 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetConfigRaw
	I0318 13:50:03.113605 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetIP
	I0318 13:50:03.116140 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.116569 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:03.116599 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.116858 1157887 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/config.json ...
	I0318 13:50:03.117065 1157887 machine.go:94] provisionDockerMachine start ...
	I0318 13:50:03.117091 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:50:03.117296 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:03.119506 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.119861 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:03.119891 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.120015 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:03.120212 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.120429 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.120608 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:03.120798 1157887 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:03.120995 1157887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0318 13:50:03.121010 1157887 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 13:50:03.221645 1157887 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 13:50:03.221693 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetMachineName
	I0318 13:50:03.221990 1157887 buildroot.go:166] provisioning hostname "default-k8s-diff-port-569210"
	I0318 13:50:03.222027 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetMachineName
	I0318 13:50:03.222257 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:03.225134 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.225543 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:03.225568 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.225714 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:03.226022 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.226225 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.226400 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:03.226595 1157887 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:03.226870 1157887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0318 13:50:03.226893 1157887 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-569210 && echo "default-k8s-diff-port-569210" | sudo tee /etc/hostname
	I0318 13:50:03.350362 1157887 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-569210
	
	I0318 13:50:03.350398 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:03.353307 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.353700 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:03.353737 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.353911 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:03.354111 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.354283 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.354413 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:03.354600 1157887 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:03.354805 1157887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0318 13:50:03.354824 1157887 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-569210' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-569210/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-569210' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 13:50:03.471084 1157887 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:50:03.471120 1157887 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18429-1106816/.minikube CaCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18429-1106816/.minikube}
	I0318 13:50:03.471159 1157887 buildroot.go:174] setting up certificates
	I0318 13:50:03.471229 1157887 provision.go:84] configureAuth start
	I0318 13:50:03.471247 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetMachineName
	I0318 13:50:03.471576 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetIP
	I0318 13:50:03.474528 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.474918 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:03.474957 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.475210 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:03.477624 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.477910 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:03.477936 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.478118 1157887 provision.go:143] copyHostCerts
	I0318 13:50:03.478196 1157887 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem, removing ...
	I0318 13:50:03.478213 1157887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 13:50:03.478281 1157887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem (1078 bytes)
	I0318 13:50:03.478424 1157887 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem, removing ...
	I0318 13:50:03.478437 1157887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 13:50:03.478466 1157887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem (1123 bytes)
	I0318 13:50:03.478537 1157887 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem, removing ...
	I0318 13:50:03.478548 1157887 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 13:50:03.478571 1157887 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem (1679 bytes)
	I0318 13:50:03.478640 1157887 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-569210 san=[127.0.0.1 192.168.61.3 default-k8s-diff-port-569210 localhost minikube]
	I0318 13:50:03.600956 1157887 provision.go:177] copyRemoteCerts
	I0318 13:50:03.601028 1157887 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 13:50:03.601058 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:03.603986 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.604437 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:03.604468 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.604659 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:03.604922 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.605086 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:03.605260 1157887 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa Username:docker}
	I0318 13:50:03.688256 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0318 13:50:03.716748 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 13:50:03.744848 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 13:50:03.771601 1157887 provision.go:87] duration metric: took 300.358039ms to configureAuth
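Note: provision.go above regenerates the machine's server certificate with SANs for 127.0.0.1, 192.168.61.3, the machine name, localhost, and minikube. A reduced sketch of producing a certificate with that SAN set via crypto/x509 follows; unlike minikube, which signs with the profile's CA key, this sketch self-signs, and the validity period is made up.

// Reduced sketch of the "generating server cert ... san=[...]" step above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-569210"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the shape of the provision.go line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.3")},
		DNSNames:    []string{"default-k8s-diff-port-569210", "localhost", "minikube"},
	}
	// Self-signed for the sketch: template doubles as parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}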
	I0318 13:50:03.771631 1157887 buildroot.go:189] setting minikube options for container-runtime
	I0318 13:50:03.771893 1157887 config.go:182] Loaded profile config "default-k8s-diff-port-569210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:50:03.771992 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:03.774410 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.774725 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:03.774760 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:03.774926 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:03.775099 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.775292 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:03.775456 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:03.775642 1157887 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:03.775872 1157887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0318 13:50:03.775901 1157887 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 13:50:04.068202 1157887 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 13:50:04.068242 1157887 machine.go:97] duration metric: took 951.160051ms to provisionDockerMachine
	I0318 13:50:04.068259 1157887 start.go:293] postStartSetup for "default-k8s-diff-port-569210" (driver="kvm2")
	I0318 13:50:04.068277 1157887 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 13:50:04.068303 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:50:04.068677 1157887 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 13:50:04.068712 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:04.071619 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.071974 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:04.072002 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.072148 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:04.072354 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:04.072519 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:04.072639 1157887 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa Username:docker}
	I0318 13:50:04.157469 1157887 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 13:50:04.162629 1157887 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 13:50:04.162655 1157887 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/addons for local assets ...
	I0318 13:50:04.162719 1157887 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/files for local assets ...
	I0318 13:50:04.162810 1157887 filesync.go:149] local asset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> 11141362.pem in /etc/ssl/certs
	I0318 13:50:04.162911 1157887 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 13:50:04.173898 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:50:04.204771 1157887 start.go:296] duration metric: took 136.495479ms for postStartSetup
	I0318 13:50:04.204814 1157887 fix.go:56] duration metric: took 20.554947186s for fixHost
	I0318 13:50:04.204839 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:04.207619 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.207923 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:04.207951 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.208088 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:04.208296 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:04.208509 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:04.208657 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:04.208801 1157887 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:04.208975 1157887 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.3 22 <nil> <nil>}
	I0318 13:50:04.208988 1157887 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 13:50:04.309828 1157887 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710769804.283357411
	
	I0318 13:50:04.309861 1157887 fix.go:216] guest clock: 1710769804.283357411
	I0318 13:50:04.309871 1157887 fix.go:229] Guest: 2024-03-18 13:50:04.283357411 +0000 UTC Remote: 2024-03-18 13:50:04.204818975 +0000 UTC m=+262.583280441 (delta=78.538436ms)
	I0318 13:50:04.309898 1157887 fix.go:200] guest clock delta is within tolerance: 78.538436ms
	I0318 13:50:04.309904 1157887 start.go:83] releasing machines lock for "default-k8s-diff-port-569210", held for 20.660081187s
	I0318 13:50:04.309933 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:50:04.310247 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetIP
	I0318 13:50:04.313302 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.313747 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:04.313777 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.313956 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:50:04.314591 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:50:04.314792 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:50:04.314878 1157887 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 13:50:04.314934 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:04.315014 1157887 ssh_runner.go:195] Run: cat /version.json
	I0318 13:50:04.315059 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:50:04.318021 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.318056 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.318438 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:04.318474 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.318500 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:04.318518 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:04.318661 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:04.318763 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:50:04.318879 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:04.318962 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:50:04.319052 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:04.319110 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:50:04.319229 1157887 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa Username:docker}
	I0318 13:50:04.319286 1157887 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa Username:docker}
	I0318 13:50:04.426710 1157887 ssh_runner.go:195] Run: systemctl --version
	I0318 13:50:04.433482 1157887 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 13:50:04.590331 1157887 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 13:50:04.598896 1157887 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 13:50:04.598974 1157887 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 13:50:04.617060 1157887 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 13:50:04.617095 1157887 start.go:494] detecting cgroup driver to use...
	I0318 13:50:04.617190 1157887 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 13:50:04.633902 1157887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:50:04.648705 1157887 docker.go:217] disabling cri-docker service (if available) ...
	I0318 13:50:04.648759 1157887 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 13:50:04.665516 1157887 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 13:50:04.681326 1157887 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 13:50:04.798310 1157887 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 13:50:04.972066 1157887 docker.go:233] disabling docker service ...
	I0318 13:50:04.972133 1157887 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 13:50:04.995498 1157887 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 13:50:05.014901 1157887 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 13:50:05.158158 1157887 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 13:50:05.309791 1157887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 13:50:05.324965 1157887 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:50:05.346489 1157887 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 13:50:05.346595 1157887 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:50:05.358753 1157887 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 13:50:05.358823 1157887 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:50:05.374416 1157887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:50:05.394228 1157887 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:50:05.406975 1157887 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 13:50:05.420201 1157887 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 13:50:05.432405 1157887 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 13:50:05.432479 1157887 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 13:50:05.449386 1157887 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0318 13:50:05.461081 1157887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:50:05.607102 1157887 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 13:50:05.776152 1157887 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 13:50:05.776267 1157887 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 13:50:05.782168 1157887 start.go:562] Will wait 60s for crictl version
	I0318 13:50:05.782247 1157887 ssh_runner.go:195] Run: which crictl
	I0318 13:50:05.787932 1157887 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 13:50:05.831304 1157887 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 13:50:05.831399 1157887 ssh_runner.go:195] Run: crio --version
	I0318 13:50:05.865410 1157887 ssh_runner.go:195] Run: crio --version
	I0318 13:50:05.908406 1157887 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 13:50:05.909651 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetIP
	I0318 13:50:05.912855 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:05.913213 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:50:05.913256 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:50:05.913470 1157887 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0318 13:50:05.918362 1157887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:50:05.933755 1157887 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-569210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.28.4 ClusterName:default-k8s-diff-port-569210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.3 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiratio
n:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 13:50:05.933926 1157887 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 13:50:05.934002 1157887 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:50:05.978920 1157887 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0318 13:50:05.978998 1157887 ssh_runner.go:195] Run: which lz4
	I0318 13:50:05.983751 1157887 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0318 13:50:05.988862 1157887 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 13:50:05.988895 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0318 13:50:03.363591 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:03.864049 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:04.363310 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:04.863306 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:05.363706 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:05.863618 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:06.364183 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:06.863776 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:07.363832 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:07.863261 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:04.337631 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .Start
	I0318 13:50:04.337838 1157263 main.go:141] libmachine: (embed-certs-173036) Ensuring networks are active...
	I0318 13:50:04.338615 1157263 main.go:141] libmachine: (embed-certs-173036) Ensuring network default is active
	I0318 13:50:04.338978 1157263 main.go:141] libmachine: (embed-certs-173036) Ensuring network mk-embed-certs-173036 is active
	I0318 13:50:04.339444 1157263 main.go:141] libmachine: (embed-certs-173036) Getting domain xml...
	I0318 13:50:04.340295 1157263 main.go:141] libmachine: (embed-certs-173036) Creating domain...
	I0318 13:50:05.616437 1157263 main.go:141] libmachine: (embed-certs-173036) Waiting to get IP...
	I0318 13:50:05.617646 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:05.618096 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:05.618168 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:05.618075 1158806 retry.go:31] will retry after 234.69885ms: waiting for machine to come up
	I0318 13:50:05.854749 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:05.855365 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:05.855401 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:05.855310 1158806 retry.go:31] will retry after 324.015594ms: waiting for machine to come up
	I0318 13:50:06.181178 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:06.182089 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:06.182123 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:06.182038 1158806 retry.go:31] will retry after 456.172304ms: waiting for machine to come up
	I0318 13:50:06.639827 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:06.640288 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:06.640349 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:06.640244 1158806 retry.go:31] will retry after 561.082549ms: waiting for machine to come up
	I0318 13:50:07.203208 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:07.203798 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:07.203825 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:07.203696 1158806 retry.go:31] will retry after 633.905437ms: waiting for machine to come up
	I0318 13:50:07.839205 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:07.839760 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:07.839792 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:07.839698 1158806 retry.go:31] will retry after 629.254629ms: waiting for machine to come up
	I0318 13:50:08.470625 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:08.471073 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:08.471105 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:08.471021 1158806 retry.go:31] will retry after 771.526268ms: waiting for machine to come up
	I0318 13:50:06.709604 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:09.208197 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:08.056220 1157887 crio.go:444] duration metric: took 2.072501191s to copy over tarball
	I0318 13:50:08.056361 1157887 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 13:50:10.763501 1157887 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.707101479s)
	I0318 13:50:10.763560 1157887 crio.go:451] duration metric: took 2.707303654s to extract the tarball
	I0318 13:50:10.763570 1157887 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 13:50:10.808643 1157887 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:50:10.860178 1157887 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 13:50:10.860218 1157887 cache_images.go:84] Images are preloaded, skipping loading
	I0318 13:50:10.860229 1157887 kubeadm.go:928] updating node { 192.168.61.3 8444 v1.28.4 crio true true} ...
	I0318 13:50:10.860381 1157887 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-569210 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-569210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 13:50:10.860455 1157887 ssh_runner.go:195] Run: crio config
	I0318 13:50:10.918077 1157887 cni.go:84] Creating CNI manager for ""
	I0318 13:50:10.918109 1157887 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:50:10.918124 1157887 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 13:50:10.918154 1157887 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.3 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-569210 NodeName:default-k8s-diff-port-569210 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.3"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 13:50:10.918362 1157887 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.3
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-569210"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.3"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0318 13:50:10.918457 1157887 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 13:50:10.930573 1157887 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 13:50:10.930639 1157887 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 13:50:10.941181 1157887 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I0318 13:50:10.960048 1157887 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 13:50:10.980367 1157887 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0318 13:50:11.001607 1157887 ssh_runner.go:195] Run: grep 192.168.61.3	control-plane.minikube.internal$ /etc/hosts
	I0318 13:50:11.006363 1157887 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.3	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:50:11.020871 1157887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:50:11.164152 1157887 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:50:11.185025 1157887 certs.go:68] Setting up /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210 for IP: 192.168.61.3
	I0318 13:50:11.185060 1157887 certs.go:194] generating shared ca certs ...
	I0318 13:50:11.185096 1157887 certs.go:226] acquiring lock for ca certs: {Name:mka24dbce78b8549c3bcfe7842863cce2dcda207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:50:11.185277 1157887 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key
	I0318 13:50:11.185342 1157887 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key
	I0318 13:50:11.185356 1157887 certs.go:256] generating profile certs ...
	I0318 13:50:11.185464 1157887 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/client.key
	I0318 13:50:11.185541 1157887 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/apiserver.key.e15332a5
	I0318 13:50:11.185590 1157887 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/proxy-client.key
	I0318 13:50:11.185757 1157887 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem (1338 bytes)
	W0318 13:50:11.185799 1157887 certs.go:480] ignoring /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136_empty.pem, impossibly tiny 0 bytes
	I0318 13:50:11.185812 1157887 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 13:50:11.185841 1157887 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem (1078 bytes)
	I0318 13:50:11.185899 1157887 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem (1123 bytes)
	I0318 13:50:11.185945 1157887 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem (1679 bytes)
	I0318 13:50:11.185999 1157887 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:50:11.186853 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 13:50:11.221967 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 13:50:11.250180 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 13:50:11.287449 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 13:50:11.323521 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0318 13:50:11.360286 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 13:50:11.396947 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 13:50:11.426116 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/default-k8s-diff-port-569210/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0318 13:50:11.455183 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /usr/share/ca-certificates/11141362.pem (1708 bytes)
	I0318 13:50:11.483479 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 13:50:11.512975 1157887 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem --> /usr/share/ca-certificates/1114136.pem (1338 bytes)
	I0318 13:50:11.548393 1157887 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 13:50:11.569155 1157887 ssh_runner.go:195] Run: openssl version
	I0318 13:50:11.576084 1157887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1114136.pem && ln -fs /usr/share/ca-certificates/1114136.pem /etc/ssl/certs/1114136.pem"
	I0318 13:50:11.589110 1157887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1114136.pem
	I0318 13:50:11.594640 1157887 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:26 /usr/share/ca-certificates/1114136.pem
	I0318 13:50:11.594736 1157887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1114136.pem
	I0318 13:50:11.601473 1157887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1114136.pem /etc/ssl/certs/51391683.0"
	I0318 13:50:11.615874 1157887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11141362.pem && ln -fs /usr/share/ca-certificates/11141362.pem /etc/ssl/certs/11141362.pem"
	I0318 13:50:11.630380 1157887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11141362.pem
	I0318 13:50:11.635808 1157887 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:26 /usr/share/ca-certificates/11141362.pem
	I0318 13:50:11.635886 1157887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11141362.pem
	I0318 13:50:11.644465 1157887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11141362.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 13:50:11.661509 1157887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 13:50:08.364243 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:08.863539 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:09.364037 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:09.863621 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:10.363425 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:10.863422 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:11.363353 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:11.863485 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:12.363548 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:12.864070 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:09.243731 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:09.244146 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:09.244180 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:09.244104 1158806 retry.go:31] will retry after 1.160252016s: waiting for machine to come up
	I0318 13:50:10.405805 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:10.406270 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:10.406310 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:10.406201 1158806 retry.go:31] will retry after 1.625913099s: waiting for machine to come up
	I0318 13:50:12.033202 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:12.033674 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:12.033712 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:12.033589 1158806 retry.go:31] will retry after 1.835793865s: waiting for machine to come up
	I0318 13:50:11.211241 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:13.710211 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:11.675340 1157887 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:50:11.938009 1157887 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:50:11.938089 1157887 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:50:11.944766 1157887 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 13:50:11.957959 1157887 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:50:11.963524 1157887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 13:50:11.971678 1157887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 13:50:11.978601 1157887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 13:50:11.985403 1157887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 13:50:11.992159 1157887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 13:50:11.998620 1157887 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
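The repeated `openssl x509 -noout -in <cert> -checkend 86400` calls above ask whether each control-plane certificate expires within the next 86400 seconds (24 hours); only if a cert is close to expiry does minikube regenerate it before restarting the cluster. A minimal Go sketch of the same check (not minikube's implementation; the certificate path is taken from the log purely for illustration):

// checkend.go - illustrative sketch of `openssl x509 -checkend 86400`; path is an example from the log above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when "now + d" is past the certificate's NotAfter, i.e. it expires within d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}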
	I0318 13:50:12.005209 1157887 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-569210 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.28.4 ClusterName:default-k8s-diff-port-569210 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.3 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2
6280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:50:12.005300 1157887 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 13:50:12.005350 1157887 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:50:12.074518 1157887 cri.go:89] found id: ""
	I0318 13:50:12.074603 1157887 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 13:50:12.099031 1157887 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 13:50:12.099062 1157887 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 13:50:12.099070 1157887 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 13:50:12.099147 1157887 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 13:50:12.111133 1157887 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:50:12.112779 1157887 kubeconfig.go:125] found "default-k8s-diff-port-569210" server: "https://192.168.61.3:8444"
	I0318 13:50:12.116521 1157887 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 13:50:12.134902 1157887 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.3
	I0318 13:50:12.134964 1157887 kubeadm.go:1154] stopping kube-system containers ...
	I0318 13:50:12.135005 1157887 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 13:50:12.135086 1157887 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:50:12.190100 1157887 cri.go:89] found id: ""
	I0318 13:50:12.190182 1157887 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 13:50:12.211556 1157887 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:50:12.223095 1157887 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:50:12.223120 1157887 kubeadm.go:156] found existing configuration files:
	
	I0318 13:50:12.223173 1157887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0318 13:50:12.235709 1157887 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:50:12.235780 1157887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:50:12.248896 1157887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0318 13:50:12.260212 1157887 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:50:12.260285 1157887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:50:12.271424 1157887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0318 13:50:12.283083 1157887 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:50:12.283143 1157887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:50:12.294877 1157887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0318 13:50:12.305629 1157887 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:50:12.305692 1157887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:50:12.317395 1157887 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:50:12.328943 1157887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:12.471901 1157887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:13.400723 1157887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:13.601149 1157887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:13.677768 1157887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:13.796413 1157887 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:50:13.796558 1157887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:14.297639 1157887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:14.797236 1157887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:14.885767 1157887 api_server.go:72] duration metric: took 1.089353166s to wait for apiserver process to appear ...
	I0318 13:50:14.885801 1157887 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:50:14.885827 1157887 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0318 13:50:14.886464 1157887 api_server.go:269] stopped: https://192.168.61.3:8444/healthz: Get "https://192.168.61.3:8444/healthz": dial tcp 192.168.61.3:8444: connect: connection refused
	I0318 13:50:15.386913 1157887 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0318 13:50:13.364111 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:13.863871 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:14.363958 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:14.863570 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:15.364185 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:15.863974 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:16.364010 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:16.863484 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:17.363832 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:17.864149 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:13.871003 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:13.871443 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:13.871475 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:13.871398 1158806 retry.go:31] will retry after 2.53403994s: waiting for machine to come up
	I0318 13:50:16.407271 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:16.407728 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:16.407775 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:16.407708 1158806 retry.go:31] will retry after 2.371916928s: waiting for machine to come up
	I0318 13:50:18.781468 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:18.781866 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:18.781898 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:18.781809 1158806 retry.go:31] will retry after 3.250042198s: waiting for machine to come up
	I0318 13:50:17.204788 1157887 api_server.go:279] https://192.168.61.3:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 13:50:17.204828 1157887 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 13:50:17.204848 1157887 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0318 13:50:17.235957 1157887 api_server.go:279] https://192.168.61.3:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 13:50:17.236000 1157887 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 13:50:17.386349 1157887 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0318 13:50:17.393185 1157887 api_server.go:279] https://192.168.61.3:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:50:17.393220 1157887 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:50:17.886583 1157887 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0318 13:50:17.892087 1157887 api_server.go:279] https://192.168.61.3:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:50:17.892122 1157887 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:50:18.386820 1157887 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0318 13:50:18.406609 1157887 api_server.go:279] https://192.168.61.3:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:50:18.406658 1157887 api_server.go:103] status: https://192.168.61.3:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:50:18.886458 1157887 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0318 13:50:18.896097 1157887 api_server.go:279] https://192.168.61.3:8444/healthz returned 200:
	ok
	I0318 13:50:18.905565 1157887 api_server.go:141] control plane version: v1.28.4
	I0318 13:50:18.905603 1157887 api_server.go:131] duration metric: took 4.019792975s to wait for apiserver health ...
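The 500 responses above are the apiserver's verbose healthz output: every check passes except poststarthook/rbac/bootstrap-roles, and once that hook finishes the endpoint returns 200 and the wait completes after roughly 4s. The same per-check breakdown can be pulled by hand; a minimal sketch, assuming anonymous access to /healthz is still enabled (the upstream default) and using the address this run polls:

    # per-check health detail, as in the log above
    curl -sk "https://192.168.61.3:8444/healthz?verbose"
    # a still-failing check can be excluded while investigating
    curl -sk "https://192.168.61.3:8444/healthz?verbose&exclude=poststarthook/rbac/bootstrap-roles"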
	I0318 13:50:18.905615 1157887 cni.go:84] Creating CNI manager for ""
	I0318 13:50:18.905624 1157887 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:50:18.907258 1157887 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 13:50:15.711910 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:18.209648 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:18.909133 1157887 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 13:50:18.944457 1157887 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
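The 457-byte file pushed to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration announced at 13:50:18.907. Its exact contents are not in the log; the heredoc below is only an illustrative sketch of a typical bridge + portmap conflist for the 10.244.0.0/16 pod CIDR, not the file minikube generated:

    cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portBindings": true } }
      ]
    }
    EOF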
	I0318 13:50:18.973831 1157887 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 13:50:18.984400 1157887 system_pods.go:59] 8 kube-system pods found
	I0318 13:50:18.984436 1157887 system_pods.go:61] "coredns-5dd5756b68-hwsz5" [0a91f20c-3d3b-415c-b709-7898c606d830] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 13:50:18.984447 1157887 system_pods.go:61] "etcd-default-k8s-diff-port-569210" [64925324-9666-49ab-b849-ad9b7ce54891] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 13:50:18.984456 1157887 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-569210" [8409a63f-fbac-4bf9-b54b-5ac267a58206] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 13:50:18.984465 1157887 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-569210" [a2d7b983-c4aa-4c32-9391-babe90b0f102] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 13:50:18.984470 1157887 system_pods.go:61] "kube-proxy-v59ks" [39a4e73c-319d-4093-8781-ca7a1a48e005] Running
	I0318 13:50:18.984477 1157887 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-569210" [f24baa89-e33d-42ca-8f83-17c76a4cedcb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 13:50:18.984488 1157887 system_pods.go:61] "metrics-server-57f55c9bc5-2sb4m" [f3e533a7-9666-4b79-b9a9-26222422f242] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:50:18.984496 1157887 system_pods.go:61] "storage-provisioner" [864d0bb2-cbca-41ae-b9ec-89aced62dd08] Running
	I0318 13:50:18.984505 1157887 system_pods.go:74] duration metric: took 10.646849ms to wait for pod list to return data ...
	I0318 13:50:18.984519 1157887 node_conditions.go:102] verifying NodePressure condition ...
	I0318 13:50:18.989173 1157887 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:50:18.989201 1157887 node_conditions.go:123] node cpu capacity is 2
	I0318 13:50:18.989213 1157887 node_conditions.go:105] duration metric: took 4.685756ms to run NodePressure ...
	I0318 13:50:18.989231 1157887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:19.229166 1157887 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 13:50:19.237757 1157887 kubeadm.go:733] kubelet initialised
	I0318 13:50:19.237787 1157887 kubeadm.go:734] duration metric: took 8.591388ms waiting for restarted kubelet to initialise ...
	I0318 13:50:19.237797 1157887 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:50:19.243530 1157887 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-hwsz5" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:19.253925 1157887 pod_ready.go:97] node "default-k8s-diff-port-569210" hosting pod "coredns-5dd5756b68-hwsz5" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-569210" has status "Ready":"False"
	I0318 13:50:19.253957 1157887 pod_ready.go:81] duration metric: took 10.403116ms for pod "coredns-5dd5756b68-hwsz5" in "kube-system" namespace to be "Ready" ...
	E0318 13:50:19.253969 1157887 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-569210" hosting pod "coredns-5dd5756b68-hwsz5" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-569210" has status "Ready":"False"
	I0318 13:50:19.253978 1157887 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:19.265167 1157887 pod_ready.go:97] node "default-k8s-diff-port-569210" hosting pod "etcd-default-k8s-diff-port-569210" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-569210" has status "Ready":"False"
	I0318 13:50:19.265189 1157887 pod_ready.go:81] duration metric: took 11.202545ms for pod "etcd-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	E0318 13:50:19.265200 1157887 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-569210" hosting pod "etcd-default-k8s-diff-port-569210" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-569210" has status "Ready":"False"
	I0318 13:50:19.265206 1157887 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:19.273558 1157887 pod_ready.go:97] node "default-k8s-diff-port-569210" hosting pod "kube-apiserver-default-k8s-diff-port-569210" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-569210" has status "Ready":"False"
	I0318 13:50:19.273589 1157887 pod_ready.go:81] duration metric: took 8.37478ms for pod "kube-apiserver-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	E0318 13:50:19.273603 1157887 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-569210" hosting pod "kube-apiserver-default-k8s-diff-port-569210" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-569210" has status "Ready":"False"
	I0318 13:50:19.273615 1157887 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
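Each "(skipping!)" line above is pod_ready.go declining to evaluate a pod because its node still reports Ready=False; the pods themselves are Running. The same gate can be checked directly with kubectl, assuming the usual convention that the kubeconfig context is named after the profile:

    # node Ready condition (the gate failing in the log)
    kubectl --context default-k8s-diff-port-569210 get node default-k8s-diff-port-569210 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'; echo
    # the same system pods the waiter is iterating over
    kubectl --context default-k8s-diff-port-569210 -n kube-system get pods -o wide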
	I0318 13:50:21.280970 1157887 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:18.363366 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:18.863782 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:19.363987 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:19.863437 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:20.364050 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:20.863961 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:21.364126 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:21.863264 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:22.363519 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:22.863814 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
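Process 1157708 is polling every 500 ms for a kube-apiserver whose full command line matches the given pattern, and it moves on only when pgrep exits 0. The equivalent wait loop, run inside the guest, is simply:

    # block until an apiserver launched by minikube shows up in the process table
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 0.5
    done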
	I0318 13:50:22.033540 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:22.034056 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | unable to find current IP address of domain embed-certs-173036 in network mk-embed-certs-173036
	I0318 13:50:22.034084 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | I0318 13:50:22.034001 1158806 retry.go:31] will retry after 5.297432528s: waiting for machine to come up
	I0318 13:50:20.708189 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:22.708573 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:24.708632 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:23.281625 1157887 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:25.780754 1157887 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:23.364019 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:23.864134 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:24.363510 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:24.863263 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:25.364027 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:25.863203 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:26.364219 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:26.863262 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:27.363889 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:27.864113 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:27.335390 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.335875 1157263 main.go:141] libmachine: (embed-certs-173036) Found IP for machine: 192.168.50.191
	I0318 13:50:27.335908 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has current primary IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.335918 1157263 main.go:141] libmachine: (embed-certs-173036) Reserving static IP address...
	I0318 13:50:27.336311 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "embed-certs-173036", mac: "52:54:00:e1:4f:b1", ip: "192.168.50.191"} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:27.336360 1157263 main.go:141] libmachine: (embed-certs-173036) Reserved static IP address: 192.168.50.191
	I0318 13:50:27.336380 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | skip adding static IP to network mk-embed-certs-173036 - found existing host DHCP lease matching {name: "embed-certs-173036", mac: "52:54:00:e1:4f:b1", ip: "192.168.50.191"}
	I0318 13:50:27.336394 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | Getting to WaitForSSH function...
	I0318 13:50:27.336406 1157263 main.go:141] libmachine: (embed-certs-173036) Waiting for SSH to be available...
	I0318 13:50:27.338627 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.338948 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:27.338983 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.339087 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | Using SSH client type: external
	I0318 13:50:27.339177 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | Using SSH private key: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa (-rw-------)
	I0318 13:50:27.339212 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.191 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0318 13:50:27.339227 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | About to run SSH command:
	I0318 13:50:27.339244 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | exit 0
	I0318 13:50:27.468468 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | SSH cmd err, output: <nil>: 
	I0318 13:50:27.468936 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetConfigRaw
	I0318 13:50:27.469699 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetIP
	I0318 13:50:27.472098 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.472422 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:27.472446 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.472714 1157263 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036/config.json ...
	I0318 13:50:27.472955 1157263 machine.go:94] provisionDockerMachine start ...
	I0318 13:50:27.472982 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:50:27.473196 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:27.475516 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.475808 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:27.475831 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.476041 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:27.476252 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:27.476414 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:27.476537 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:27.476719 1157263 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:27.476899 1157263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.191 22 <nil> <nil>}
	I0318 13:50:27.476909 1157263 main.go:141] libmachine: About to run SSH command:
	hostname
	I0318 13:50:27.589501 1157263 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0318 13:50:27.589532 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetMachineName
	I0318 13:50:27.589828 1157263 buildroot.go:166] provisioning hostname "embed-certs-173036"
	I0318 13:50:27.589862 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetMachineName
	I0318 13:50:27.590068 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:27.592650 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.593005 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:27.593035 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.593186 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:27.593375 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:27.593546 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:27.593713 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:27.593883 1157263 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:27.594058 1157263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.191 22 <nil> <nil>}
	I0318 13:50:27.594073 1157263 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-173036 && echo "embed-certs-173036" | sudo tee /etc/hostname
	I0318 13:50:27.730406 1157263 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-173036
	
	I0318 13:50:27.730437 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:27.733420 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.733857 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:27.733890 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.734058 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:27.734271 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:27.734475 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:27.734609 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:27.734764 1157263 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:27.734943 1157263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.191 22 <nil> <nil>}
	I0318 13:50:27.734960 1157263 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-173036' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-173036/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-173036' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0318 13:50:27.860625 1157263 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0318 13:50:27.860679 1157263 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18429-1106816/.minikube CaCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18429-1106816/.minikube}
	I0318 13:50:27.860777 1157263 buildroot.go:174] setting up certificates
	I0318 13:50:27.860790 1157263 provision.go:84] configureAuth start
	I0318 13:50:27.860810 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetMachineName
	I0318 13:50:27.861112 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetIP
	I0318 13:50:27.864215 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.864667 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:27.864703 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.864956 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:27.867381 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.867690 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:27.867730 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:27.867893 1157263 provision.go:143] copyHostCerts
	I0318 13:50:27.867963 1157263 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem, removing ...
	I0318 13:50:27.867977 1157263 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem
	I0318 13:50:27.868048 1157263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.pem (1078 bytes)
	I0318 13:50:27.868183 1157263 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem, removing ...
	I0318 13:50:27.868198 1157263 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem
	I0318 13:50:27.868231 1157263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/cert.pem (1123 bytes)
	I0318 13:50:27.868307 1157263 exec_runner.go:144] found /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem, removing ...
	I0318 13:50:27.868318 1157263 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem
	I0318 13:50:27.868372 1157263 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18429-1106816/.minikube/key.pem (1679 bytes)
	I0318 13:50:27.868451 1157263 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem org=jenkins.embed-certs-173036 san=[127.0.0.1 192.168.50.191 embed-certs-173036 localhost minikube]
	I0318 13:50:28.001671 1157263 provision.go:177] copyRemoteCerts
	I0318 13:50:28.001742 1157263 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0318 13:50:28.001773 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:28.004389 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.004746 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:28.004777 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.005021 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:28.005214 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:28.005393 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:28.005558 1157263 sshutil.go:53] new ssh client: &{IP:192.168.50.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa Username:docker}
	I0318 13:50:28.095871 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0318 13:50:28.127356 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0318 13:50:28.157301 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0318 13:50:28.186185 1157263 provision.go:87] duration metric: took 325.374328ms to configureAuth
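provision.go generated a server certificate at 13:50:28.001 with the SANs listed (127.0.0.1, 192.168.50.191, embed-certs-173036, localhost, minikube) and copied it to /etc/docker/server.pem in the guest. Confirming that the deployed certificate really carries those SANs is a quick manual check (a sketch, not part of the test run):

    minikube -p embed-certs-173036 ssh -- \
      "sudo openssl x509 -in /etc/docker/server.pem -noout -text" | grep -A1 'Subject Alternative Name'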
	I0318 13:50:28.186217 1157263 buildroot.go:189] setting minikube options for container-runtime
	I0318 13:50:28.186424 1157263 config.go:182] Loaded profile config "embed-certs-173036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:50:28.186529 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:28.189135 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.189532 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:28.189564 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.189719 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:28.189933 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:28.190127 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:28.190335 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:28.190492 1157263 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:28.190654 1157263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.191 22 <nil> <nil>}
	I0318 13:50:28.190668 1157263 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0318 13:50:28.473836 1157263 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0318 13:50:28.473875 1157263 machine.go:97] duration metric: took 1.000902962s to provisionDockerMachine
	I0318 13:50:28.473887 1157263 start.go:293] postStartSetup for "embed-certs-173036" (driver="kvm2")
	I0318 13:50:28.473898 1157263 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0318 13:50:28.473914 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:50:28.474270 1157263 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0318 13:50:28.474307 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:28.477165 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.477571 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:28.477619 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.477756 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:28.477966 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:28.478135 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:28.478296 1157263 sshutil.go:53] new ssh client: &{IP:192.168.50.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa Username:docker}
	I0318 13:50:28.568988 1157263 ssh_runner.go:195] Run: cat /etc/os-release
	I0318 13:50:28.573759 1157263 info.go:137] Remote host: Buildroot 2023.02.9
	I0318 13:50:28.573782 1157263 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/addons for local assets ...
	I0318 13:50:28.573839 1157263 filesync.go:126] Scanning /home/jenkins/minikube-integration/18429-1106816/.minikube/files for local assets ...
	I0318 13:50:28.573909 1157263 filesync.go:149] local asset: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem -> 11141362.pem in /etc/ssl/certs
	I0318 13:50:28.573989 1157263 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0318 13:50:28.584049 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:50:28.610999 1157263 start.go:296] duration metric: took 137.09711ms for postStartSetup
	I0318 13:50:28.611043 1157263 fix.go:56] duration metric: took 24.300980779s for fixHost
	I0318 13:50:28.611066 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:28.614123 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.614582 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:28.614628 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.614795 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:28.614999 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:28.615124 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:28.615255 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:28.615427 1157263 main.go:141] libmachine: Using SSH client type: native
	I0318 13:50:28.615617 1157263 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.191 22 <nil> <nil>}
	I0318 13:50:28.615631 1157263 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0318 13:50:28.729856 1157263 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710769828.678644307
	
	I0318 13:50:28.729894 1157263 fix.go:216] guest clock: 1710769828.678644307
	I0318 13:50:28.729913 1157263 fix.go:229] Guest: 2024-03-18 13:50:28.678644307 +0000 UTC Remote: 2024-03-18 13:50:28.611048079 +0000 UTC m=+364.845703282 (delta=67.596228ms)
	I0318 13:50:28.729932 1157263 fix.go:200] guest clock delta is within tolerance: 67.596228ms
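fix.go accepts the machine because the guest's `date +%s.%N` differs from the host clock by only about 68 ms. A rough manual version of the same comparison, assuming the profile name from this run and leaning on awk to tolerate any trailing carriage return in the SSH output:

    guest=$(minikube -p embed-certs-173036 ssh -- date +%s.%N)
    host=$(date +%s.%N)
    awk -v h="$host" -v g="$guest" 'BEGIN { printf "guest/host clock delta: %.3f s\n", h - g }'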
	I0318 13:50:28.729937 1157263 start.go:83] releasing machines lock for "embed-certs-173036", held for 24.419922158s
	I0318 13:50:28.729958 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:50:28.730241 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetIP
	I0318 13:50:28.732831 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.733196 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:28.733249 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.733406 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:50:28.733875 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:50:28.734066 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:50:28.734172 1157263 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0318 13:50:28.734248 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:28.734330 1157263 ssh_runner.go:195] Run: cat /version.json
	I0318 13:50:28.734376 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:50:28.737014 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.737200 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.737444 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:28.737470 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.737611 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:28.737694 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:28.737721 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:28.737918 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:50:28.737926 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:28.738117 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:28.738195 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:50:28.738292 1157263 sshutil.go:53] new ssh client: &{IP:192.168.50.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa Username:docker}
	I0318 13:50:28.738357 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:50:28.738466 1157263 sshutil.go:53] new ssh client: &{IP:192.168.50.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa Username:docker}
	I0318 13:50:26.708824 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:29.209974 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:28.818695 1157263 ssh_runner.go:195] Run: systemctl --version
	I0318 13:50:28.844173 1157263 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0318 13:50:28.995017 1157263 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0318 13:50:29.002150 1157263 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0318 13:50:29.002251 1157263 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0318 13:50:29.021165 1157263 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0318 13:50:29.021200 1157263 start.go:494] detecting cgroup driver to use...
	I0318 13:50:29.021286 1157263 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0318 13:50:29.039060 1157263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0318 13:50:29.053451 1157263 docker.go:217] disabling cri-docker service (if available) ...
	I0318 13:50:29.053521 1157263 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0318 13:50:29.069721 1157263 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0318 13:50:29.085285 1157263 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0318 13:50:29.201273 1157263 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0318 13:50:29.356314 1157263 docker.go:233] disabling docker service ...
	I0318 13:50:29.356406 1157263 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0318 13:50:29.374159 1157263 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0318 13:50:29.390280 1157263 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0318 13:50:29.542126 1157263 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0318 13:50:29.692068 1157263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0318 13:50:29.707760 1157263 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0318 13:50:29.735684 1157263 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0318 13:50:29.735753 1157263 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:50:29.751291 1157263 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0318 13:50:29.751365 1157263 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:50:29.763159 1157263 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:50:29.774837 1157263 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0318 13:50:29.787142 1157263 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0318 13:50:29.799773 1157263 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0318 13:50:29.810620 1157263 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0318 13:50:29.810691 1157263 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0318 13:50:29.826816 1157263 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
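The sysctl probe at 13:50:29.799 exits 255 only because br_netfilter is not loaded yet, so /proc/sys/net/bridge does not exist; loading the module and enabling IPv4 forwarding, as the next two commands do, are the standard kubeadm/CRI prerequisites. Verifying the end state by hand:

    lsmod | grep br_netfilter                        # module now loaded
    sudo sysctl net.bridge.bridge-nf-call-iptables   # kubeadm expects this to be 1
    cat /proc/sys/net/ipv4/ip_forward                # should print 1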
	I0318 13:50:29.842059 1157263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:50:29.985531 1157263 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0318 13:50:30.147122 1157263 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0318 13:50:30.147191 1157263 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0318 13:50:30.152406 1157263 start.go:562] Will wait 60s for crictl version
	I0318 13:50:30.152468 1157263 ssh_runner.go:195] Run: which crictl
	I0318 13:50:30.157019 1157263 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0318 13:50:30.199810 1157263 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0318 13:50:30.199889 1157263 ssh_runner.go:195] Run: crio --version
	I0318 13:50:30.232028 1157263 ssh_runner.go:195] Run: crio --version
	I0318 13:50:30.270484 1157263 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0318 13:50:27.781584 1157887 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:29.795969 1157887 pod_ready.go:102] pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:31.282868 1157887 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:31.282899 1157887 pod_ready.go:81] duration metric: took 12.009270978s for pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:31.282910 1157887 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-v59ks" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:31.290886 1157887 pod_ready.go:92] pod "kube-proxy-v59ks" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:31.290917 1157887 pod_ready.go:81] duration metric: took 7.99936ms for pod "kube-proxy-v59ks" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:31.290931 1157887 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:31.300197 1157887 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:31.300235 1157887 pod_ready.go:81] duration metric: took 9.294232ms for pod "kube-scheduler-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:31.300254 1157887 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:28.364069 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:28.863405 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:29.363996 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:29.863574 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:30.363749 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:30.863564 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:31.363250 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:31.863320 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:32.363894 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:32.864166 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:30.271939 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetIP
	I0318 13:50:30.275084 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:30.275682 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:50:30.275728 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:50:30.276045 1157263 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0318 13:50:30.282421 1157263 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0318 13:50:30.299013 1157263 kubeadm.go:877] updating cluster {Name:embed-certs-173036 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.28.4 ClusterName:embed-certs-173036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.191 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0318 13:50:30.299280 1157263 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 13:50:30.299364 1157263 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:50:30.349617 1157263 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0318 13:50:30.349720 1157263 ssh_runner.go:195] Run: which lz4
	I0318 13:50:30.354659 1157263 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0318 13:50:30.359861 1157263 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0318 13:50:30.359903 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0318 13:50:32.362707 1157263 crio.go:444] duration metric: took 2.008087158s to copy over tarball
	I0318 13:50:32.362796 1157263 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0318 13:50:31.210766 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:33.709661 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:33.308081 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:35.309291 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:33.363425 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:33.864021 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:34.363963 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:34.864011 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:35.364122 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:35.863559 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:36.364154 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:36.863814 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:37.364232 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:37.863934 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:35.265803 1157263 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.902966349s)
	I0318 13:50:35.265827 1157263 crio.go:451] duration metric: took 2.903086385s to extract the tarball
	I0318 13:50:35.265835 1157263 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0318 13:50:35.313875 1157263 ssh_runner.go:195] Run: sudo crictl images --output json
	I0318 13:50:35.378361 1157263 crio.go:496] all images are preloaded for cri-o runtime.
	I0318 13:50:35.378392 1157263 cache_images.go:84] Images are preloaded, skipping loading
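After the preload tarball is unpacked into /var, the second `crictl images` pass (13:50:35.313) finds everything and image loading is skipped. An equivalent spot check from the host, using the image the earlier "not preloaded" message singled out:

    minikube -p embed-certs-173036 ssh -- \
      "sudo crictl images | grep registry.k8s.io/kube-apiserver"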
	I0318 13:50:35.378408 1157263 kubeadm.go:928] updating node { 192.168.50.191 8443 v1.28.4 crio true true} ...
	I0318 13:50:35.378551 1157263 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-173036 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.191
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-173036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0318 13:50:35.378648 1157263 ssh_runner.go:195] Run: crio config
	I0318 13:50:35.443472 1157263 cni.go:84] Creating CNI manager for ""
	I0318 13:50:35.443501 1157263 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:50:35.443520 1157263 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0318 13:50:35.443551 1157263 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.191 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-173036 NodeName:embed-certs-173036 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.191"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.191 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0318 13:50:35.443730 1157263 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.191
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-173036"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.191
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.191"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
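The kubeadm config dumped above is a single multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration separated by ---) that later gets copied to /var/tmp/minikube/kubeadm.yaml. A minimal Go sketch of how such a file can be walked document by document, assuming gopkg.in/yaml.v3 and the path shown in the log; it only prints each document's apiVersion and kind:

    // kubeadm_docs.go: list the kind/apiVersion of each document in a
    // multi-document kubeadm config like the one rendered above.
    package main

    import (
    	"errors"
    	"fmt"
    	"io"
    	"log"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    type header struct {
    	APIVersion string `yaml:"apiVersion"`
    	Kind       string `yaml:"kind"`
    }

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path taken from the log above
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for {
    		var h header
    		if err := dec.Decode(&h); err != nil {
    			if errors.Is(err, io.EOF) {
    				break // no more documents in the file
    			}
    			log.Fatal(err)
    		}
    		fmt.Printf("%s / %s\n", h.APIVersion, h.Kind)
    	}
    }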
	
	I0318 13:50:35.443809 1157263 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0318 13:50:35.455284 1157263 binaries.go:44] Found k8s binaries, skipping transfer
	I0318 13:50:35.455352 1157263 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0318 13:50:35.465886 1157263 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0318 13:50:35.487345 1157263 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0318 13:50:35.507361 1157263 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0318 13:50:35.528055 1157263 ssh_runner.go:195] Run: grep 192.168.50.191	control-plane.minikube.internal$ /etc/hosts
	I0318 13:50:35.533287 1157263 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.191	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
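The two commands above make the control-plane.minikube.internal mapping idempotent: grep checks for an existing entry, and the bash one-liner rewrites /etc/hosts with any stale mapping filtered out before appending the fresh one. A rough Go equivalent of that rewrite, assuming a local copy of the hosts file so the sketch can run without root:

    // hosts_entry.go: ensure a hosts file maps control-plane.minikube.internal
    // to a given IP, mirroring the grep/rewrite pattern in the log above.
    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"strings"
    )

    func ensureHostsEntry(path, ip, host string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		// drop any previous mapping for this host name
    		if strings.HasSuffix(line, "\t"+host) {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+host)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	// "./hosts" is a stand-in for /etc/hosts so the sketch runs unprivileged.
    	if err := ensureHostsEntry("./hosts", "192.168.50.191", "control-plane.minikube.internal"); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("hosts entry ensured")
    }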
	I0318 13:50:35.548295 1157263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:50:35.684165 1157263 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:50:35.703884 1157263 certs.go:68] Setting up /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036 for IP: 192.168.50.191
	I0318 13:50:35.703910 1157263 certs.go:194] generating shared ca certs ...
	I0318 13:50:35.703927 1157263 certs.go:226] acquiring lock for ca certs: {Name:mka24dbce78b8549c3bcfe7842863cce2dcda207 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:50:35.704117 1157263 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key
	I0318 13:50:35.704186 1157263 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key
	I0318 13:50:35.704200 1157263 certs.go:256] generating profile certs ...
	I0318 13:50:35.704292 1157263 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036/client.key
	I0318 13:50:35.704406 1157263 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036/apiserver.key.527b6b30
	I0318 13:50:35.704472 1157263 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036/proxy-client.key
	I0318 13:50:35.704637 1157263 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem (1338 bytes)
	W0318 13:50:35.704680 1157263 certs.go:480] ignoring /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136_empty.pem, impossibly tiny 0 bytes
	I0318 13:50:35.704694 1157263 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca-key.pem (1675 bytes)
	I0318 13:50:35.704729 1157263 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/ca.pem (1078 bytes)
	I0318 13:50:35.704763 1157263 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/cert.pem (1123 bytes)
	I0318 13:50:35.704796 1157263 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/key.pem (1679 bytes)
	I0318 13:50:35.704857 1157263 certs.go:484] found cert: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem (1708 bytes)
	I0318 13:50:35.705836 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0318 13:50:35.768912 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0318 13:50:35.830564 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0318 13:50:35.877813 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0318 13:50:35.916756 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0318 13:50:35.948397 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0318 13:50:35.980450 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0318 13:50:36.009626 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/embed-certs-173036/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0318 13:50:36.040155 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0318 13:50:36.068885 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/certs/1114136.pem --> /usr/share/ca-certificates/1114136.pem (1338 bytes)
	I0318 13:50:36.098638 1157263 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/ssl/certs/11141362.pem --> /usr/share/ca-certificates/11141362.pem (1708 bytes)
	I0318 13:50:36.128423 1157263 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0318 13:50:36.149584 1157263 ssh_runner.go:195] Run: openssl version
	I0318 13:50:36.156347 1157263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0318 13:50:36.169729 1157263 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:50:36.175367 1157263 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 18 12:17 /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:50:36.175438 1157263 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0318 13:50:36.181995 1157263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0318 13:50:36.193987 1157263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1114136.pem && ln -fs /usr/share/ca-certificates/1114136.pem /etc/ssl/certs/1114136.pem"
	I0318 13:50:36.206444 1157263 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1114136.pem
	I0318 13:50:36.212355 1157263 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 18 12:26 /usr/share/ca-certificates/1114136.pem
	I0318 13:50:36.212442 1157263 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1114136.pem
	I0318 13:50:36.219042 1157263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1114136.pem /etc/ssl/certs/51391683.0"
	I0318 13:50:36.231882 1157263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11141362.pem && ln -fs /usr/share/ca-certificates/11141362.pem /etc/ssl/certs/11141362.pem"
	I0318 13:50:36.244590 1157263 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11141362.pem
	I0318 13:50:36.250443 1157263 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 18 12:26 /usr/share/ca-certificates/11141362.pem
	I0318 13:50:36.250511 1157263 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11141362.pem
	I0318 13:50:36.257713 1157263 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11141362.pem /etc/ssl/certs/3ec20f2e.0"
	I0318 13:50:36.271026 1157263 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0318 13:50:36.276902 1157263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0318 13:50:36.285465 1157263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0318 13:50:36.294274 1157263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0318 13:50:36.302415 1157263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0318 13:50:36.310867 1157263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0318 13:50:36.318931 1157263 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
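Each of the openssl x509 -checkend 86400 runs above asks a single question: does the certificate expire within the next 24 hours (86400 seconds)? The same check in Go, using only crypto/x509 and encoding/pem; the file name is a stand-in for the /var/lib/minikube/certs paths in the log:

    // cert_checkend.go: report whether a PEM certificate expires within 24h,
    // the question answered by `openssl x509 -checkend 86400` in the log above.
    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("apiserver.crt") // stand-in for /var/lib/minikube/certs/apiserver.crt
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		log.Fatal("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	deadline := time.Now().Add(86400 * time.Second)
    	if cert.NotAfter.Before(deadline) {
    		fmt.Printf("certificate expires within 24h (NotAfter=%s)\n", cert.NotAfter)
    	} else {
    		fmt.Println("certificate is valid for at least another 24h")
    	}
    }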
	I0318 13:50:36.327627 1157263 kubeadm.go:391] StartCluster: {Name:embed-certs-173036 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-173036 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.191 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 13:50:36.327781 1157263 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0318 13:50:36.327843 1157263 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:50:36.376644 1157263 cri.go:89] found id: ""
	I0318 13:50:36.376741 1157263 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0318 13:50:36.389506 1157263 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0318 13:50:36.389528 1157263 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0318 13:50:36.389533 1157263 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0318 13:50:36.389640 1157263 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0318 13:50:36.401386 1157263 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:50:36.402631 1157263 kubeconfig.go:125] found "embed-certs-173036" server: "https://192.168.50.191:8443"
	I0318 13:50:36.404833 1157263 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0318 13:50:36.416975 1157263 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.191
	I0318 13:50:36.417026 1157263 kubeadm.go:1154] stopping kube-system containers ...
	I0318 13:50:36.417041 1157263 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0318 13:50:36.417106 1157263 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0318 13:50:36.458072 1157263 cri.go:89] found id: ""
	I0318 13:50:36.458162 1157263 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0318 13:50:36.476557 1157263 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:50:36.487765 1157263 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:50:36.487791 1157263 kubeadm.go:156] found existing configuration files:
	
	I0318 13:50:36.487857 1157263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:50:36.498903 1157263 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:50:36.498982 1157263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:50:36.510205 1157263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:50:36.520423 1157263 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:50:36.520476 1157263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:50:36.531864 1157263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:50:36.542058 1157263 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:50:36.542131 1157263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:50:36.552807 1157263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:50:36.562840 1157263 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:50:36.562915 1157263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:50:36.573581 1157263 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:50:36.583760 1157263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:36.719884 1157263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:37.681007 1157263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:37.914386 1157263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:37.993967 1157263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
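The restart path above does not run a full kubeadm init; it replays individual phases against the existing config, in the order certs, kubeconfig, kubelet-start, control-plane, etcd. A compact sketch of that sequence, assuming kubeadm is on PATH and run with sufficient privileges (the log wraps each call in sudo env PATH=... over SSH):

    // restart_phases.go: replay the kubeadm init phases used by the restart
    // flow above against an existing config file.
    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	phases := [][]string{
    		{"init", "phase", "certs", "all"},
    		{"init", "phase", "kubeconfig", "all"},
    		{"init", "phase", "kubelet-start"},
    		{"init", "phase", "control-plane", "all"},
    		{"init", "phase", "etcd", "local"},
    	}
    	for _, phase := range phases {
    		args := append(phase, "--config", "/var/tmp/minikube/kubeadm.yaml")
    		cmd := exec.Command("kubeadm", args...)
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		if err := cmd.Run(); err != nil {
    			log.Fatalf("kubeadm %v failed: %v", phase, err)
    		}
    	}
    }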
	I0318 13:50:38.101144 1157263 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:50:38.101261 1157263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:38.602138 1157263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:35.711725 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:38.207993 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:37.807508 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:39.809153 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:38.363994 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:38.863278 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:39.363665 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:39.863948 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:40.364081 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:40.864124 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:41.363964 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:41.863593 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:42.363750 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:42.864002 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:39.102040 1157263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:39.212769 1157263 api_server.go:72] duration metric: took 1.111626123s to wait for apiserver process to appear ...
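api_server.go first waits for the kube-apiserver process itself, re-running sudo pgrep -xnf kube-apiserver.*minikube.* roughly every 500ms until it matches, which is what the long runs of identical pgrep lines above are. A local stand-in for that loop, assuming pgrep is available (minikube executes it over SSH on the guest):

    // wait_apiserver.go: poll pgrep until a kube-apiserver process whose full
    // command line matches the minikube pattern appears, every ~500ms.
    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"time"
    )

    func main() {
    	pattern := "kube-apiserver.*minikube.*" // same pattern the log passes to pgrep -xnf
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("pgrep", "-xnf", pattern).Output()
    		if err == nil && len(out) > 0 {
    			fmt.Printf("apiserver process appeared, pid: %s", out)
    			return
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
    	}
    	log.Fatal("timed out waiting for the apiserver process to appear")
    }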
	I0318 13:50:39.212807 1157263 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:50:39.212840 1157263 api_server.go:253] Checking apiserver healthz at https://192.168.50.191:8443/healthz ...
	I0318 13:50:39.213446 1157263 api_server.go:269] stopped: https://192.168.50.191:8443/healthz: Get "https://192.168.50.191:8443/healthz": dial tcp 192.168.50.191:8443: connect: connection refused
	I0318 13:50:39.713482 1157263 api_server.go:253] Checking apiserver healthz at https://192.168.50.191:8443/healthz ...
	I0318 13:50:42.646306 1157263 api_server.go:279] https://192.168.50.191:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 13:50:42.646352 1157263 api_server.go:103] status: https://192.168.50.191:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 13:50:42.646370 1157263 api_server.go:253] Checking apiserver healthz at https://192.168.50.191:8443/healthz ...
	I0318 13:50:42.691920 1157263 api_server.go:279] https://192.168.50.191:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0318 13:50:42.691953 1157263 api_server.go:103] status: https://192.168.50.191:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0318 13:50:42.713082 1157263 api_server.go:253] Checking apiserver healthz at https://192.168.50.191:8443/healthz ...
	I0318 13:50:42.770065 1157263 api_server.go:279] https://192.168.50.191:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:50:42.770101 1157263 api_server.go:103] status: https://192.168.50.191:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:50:43.213524 1157263 api_server.go:253] Checking apiserver healthz at https://192.168.50.191:8443/healthz ...
	I0318 13:50:43.224669 1157263 api_server.go:279] https://192.168.50.191:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:50:43.224710 1157263 api_server.go:103] status: https://192.168.50.191:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:50:43.712987 1157263 api_server.go:253] Checking apiserver healthz at https://192.168.50.191:8443/healthz ...
	I0318 13:50:43.718490 1157263 api_server.go:279] https://192.168.50.191:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0318 13:50:43.718533 1157263 api_server.go:103] status: https://192.168.50.191:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0318 13:50:44.213026 1157263 api_server.go:253] Checking apiserver healthz at https://192.168.50.191:8443/healthz ...
	I0318 13:50:44.217876 1157263 api_server.go:279] https://192.168.50.191:8443/healthz returned 200:
	ok
	I0318 13:50:44.225562 1157263 api_server.go:141] control plane version: v1.28.4
	I0318 13:50:44.225588 1157263 api_server.go:131] duration metric: took 5.012774227s to wait for apiserver health ...
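After the process appears, the same component polls https://<node>:8443/healthz until it returns 200, treating the 403 anonymous-user responses and the 500 poststarthook failures above as "not ready yet". A minimal sketch of that polling loop; the URL is taken from the log, and certificate verification is skipped here because the cluster CA is not loaded:

    // wait_healthz.go: poll the apiserver /healthz endpoint until it answers 200.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"log"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	url := "https://192.168.50.191:8443/healthz" // endpoint taken from the log above
    	deadline := time.Now().Add(4 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Printf("healthz: %s\n", body)
    				return
    			}
    			// 403 (anonymous user) and 500 (poststarthooks still failing) mean "retry"
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	log.Fatal("apiserver never became healthy")
    }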
	I0318 13:50:44.225610 1157263 cni.go:84] Creating CNI manager for ""
	I0318 13:50:44.225618 1157263 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:50:44.227565 1157263 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 13:50:40.210029 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:42.210435 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:44.710674 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:41.811414 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:43.818645 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:46.308757 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:43.364189 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:43.863868 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:44.363454 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:44.863940 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:45.363913 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:45.863288 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:46.363884 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:46.863361 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:47.363383 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:47.864064 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:44.229055 1157263 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 13:50:44.260389 1157263 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 13:50:44.310001 1157263 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 13:50:44.327281 1157263 system_pods.go:59] 8 kube-system pods found
	I0318 13:50:44.327330 1157263 system_pods.go:61] "coredns-5dd5756b68-zsfvm" [1404c3fe-6538-4aaf-80f5-599275240731] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0318 13:50:44.327342 1157263 system_pods.go:61] "etcd-embed-certs-173036" [254a577c-bd3b-4645-9c92-1479b0c6d0c6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0318 13:50:44.327354 1157263 system_pods.go:61] "kube-apiserver-embed-certs-173036" [5a738280-05ba-413e-a288-4c4d07ddbd7d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0318 13:50:44.327362 1157263 system_pods.go:61] "kube-controller-manager-embed-certs-173036" [f48cfb7f-1efe-4941-b328-2358c7a5cced] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0318 13:50:44.327369 1157263 system_pods.go:61] "kube-proxy-xqf68" [969de4e5-fc60-4d46-b336-49f22a9b6c38] Running
	I0318 13:50:44.327376 1157263 system_pods.go:61] "kube-scheduler-embed-certs-173036" [e0579c16-de3e-4915-9ed2-f69b53f6f884] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0318 13:50:44.327385 1157263 system_pods.go:61] "metrics-server-57f55c9bc5-5cv2z" [85649bfb-f91f-4bfe-9356-d540ac3d6a68] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:50:44.327392 1157263 system_pods.go:61] "storage-provisioner" [0c1ec131-0f6c-4e01-aaec-5011f1a4fe75] Running
	I0318 13:50:44.327410 1157263 system_pods.go:74] duration metric: took 17.376754ms to wait for pod list to return data ...
	I0318 13:50:44.327423 1157263 node_conditions.go:102] verifying NodePressure condition ...
	I0318 13:50:44.332965 1157263 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:50:44.332997 1157263 node_conditions.go:123] node cpu capacity is 2
	I0318 13:50:44.333008 1157263 node_conditions.go:105] duration metric: took 5.580934ms to run NodePressure ...
	I0318 13:50:44.333027 1157263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0318 13:50:44.573923 1157263 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0318 13:50:44.578504 1157263 kubeadm.go:733] kubelet initialised
	I0318 13:50:44.578526 1157263 kubeadm.go:734] duration metric: took 4.577181ms waiting for restarted kubelet to initialise ...
	I0318 13:50:44.578534 1157263 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:50:44.584361 1157263 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-zsfvm" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:44.591714 1157263 pod_ready.go:97] node "embed-certs-173036" hosting pod "coredns-5dd5756b68-zsfvm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-173036" has status "Ready":"False"
	I0318 13:50:44.591739 1157263 pod_ready.go:81] duration metric: took 7.35191ms for pod "coredns-5dd5756b68-zsfvm" in "kube-system" namespace to be "Ready" ...
	E0318 13:50:44.591746 1157263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-173036" hosting pod "coredns-5dd5756b68-zsfvm" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-173036" has status "Ready":"False"
	I0318 13:50:44.591753 1157263 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:44.597618 1157263 pod_ready.go:97] node "embed-certs-173036" hosting pod "etcd-embed-certs-173036" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-173036" has status "Ready":"False"
	I0318 13:50:44.597641 1157263 pod_ready.go:81] duration metric: took 5.880276ms for pod "etcd-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	E0318 13:50:44.597649 1157263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-173036" hosting pod "etcd-embed-certs-173036" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-173036" has status "Ready":"False"
	I0318 13:50:44.597655 1157263 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:44.604124 1157263 pod_ready.go:97] node "embed-certs-173036" hosting pod "kube-apiserver-embed-certs-173036" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-173036" has status "Ready":"False"
	I0318 13:50:44.604148 1157263 pod_ready.go:81] duration metric: took 6.484251ms for pod "kube-apiserver-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	E0318 13:50:44.604157 1157263 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-173036" hosting pod "kube-apiserver-embed-certs-173036" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-173036" has status "Ready":"False"
	I0318 13:50:44.604164 1157263 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:46.611326 1157263 pod_ready.go:102] pod "kube-controller-manager-embed-certs-173036" in "kube-system" namespace has status "Ready":"False"
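The pod_ready.go lines above repeatedly test one thing per pod: whether its Ready condition is True (and they skip the check entirely while the node itself is not Ready). A sketch of the underlying per-pod test with client-go, assuming a kubeconfig at the default location; the pod name is the one from the log:

    // pod_ready.go: check whether a kube-system pod reports Ready=True.
    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"os"
    	"path/filepath"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	home, _ := os.UserHomeDir()
    	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
    	if err != nil {
    		log.Fatal(err)
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Pod name taken from the wait in the log above.
    	pod, err := clientset.CoreV1().Pods("kube-system").Get(context.Background(),
    		"kube-controller-manager-embed-certs-173036", metav1.GetOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	ready := false
    	for _, cond := range pod.Status.Conditions {
    		if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
    			ready = true
    		}
    	}
    	fmt.Printf("pod %s Ready=%v\n", pod.Name, ready)
    }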
	I0318 13:50:47.209538 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:49.708718 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:48.309157 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:50.808340 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:48.363218 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:48.864086 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:49.363457 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:49.863292 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:50.363308 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:50.863428 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:51.363583 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:51.863562 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:52.363995 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:52.863463 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:49.111834 1157263 pod_ready.go:102] pod "kube-controller-manager-embed-certs-173036" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:50.114329 1157263 pod_ready.go:92] pod "kube-controller-manager-embed-certs-173036" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:50.114356 1157263 pod_ready.go:81] duration metric: took 5.510175425s for pod "kube-controller-manager-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:50.114369 1157263 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-xqf68" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:50.133169 1157263 pod_ready.go:92] pod "kube-proxy-xqf68" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:50.133196 1157263 pod_ready.go:81] duration metric: took 18.819059ms for pod "kube-proxy-xqf68" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:50.133208 1157263 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:52.144639 1157263 pod_ready.go:102] pod "kube-scheduler-embed-certs-173036" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:51.709823 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:54.207738 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:53.311033 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:55.311439 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:53.363919 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:53.863936 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:54.363671 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:54.863567 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:50:54.863709 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:50:54.911905 1157708 cri.go:89] found id: ""
	I0318 13:50:54.911942 1157708 logs.go:276] 0 containers: []
	W0318 13:50:54.911954 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:50:54.911962 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:50:54.912031 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:50:54.962141 1157708 cri.go:89] found id: ""
	I0318 13:50:54.962170 1157708 logs.go:276] 0 containers: []
	W0318 13:50:54.962182 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:50:54.962188 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:50:54.962269 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:50:55.001597 1157708 cri.go:89] found id: ""
	I0318 13:50:55.001639 1157708 logs.go:276] 0 containers: []
	W0318 13:50:55.001652 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:50:55.001660 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:50:55.001725 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:50:55.042660 1157708 cri.go:89] found id: ""
	I0318 13:50:55.042695 1157708 logs.go:276] 0 containers: []
	W0318 13:50:55.042708 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:50:55.042716 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:50:55.042775 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:50:55.082095 1157708 cri.go:89] found id: ""
	I0318 13:50:55.082128 1157708 logs.go:276] 0 containers: []
	W0318 13:50:55.082139 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:50:55.082146 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:50:55.082211 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:50:55.120938 1157708 cri.go:89] found id: ""
	I0318 13:50:55.120969 1157708 logs.go:276] 0 containers: []
	W0318 13:50:55.121000 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:50:55.121008 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:50:55.121081 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:50:55.159247 1157708 cri.go:89] found id: ""
	I0318 13:50:55.159280 1157708 logs.go:276] 0 containers: []
	W0318 13:50:55.159292 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:50:55.159300 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:50:55.159366 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:50:55.200130 1157708 cri.go:89] found id: ""
	I0318 13:50:55.200161 1157708 logs.go:276] 0 containers: []
	W0318 13:50:55.200170 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
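When no apiserver process shows up, the fallback above inventories the runtime with crictl ps -a --quiet --name=<component> for each expected control-plane container; the empty found id: "" results are what push it on to gathering kubelet, dmesg and CRI-O logs. A small local sketch of that probe, assuming crictl and sudo are available on the host:

    // list_containers.go: ask crictl whether any container with a given name
    // exists; an empty result corresponds to the `found id: ""` lines above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
    		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
    		if err != nil {
    			fmt.Printf("%s: crictl failed: %v\n", name, err)
    			continue
    		}
    		ids := strings.Fields(string(out))
    		fmt.Printf("%s: %d container(s) found\n", name, len(ids))
    	}
    }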
	I0318 13:50:55.200180 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:50:55.200193 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:50:55.254113 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:50:55.254154 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:50:55.268984 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:50:55.269027 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:50:55.402079 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:50:55.402106 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:50:55.402123 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:50:55.468627 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:50:55.468674 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:50:54.143220 1157263 pod_ready.go:92] pod "kube-scheduler-embed-certs-173036" in "kube-system" namespace has status "Ready":"True"
	I0318 13:50:54.143247 1157263 pod_ready.go:81] duration metric: took 4.010031997s for pod "kube-scheduler-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:54.143258 1157263 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace to be "Ready" ...
	I0318 13:50:56.151615 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:58.650293 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:56.208339 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:58.209144 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:57.810894 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:00.308972 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:50:58.016860 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:50:58.031684 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:50:58.031747 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:50:58.073389 1157708 cri.go:89] found id: ""
	I0318 13:50:58.073415 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.073427 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:50:58.073434 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:50:58.073497 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:50:58.114439 1157708 cri.go:89] found id: ""
	I0318 13:50:58.114471 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.114483 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:50:58.114490 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:50:58.114553 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:50:58.165440 1157708 cri.go:89] found id: ""
	I0318 13:50:58.165466 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.165476 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:50:58.165484 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:50:58.165569 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:50:58.207083 1157708 cri.go:89] found id: ""
	I0318 13:50:58.207117 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.207129 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:50:58.207137 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:50:58.207227 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:50:58.252945 1157708 cri.go:89] found id: ""
	I0318 13:50:58.252973 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.252985 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:50:58.252993 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:50:58.253055 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:50:58.292437 1157708 cri.go:89] found id: ""
	I0318 13:50:58.292464 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.292474 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:50:58.292480 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:50:58.292530 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:50:58.335359 1157708 cri.go:89] found id: ""
	I0318 13:50:58.335403 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.335415 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:50:58.335423 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:50:58.335511 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:50:58.381434 1157708 cri.go:89] found id: ""
	I0318 13:50:58.381473 1157708 logs.go:276] 0 containers: []
	W0318 13:50:58.381484 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:50:58.381494 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:50:58.381511 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:50:58.432270 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:50:58.432319 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:50:58.447658 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:50:58.447686 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:50:58.523163 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:50:58.523186 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:50:58.523207 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:50:58.599544 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:50:58.599586 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:01.141653 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:01.156996 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:01.157070 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:01.192720 1157708 cri.go:89] found id: ""
	I0318 13:51:01.192762 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.192775 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:01.192785 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:01.192866 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:01.232678 1157708 cri.go:89] found id: ""
	I0318 13:51:01.232705 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.232716 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:01.232723 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:01.232795 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:01.270637 1157708 cri.go:89] found id: ""
	I0318 13:51:01.270666 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.270676 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:01.270684 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:01.270746 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:01.308891 1157708 cri.go:89] found id: ""
	I0318 13:51:01.308921 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.308931 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:01.308939 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:01.309003 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:01.349301 1157708 cri.go:89] found id: ""
	I0318 13:51:01.349334 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.349346 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:01.349354 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:01.349420 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:01.394010 1157708 cri.go:89] found id: ""
	I0318 13:51:01.394039 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.394047 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:01.394053 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:01.394103 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:01.432778 1157708 cri.go:89] found id: ""
	I0318 13:51:01.432804 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.432815 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:01.432823 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:01.432886 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:01.471974 1157708 cri.go:89] found id: ""
	I0318 13:51:01.472002 1157708 logs.go:276] 0 containers: []
	W0318 13:51:01.472011 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:01.472022 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:01.472040 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:01.524855 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:01.524893 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:01.540939 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:01.540967 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:01.618318 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:01.618350 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:01.618367 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:01.695717 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:01.695755 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:00.650906 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:02.651512 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:00.211620 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:02.708336 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:02.312320 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:04.808301 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:04.241781 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:04.256276 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:04.256373 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:04.297129 1157708 cri.go:89] found id: ""
	I0318 13:51:04.297158 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.297170 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:04.297179 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:04.297247 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:04.341743 1157708 cri.go:89] found id: ""
	I0318 13:51:04.341774 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.341786 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:04.341793 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:04.341858 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:04.384400 1157708 cri.go:89] found id: ""
	I0318 13:51:04.384434 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.384445 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:04.384453 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:04.384510 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:04.425459 1157708 cri.go:89] found id: ""
	I0318 13:51:04.425487 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.425500 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:04.425510 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:04.425563 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:04.463091 1157708 cri.go:89] found id: ""
	I0318 13:51:04.463125 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.463137 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:04.463145 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:04.463210 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:04.503023 1157708 cri.go:89] found id: ""
	I0318 13:51:04.503057 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.503069 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:04.503077 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:04.503141 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:04.542083 1157708 cri.go:89] found id: ""
	I0318 13:51:04.542116 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.542127 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:04.542136 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:04.542207 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:04.583097 1157708 cri.go:89] found id: ""
	I0318 13:51:04.583128 1157708 logs.go:276] 0 containers: []
	W0318 13:51:04.583137 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:04.583146 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:04.583161 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:04.650476 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:04.650518 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:04.706073 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:04.706111 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:04.723595 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:04.723628 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:04.800278 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:04.800301 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:04.800316 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:07.388144 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:07.403636 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:07.403711 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:07.443337 1157708 cri.go:89] found id: ""
	I0318 13:51:07.443365 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.443379 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:07.443386 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:07.443442 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:07.482417 1157708 cri.go:89] found id: ""
	I0318 13:51:07.482453 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.482462 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:07.482469 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:07.482521 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:07.518445 1157708 cri.go:89] found id: ""
	I0318 13:51:07.518474 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.518485 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:07.518493 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:07.518563 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:07.555628 1157708 cri.go:89] found id: ""
	I0318 13:51:07.555661 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.555673 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:07.555681 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:07.555760 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:07.593805 1157708 cri.go:89] found id: ""
	I0318 13:51:07.593842 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.593856 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:07.593873 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:07.593936 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:07.638206 1157708 cri.go:89] found id: ""
	I0318 13:51:07.638234 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.638242 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:07.638249 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:07.638313 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:07.679526 1157708 cri.go:89] found id: ""
	I0318 13:51:07.679561 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.679573 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:07.679581 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:07.679635 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:07.724468 1157708 cri.go:89] found id: ""
	I0318 13:51:07.724494 1157708 logs.go:276] 0 containers: []
	W0318 13:51:07.724504 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:07.724516 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:07.724533 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:07.766491 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:07.766522 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:07.823782 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:07.823833 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:07.839316 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:07.839342 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:07.924790 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:07.924821 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:07.924841 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:05.151629 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:07.651485 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:05.210455 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:07.709381 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:07.310000 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:09.808337 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:10.513618 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:10.528711 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:10.528790 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:10.571217 1157708 cri.go:89] found id: ""
	I0318 13:51:10.571254 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.571267 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:10.571275 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:10.571335 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:10.608096 1157708 cri.go:89] found id: ""
	I0318 13:51:10.608129 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.608140 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:10.608149 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:10.608217 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:10.649245 1157708 cri.go:89] found id: ""
	I0318 13:51:10.649274 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.649283 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:10.649290 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:10.649365 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:10.693462 1157708 cri.go:89] found id: ""
	I0318 13:51:10.693495 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.693506 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:10.693515 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:10.693589 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:10.740434 1157708 cri.go:89] found id: ""
	I0318 13:51:10.740464 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.740474 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:10.740480 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:10.740543 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:10.781062 1157708 cri.go:89] found id: ""
	I0318 13:51:10.781099 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.781108 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:10.781114 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:10.781167 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:10.828480 1157708 cri.go:89] found id: ""
	I0318 13:51:10.828513 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.828524 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:10.828532 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:10.828605 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:10.868508 1157708 cri.go:89] found id: ""
	I0318 13:51:10.868535 1157708 logs.go:276] 0 containers: []
	W0318 13:51:10.868543 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:10.868553 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:10.868565 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:10.923925 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:10.923961 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:10.939254 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:10.939283 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:11.031307 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:11.031334 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:11.031351 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:11.121563 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:11.121618 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:10.151278 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:12.650083 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:10.209877 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:12.709070 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:12.308084 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:14.309651 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:16.312985 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:13.681147 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:13.696705 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:13.696812 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:13.740904 1157708 cri.go:89] found id: ""
	I0318 13:51:13.740937 1157708 logs.go:276] 0 containers: []
	W0318 13:51:13.740949 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:13.740957 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:13.741038 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:13.779625 1157708 cri.go:89] found id: ""
	I0318 13:51:13.779659 1157708 logs.go:276] 0 containers: []
	W0318 13:51:13.779672 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:13.779681 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:13.779762 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:13.822183 1157708 cri.go:89] found id: ""
	I0318 13:51:13.822218 1157708 logs.go:276] 0 containers: []
	W0318 13:51:13.822231 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:13.822239 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:13.822302 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:13.873686 1157708 cri.go:89] found id: ""
	I0318 13:51:13.873728 1157708 logs.go:276] 0 containers: []
	W0318 13:51:13.873741 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:13.873749 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:13.873821 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:13.919772 1157708 cri.go:89] found id: ""
	I0318 13:51:13.919802 1157708 logs.go:276] 0 containers: []
	W0318 13:51:13.919811 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:13.919817 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:13.919874 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:13.958809 1157708 cri.go:89] found id: ""
	I0318 13:51:13.958837 1157708 logs.go:276] 0 containers: []
	W0318 13:51:13.958846 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:13.958852 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:13.958928 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:14.000537 1157708 cri.go:89] found id: ""
	I0318 13:51:14.000568 1157708 logs.go:276] 0 containers: []
	W0318 13:51:14.000580 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:14.000588 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:14.000638 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:14.041234 1157708 cri.go:89] found id: ""
	I0318 13:51:14.041265 1157708 logs.go:276] 0 containers: []
	W0318 13:51:14.041275 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:14.041285 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:14.041299 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:14.085435 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:14.085462 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:14.144336 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:14.144374 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:14.159972 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:14.160000 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:14.242027 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:14.242048 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:14.242061 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:16.821805 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:16.840202 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:16.840272 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:16.898088 1157708 cri.go:89] found id: ""
	I0318 13:51:16.898120 1157708 logs.go:276] 0 containers: []
	W0318 13:51:16.898129 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:16.898135 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:16.898203 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:16.953180 1157708 cri.go:89] found id: ""
	I0318 13:51:16.953209 1157708 logs.go:276] 0 containers: []
	W0318 13:51:16.953221 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:16.953229 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:16.953288 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:17.006995 1157708 cri.go:89] found id: ""
	I0318 13:51:17.007048 1157708 logs.go:276] 0 containers: []
	W0318 13:51:17.007062 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:17.007070 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:17.007136 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:17.049756 1157708 cri.go:89] found id: ""
	I0318 13:51:17.049798 1157708 logs.go:276] 0 containers: []
	W0318 13:51:17.049809 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:17.049817 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:17.049885 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:17.092026 1157708 cri.go:89] found id: ""
	I0318 13:51:17.092055 1157708 logs.go:276] 0 containers: []
	W0318 13:51:17.092066 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:17.092074 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:17.092144 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:17.137722 1157708 cri.go:89] found id: ""
	I0318 13:51:17.137756 1157708 logs.go:276] 0 containers: []
	W0318 13:51:17.137769 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:17.137778 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:17.137875 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:17.180778 1157708 cri.go:89] found id: ""
	I0318 13:51:17.180808 1157708 logs.go:276] 0 containers: []
	W0318 13:51:17.180816 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:17.180822 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:17.180885 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:17.227629 1157708 cri.go:89] found id: ""
	I0318 13:51:17.227664 1157708 logs.go:276] 0 containers: []
	W0318 13:51:17.227675 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:17.227688 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:17.227706 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:17.272559 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:17.272588 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:17.333953 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:17.333994 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:17.349765 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:17.349793 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:17.434436 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:17.434465 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:17.434483 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:14.650201 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:17.151069 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:15.208570 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:17.210168 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:19.707753 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:18.808252 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:21.309389 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:20.014314 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:20.031106 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:20.031172 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:20.067727 1157708 cri.go:89] found id: ""
	I0318 13:51:20.067753 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.067765 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:20.067773 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:20.067844 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:20.108455 1157708 cri.go:89] found id: ""
	I0318 13:51:20.108482 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.108491 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:20.108497 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:20.108563 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:20.152257 1157708 cri.go:89] found id: ""
	I0318 13:51:20.152285 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.152310 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:20.152317 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:20.152394 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:20.191480 1157708 cri.go:89] found id: ""
	I0318 13:51:20.191509 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.191520 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:20.191529 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:20.191599 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:20.235677 1157708 cri.go:89] found id: ""
	I0318 13:51:20.235705 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.235716 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:20.235723 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:20.235796 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:20.274794 1157708 cri.go:89] found id: ""
	I0318 13:51:20.274822 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.274833 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:20.274842 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:20.274907 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:20.321987 1157708 cri.go:89] found id: ""
	I0318 13:51:20.322019 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.322031 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:20.322040 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:20.322097 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:20.361292 1157708 cri.go:89] found id: ""
	I0318 13:51:20.361319 1157708 logs.go:276] 0 containers: []
	W0318 13:51:20.361328 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:20.361338 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:20.361360 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:20.434481 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:20.434509 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:20.434527 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:20.518203 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:20.518244 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:20.560241 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:20.560271 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:20.615489 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:20.615526 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:19.151244 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:21.151320 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:23.651849 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:21.708423 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:24.207976 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:23.310491 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:25.808443 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:23.132509 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:23.146447 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:23.146559 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:23.189576 1157708 cri.go:89] found id: ""
	I0318 13:51:23.189613 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.189625 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:23.189634 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:23.189688 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:23.229700 1157708 cri.go:89] found id: ""
	I0318 13:51:23.229731 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.229740 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:23.229747 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:23.229812 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:23.272713 1157708 cri.go:89] found id: ""
	I0318 13:51:23.272747 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.272759 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:23.272768 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:23.272834 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:23.313988 1157708 cri.go:89] found id: ""
	I0318 13:51:23.314014 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.314022 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:23.314028 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:23.314087 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:23.360195 1157708 cri.go:89] found id: ""
	I0318 13:51:23.360230 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.360243 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:23.360251 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:23.360321 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:23.400657 1157708 cri.go:89] found id: ""
	I0318 13:51:23.400685 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.400694 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:23.400707 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:23.400760 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:23.442841 1157708 cri.go:89] found id: ""
	I0318 13:51:23.442873 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.442893 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:23.442900 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:23.442970 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:23.483467 1157708 cri.go:89] found id: ""
	I0318 13:51:23.483504 1157708 logs.go:276] 0 containers: []
	W0318 13:51:23.483516 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:23.483528 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:23.483545 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:23.538581 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:23.538616 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:23.555392 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:23.555421 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:23.634919 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:23.634945 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:23.634970 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:23.718098 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:23.718144 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:26.270369 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:26.287165 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:26.287232 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:26.331773 1157708 cri.go:89] found id: ""
	I0318 13:51:26.331807 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.331832 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:26.331850 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:26.331923 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:26.372067 1157708 cri.go:89] found id: ""
	I0318 13:51:26.372095 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.372102 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:26.372109 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:26.372182 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:26.411883 1157708 cri.go:89] found id: ""
	I0318 13:51:26.411910 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.411919 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:26.411924 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:26.411980 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:26.449087 1157708 cri.go:89] found id: ""
	I0318 13:51:26.449122 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.449131 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:26.449137 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:26.449188 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:26.492126 1157708 cri.go:89] found id: ""
	I0318 13:51:26.492162 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.492174 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:26.492182 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:26.492251 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:26.529621 1157708 cri.go:89] found id: ""
	I0318 13:51:26.529656 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.529668 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:26.529677 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:26.529764 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:26.568853 1157708 cri.go:89] found id: ""
	I0318 13:51:26.568888 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.568899 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:26.568907 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:26.568979 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:26.607882 1157708 cri.go:89] found id: ""
	I0318 13:51:26.607917 1157708 logs.go:276] 0 containers: []
	W0318 13:51:26.607929 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:26.607942 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:26.607959 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:26.648736 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:26.648768 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:26.704641 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:26.704684 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:26.720681 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:26.720715 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:26.799577 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:26.799608 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:26.799627 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:26.152083 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:28.651445 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:26.208160 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:28.708468 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:28.309859 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:30.806690 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:29.389391 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:29.404122 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:29.404195 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:29.446761 1157708 cri.go:89] found id: ""
	I0318 13:51:29.446787 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.446796 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:29.446803 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:29.446857 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:29.483974 1157708 cri.go:89] found id: ""
	I0318 13:51:29.484007 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.484020 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:29.484028 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:29.484099 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:29.521894 1157708 cri.go:89] found id: ""
	I0318 13:51:29.521922 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.521931 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:29.521937 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:29.521993 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:29.562918 1157708 cri.go:89] found id: ""
	I0318 13:51:29.562948 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.562957 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:29.562963 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:29.563017 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:29.600372 1157708 cri.go:89] found id: ""
	I0318 13:51:29.600412 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.600424 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:29.600432 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:29.600500 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:29.638902 1157708 cri.go:89] found id: ""
	I0318 13:51:29.638933 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.638945 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:29.638953 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:29.639019 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:29.679041 1157708 cri.go:89] found id: ""
	I0318 13:51:29.679071 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.679079 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:29.679085 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:29.679142 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:29.719168 1157708 cri.go:89] found id: ""
	I0318 13:51:29.719201 1157708 logs.go:276] 0 containers: []
	W0318 13:51:29.719213 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:29.719224 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:29.719244 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:29.764050 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:29.764077 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:29.822136 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:29.822174 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:29.839485 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:29.839515 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:29.914984 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:29.915006 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:29.915023 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:32.497388 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:32.512151 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:32.512215 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:32.549566 1157708 cri.go:89] found id: ""
	I0318 13:51:32.549602 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.549614 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:32.549623 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:32.549693 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:32.588516 1157708 cri.go:89] found id: ""
	I0318 13:51:32.588546 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.588555 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:32.588562 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:32.588615 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:32.628425 1157708 cri.go:89] found id: ""
	I0318 13:51:32.628453 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.628462 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:32.628470 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:32.628546 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:32.670851 1157708 cri.go:89] found id: ""
	I0318 13:51:32.670874 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.670888 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:32.670895 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:32.670944 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:32.709614 1157708 cri.go:89] found id: ""
	I0318 13:51:32.709642 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.709656 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:32.709666 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:32.709738 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:32.749774 1157708 cri.go:89] found id: ""
	I0318 13:51:32.749808 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.749819 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:32.749828 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:32.749896 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:32.789502 1157708 cri.go:89] found id: ""
	I0318 13:51:32.789525 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.789534 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:32.789540 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:32.789589 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:32.834926 1157708 cri.go:89] found id: ""
	I0318 13:51:32.834948 1157708 logs.go:276] 0 containers: []
	W0318 13:51:32.834956 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:32.834965 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:32.834980 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:32.887365 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:32.887404 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:32.903584 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:32.903610 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:32.978924 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:32.978958 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:32.978988 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:31.151276 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:33.651395 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:30.709136 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:32.709549 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:32.808076 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:35.308827 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:33.055386 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:33.055424 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:35.603881 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:35.618083 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:35.618167 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:35.659760 1157708 cri.go:89] found id: ""
	I0318 13:51:35.659802 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.659814 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:35.659820 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:35.659881 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:35.703521 1157708 cri.go:89] found id: ""
	I0318 13:51:35.703570 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.703582 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:35.703589 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:35.703651 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:35.744411 1157708 cri.go:89] found id: ""
	I0318 13:51:35.744444 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.744455 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:35.744463 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:35.744548 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:35.783704 1157708 cri.go:89] found id: ""
	I0318 13:51:35.783735 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.783746 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:35.783754 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:35.783819 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:35.824000 1157708 cri.go:89] found id: ""
	I0318 13:51:35.824031 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.824042 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:35.824049 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:35.824117 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:35.860260 1157708 cri.go:89] found id: ""
	I0318 13:51:35.860289 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.860299 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:35.860308 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:35.860388 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:35.895154 1157708 cri.go:89] found id: ""
	I0318 13:51:35.895189 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.895201 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:35.895209 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:35.895276 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:35.936916 1157708 cri.go:89] found id: ""
	I0318 13:51:35.936942 1157708 logs.go:276] 0 containers: []
	W0318 13:51:35.936951 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:35.936961 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:35.936977 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:35.951715 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:35.951745 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:36.027431 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:36.027457 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:36.027474 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:36.113339 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:36.113386 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:36.160132 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:36.160170 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:36.151331 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:38.650891 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:35.208500 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:37.209692 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:39.709776 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:37.807423 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:39.809226 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:38.711710 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:38.726104 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:38.726162 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:38.763251 1157708 cri.go:89] found id: ""
	I0318 13:51:38.763281 1157708 logs.go:276] 0 containers: []
	W0318 13:51:38.763291 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:38.763300 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:38.763364 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:38.802521 1157708 cri.go:89] found id: ""
	I0318 13:51:38.802548 1157708 logs.go:276] 0 containers: []
	W0318 13:51:38.802556 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:38.802562 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:38.802616 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:38.843778 1157708 cri.go:89] found id: ""
	I0318 13:51:38.843817 1157708 logs.go:276] 0 containers: []
	W0318 13:51:38.843831 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:38.843839 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:38.843909 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:38.884966 1157708 cri.go:89] found id: ""
	I0318 13:51:38.885003 1157708 logs.go:276] 0 containers: []
	W0318 13:51:38.885015 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:38.885024 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:38.885090 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:38.925653 1157708 cri.go:89] found id: ""
	I0318 13:51:38.925681 1157708 logs.go:276] 0 containers: []
	W0318 13:51:38.925690 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:38.925696 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:38.925757 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:38.964126 1157708 cri.go:89] found id: ""
	I0318 13:51:38.964156 1157708 logs.go:276] 0 containers: []
	W0318 13:51:38.964169 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:38.964177 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:38.964228 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:39.004864 1157708 cri.go:89] found id: ""
	I0318 13:51:39.004898 1157708 logs.go:276] 0 containers: []
	W0318 13:51:39.004910 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:39.004919 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:39.004991 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:39.041555 1157708 cri.go:89] found id: ""
	I0318 13:51:39.041588 1157708 logs.go:276] 0 containers: []
	W0318 13:51:39.041600 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:39.041611 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:39.041626 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:39.092984 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:39.093019 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:39.110492 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:39.110526 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:39.186785 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:39.186848 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:39.186872 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:39.272847 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:39.272891 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:41.829404 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:41.843407 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:41.843479 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:41.883129 1157708 cri.go:89] found id: ""
	I0318 13:51:41.883164 1157708 logs.go:276] 0 containers: []
	W0318 13:51:41.883175 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:41.883184 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:41.883246 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:41.924083 1157708 cri.go:89] found id: ""
	I0318 13:51:41.924123 1157708 logs.go:276] 0 containers: []
	W0318 13:51:41.924136 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:41.924144 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:41.924209 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:41.963029 1157708 cri.go:89] found id: ""
	I0318 13:51:41.963058 1157708 logs.go:276] 0 containers: []
	W0318 13:51:41.963069 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:41.963084 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:41.963155 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:42.003393 1157708 cri.go:89] found id: ""
	I0318 13:51:42.003430 1157708 logs.go:276] 0 containers: []
	W0318 13:51:42.003442 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:42.003450 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:42.003511 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:42.041938 1157708 cri.go:89] found id: ""
	I0318 13:51:42.041968 1157708 logs.go:276] 0 containers: []
	W0318 13:51:42.041977 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:42.041983 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:42.042044 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:42.079685 1157708 cri.go:89] found id: ""
	I0318 13:51:42.079718 1157708 logs.go:276] 0 containers: []
	W0318 13:51:42.079731 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:42.079740 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:42.079805 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:42.118112 1157708 cri.go:89] found id: ""
	I0318 13:51:42.118144 1157708 logs.go:276] 0 containers: []
	W0318 13:51:42.118156 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:42.118164 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:42.118230 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:42.157287 1157708 cri.go:89] found id: ""
	I0318 13:51:42.157319 1157708 logs.go:276] 0 containers: []
	W0318 13:51:42.157331 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:42.157343 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:42.157360 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:42.213006 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:42.213038 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:42.228452 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:42.228481 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:42.302523 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:42.302545 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:42.302558 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:42.387994 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:42.388062 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:40.651272 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:43.151009 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:42.208825 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:44.211676 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:42.310765 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:44.313778 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:44.934501 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:44.949163 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:44.949245 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:44.991885 1157708 cri.go:89] found id: ""
	I0318 13:51:44.991914 1157708 logs.go:276] 0 containers: []
	W0318 13:51:44.991924 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:44.991931 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:44.992008 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:45.029868 1157708 cri.go:89] found id: ""
	I0318 13:51:45.029904 1157708 logs.go:276] 0 containers: []
	W0318 13:51:45.029915 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:45.029922 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:45.030017 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:45.067755 1157708 cri.go:89] found id: ""
	I0318 13:51:45.067785 1157708 logs.go:276] 0 containers: []
	W0318 13:51:45.067794 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:45.067803 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:45.067857 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:45.106296 1157708 cri.go:89] found id: ""
	I0318 13:51:45.106323 1157708 logs.go:276] 0 containers: []
	W0318 13:51:45.106333 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:45.106339 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:45.106405 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:45.145746 1157708 cri.go:89] found id: ""
	I0318 13:51:45.145784 1157708 logs.go:276] 0 containers: []
	W0318 13:51:45.145797 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:45.145805 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:45.145868 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:45.191960 1157708 cri.go:89] found id: ""
	I0318 13:51:45.191998 1157708 logs.go:276] 0 containers: []
	W0318 13:51:45.192010 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:45.192019 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:45.192089 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:45.231436 1157708 cri.go:89] found id: ""
	I0318 13:51:45.231470 1157708 logs.go:276] 0 containers: []
	W0318 13:51:45.231483 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:45.231491 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:45.231559 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:45.274521 1157708 cri.go:89] found id: ""
	I0318 13:51:45.274554 1157708 logs.go:276] 0 containers: []
	W0318 13:51:45.274565 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:45.274577 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:45.274595 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:45.338539 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:45.338580 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:45.353917 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:45.353947 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:45.447734 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:45.447755 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:45.447768 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:45.530098 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:45.530140 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:45.653161 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:48.150841 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:46.708808 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:49.209076 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:46.808315 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:49.311406 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:48.077992 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:48.092203 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:48.092273 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:48.133136 1157708 cri.go:89] found id: ""
	I0318 13:51:48.133172 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.133183 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:48.133191 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:48.133259 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:48.177727 1157708 cri.go:89] found id: ""
	I0318 13:51:48.177756 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.177768 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:48.177775 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:48.177843 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:48.217574 1157708 cri.go:89] found id: ""
	I0318 13:51:48.217600 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.217608 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:48.217614 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:48.217676 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:48.258900 1157708 cri.go:89] found id: ""
	I0318 13:51:48.258933 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.258947 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:48.258955 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:48.259046 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:48.299527 1157708 cri.go:89] found id: ""
	I0318 13:51:48.299562 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.299573 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:48.299581 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:48.299650 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:48.339692 1157708 cri.go:89] found id: ""
	I0318 13:51:48.339723 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.339732 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:48.339740 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:48.339791 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:48.378737 1157708 cri.go:89] found id: ""
	I0318 13:51:48.378764 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.378773 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:48.378779 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:48.378841 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:48.414593 1157708 cri.go:89] found id: ""
	I0318 13:51:48.414621 1157708 logs.go:276] 0 containers: []
	W0318 13:51:48.414629 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:48.414639 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:48.414654 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:48.430232 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:48.430264 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:48.513313 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:48.513335 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:48.513353 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:48.594681 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:48.594721 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:48.638681 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:48.638720 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:51.189510 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:51.204296 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:51.204383 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:51.248285 1157708 cri.go:89] found id: ""
	I0318 13:51:51.248311 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.248331 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:51.248340 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:51.248414 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:51.289022 1157708 cri.go:89] found id: ""
	I0318 13:51:51.289055 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.289068 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:51.289077 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:51.289144 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:51.329367 1157708 cri.go:89] found id: ""
	I0318 13:51:51.329405 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.329414 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:51.329420 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:51.329477 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:51.370909 1157708 cri.go:89] found id: ""
	I0318 13:51:51.370948 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.370960 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:51.370970 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:51.371043 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:51.419447 1157708 cri.go:89] found id: ""
	I0318 13:51:51.419486 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.419498 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:51.419506 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:51.419573 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:51.466302 1157708 cri.go:89] found id: ""
	I0318 13:51:51.466336 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.466348 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:51.466356 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:51.466441 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:51.505593 1157708 cri.go:89] found id: ""
	I0318 13:51:51.505631 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.505644 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:51.505652 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:51.505724 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:51.543815 1157708 cri.go:89] found id: ""
	I0318 13:51:51.543843 1157708 logs.go:276] 0 containers: []
	W0318 13:51:51.543852 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:51.543863 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:51.543885 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:51.596271 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:51.596305 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:51.612441 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:51.612477 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:51.690591 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:51.690614 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:51.690631 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:51.771781 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:51.771821 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:50.650088 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:52.650307 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:51.710583 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:54.208629 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:51.808743 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:54.309915 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:54.319626 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:54.334041 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:54.334113 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:54.372090 1157708 cri.go:89] found id: ""
	I0318 13:51:54.372120 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.372132 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:54.372139 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:54.372196 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:54.412513 1157708 cri.go:89] found id: ""
	I0318 13:51:54.412567 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.412580 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:54.412588 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:54.412662 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:54.453143 1157708 cri.go:89] found id: ""
	I0318 13:51:54.453176 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.453188 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:54.453196 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:54.453262 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:54.497908 1157708 cri.go:89] found id: ""
	I0318 13:51:54.497940 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.497949 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:54.497957 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:54.498025 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:54.539044 1157708 cri.go:89] found id: ""
	I0318 13:51:54.539072 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.539081 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:54.539086 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:54.539151 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:54.578916 1157708 cri.go:89] found id: ""
	I0318 13:51:54.578944 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.578951 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:54.578958 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:54.579027 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:54.617339 1157708 cri.go:89] found id: ""
	I0318 13:51:54.617366 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.617375 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:54.617380 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:54.617436 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:54.661288 1157708 cri.go:89] found id: ""
	I0318 13:51:54.661309 1157708 logs.go:276] 0 containers: []
	W0318 13:51:54.661318 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:54.661328 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:54.661344 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:54.740710 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:54.740751 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:54.789136 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:54.789176 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:54.844585 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:54.844627 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:54.860304 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:54.860351 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:54.945305 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:51:57.445800 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:51:57.459294 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:51:57.459368 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:51:57.497411 1157708 cri.go:89] found id: ""
	I0318 13:51:57.497441 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.497449 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:51:57.497456 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:51:57.497521 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:51:57.535629 1157708 cri.go:89] found id: ""
	I0318 13:51:57.535663 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.535675 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:51:57.535684 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:51:57.535749 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:51:57.572980 1157708 cri.go:89] found id: ""
	I0318 13:51:57.573008 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.573017 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:51:57.573023 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:51:57.573071 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:51:57.622949 1157708 cri.go:89] found id: ""
	I0318 13:51:57.622984 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.622997 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:51:57.623005 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:51:57.623070 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:51:57.659877 1157708 cri.go:89] found id: ""
	I0318 13:51:57.659910 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.659921 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:51:57.659928 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:51:57.659991 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:51:57.705399 1157708 cri.go:89] found id: ""
	I0318 13:51:57.705481 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.705495 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:51:57.705504 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:51:57.705566 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:51:57.748035 1157708 cri.go:89] found id: ""
	I0318 13:51:57.748062 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.748073 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:51:57.748084 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:51:57.748144 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:51:57.801942 1157708 cri.go:89] found id: ""
	I0318 13:51:57.801976 1157708 logs.go:276] 0 containers: []
	W0318 13:51:57.801987 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:51:57.801999 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:51:57.802017 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:51:57.900157 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:51:57.900204 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:51:57.946179 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:51:57.946219 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:51:54.651363 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:57.151268 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:56.208925 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:58.708089 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:56.807605 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:58.808479 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:01.307740 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:51:58.000369 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:51:58.000412 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:58.016179 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:51:58.016211 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:51:58.101766 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:00.602151 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:00.617466 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:00.617531 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:00.661294 1157708 cri.go:89] found id: ""
	I0318 13:52:00.661328 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.661336 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:00.661342 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:00.661400 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:00.706227 1157708 cri.go:89] found id: ""
	I0318 13:52:00.706257 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.706267 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:00.706275 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:00.706342 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:00.746482 1157708 cri.go:89] found id: ""
	I0318 13:52:00.746515 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.746528 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:00.746536 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:00.746600 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:00.789242 1157708 cri.go:89] found id: ""
	I0318 13:52:00.789272 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.789281 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:00.789287 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:00.789348 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:00.832463 1157708 cri.go:89] found id: ""
	I0318 13:52:00.832503 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.832514 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:00.832522 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:00.832581 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:00.869790 1157708 cri.go:89] found id: ""
	I0318 13:52:00.869819 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.869830 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:00.869839 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:00.869904 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:00.909656 1157708 cri.go:89] found id: ""
	I0318 13:52:00.909685 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.909693 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:00.909700 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:00.909754 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:00.953818 1157708 cri.go:89] found id: ""
	I0318 13:52:00.953856 1157708 logs.go:276] 0 containers: []
	W0318 13:52:00.953868 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:00.953882 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:00.953898 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:01.032822 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:01.032848 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:01.032865 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:01.111701 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:01.111747 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:01.168270 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:01.168300 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:01.220376 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:01.220408 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:51:59.650359 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:01.650627 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:03.651830 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:00.709561 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:03.207829 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:03.808915 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:06.307915 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:03.737354 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:03.756282 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:03.756382 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:03.804716 1157708 cri.go:89] found id: ""
	I0318 13:52:03.804757 1157708 logs.go:276] 0 containers: []
	W0318 13:52:03.804768 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:03.804777 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:03.804838 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:03.864559 1157708 cri.go:89] found id: ""
	I0318 13:52:03.864596 1157708 logs.go:276] 0 containers: []
	W0318 13:52:03.864609 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:03.864617 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:03.864687 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:03.918397 1157708 cri.go:89] found id: ""
	I0318 13:52:03.918425 1157708 logs.go:276] 0 containers: []
	W0318 13:52:03.918433 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:03.918439 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:03.918504 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:03.961729 1157708 cri.go:89] found id: ""
	I0318 13:52:03.961762 1157708 logs.go:276] 0 containers: []
	W0318 13:52:03.961773 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:03.961780 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:03.961856 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:04.006261 1157708 cri.go:89] found id: ""
	I0318 13:52:04.006299 1157708 logs.go:276] 0 containers: []
	W0318 13:52:04.006311 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:04.006319 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:04.006404 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:04.050284 1157708 cri.go:89] found id: ""
	I0318 13:52:04.050313 1157708 logs.go:276] 0 containers: []
	W0318 13:52:04.050321 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:04.050327 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:04.050384 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:04.093789 1157708 cri.go:89] found id: ""
	I0318 13:52:04.093827 1157708 logs.go:276] 0 containers: []
	W0318 13:52:04.093839 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:04.093847 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:04.093916 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:04.135047 1157708 cri.go:89] found id: ""
	I0318 13:52:04.135091 1157708 logs.go:276] 0 containers: []
	W0318 13:52:04.135110 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:04.135124 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:04.135142 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:04.192899 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:04.192937 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:04.209080 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:04.209130 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:04.286388 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:04.286413 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:04.286428 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:04.371836 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:04.371877 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:06.923039 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:06.938743 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:06.938826 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:06.984600 1157708 cri.go:89] found id: ""
	I0318 13:52:06.984634 1157708 logs.go:276] 0 containers: []
	W0318 13:52:06.984646 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:06.984655 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:06.984721 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:07.023849 1157708 cri.go:89] found id: ""
	I0318 13:52:07.023891 1157708 logs.go:276] 0 containers: []
	W0318 13:52:07.023914 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:07.023922 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:07.023984 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:07.071972 1157708 cri.go:89] found id: ""
	I0318 13:52:07.072002 1157708 logs.go:276] 0 containers: []
	W0318 13:52:07.072015 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:07.072022 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:07.072087 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:07.109070 1157708 cri.go:89] found id: ""
	I0318 13:52:07.109105 1157708 logs.go:276] 0 containers: []
	W0318 13:52:07.109118 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:07.109126 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:07.109183 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:07.149879 1157708 cri.go:89] found id: ""
	I0318 13:52:07.149910 1157708 logs.go:276] 0 containers: []
	W0318 13:52:07.149918 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:07.149925 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:07.149990 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:07.195946 1157708 cri.go:89] found id: ""
	I0318 13:52:07.195976 1157708 logs.go:276] 0 containers: []
	W0318 13:52:07.195987 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:07.195995 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:07.196062 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:07.238126 1157708 cri.go:89] found id: ""
	I0318 13:52:07.238152 1157708 logs.go:276] 0 containers: []
	W0318 13:52:07.238162 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:07.238168 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:07.238233 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:07.278218 1157708 cri.go:89] found id: ""
	I0318 13:52:07.278255 1157708 logs.go:276] 0 containers: []
	W0318 13:52:07.278268 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:07.278282 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:07.278300 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:07.294926 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:07.294955 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:07.383431 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:07.383455 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:07.383468 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:07.467306 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:07.467348 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:07.515996 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:07.516028 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:06.151546 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:08.162392 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:05.208765 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:07.210243 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:09.708076 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:08.309045 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:10.807773 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:10.071945 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:10.088587 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:10.088654 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:10.130528 1157708 cri.go:89] found id: ""
	I0318 13:52:10.130566 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.130579 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:10.130588 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:10.130663 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:10.173113 1157708 cri.go:89] found id: ""
	I0318 13:52:10.173150 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.173168 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:10.173178 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:10.173243 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:10.218941 1157708 cri.go:89] found id: ""
	I0318 13:52:10.218976 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.218987 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:10.218996 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:10.219068 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:10.262331 1157708 cri.go:89] found id: ""
	I0318 13:52:10.262368 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.262381 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:10.262389 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:10.262460 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:10.303329 1157708 cri.go:89] found id: ""
	I0318 13:52:10.303363 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.303378 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:10.303386 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:10.303457 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:10.344458 1157708 cri.go:89] found id: ""
	I0318 13:52:10.344486 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.344497 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:10.344505 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:10.344567 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:10.386753 1157708 cri.go:89] found id: ""
	I0318 13:52:10.386786 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.386797 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:10.386806 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:10.386876 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:10.425922 1157708 cri.go:89] found id: ""
	I0318 13:52:10.425954 1157708 logs.go:276] 0 containers: []
	W0318 13:52:10.425965 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:10.425978 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:10.426000 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:10.441134 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:10.441168 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:10.514865 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:10.514899 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:10.514916 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:10.592061 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:10.592105 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:10.642900 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:10.642935 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:10.651432 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:13.150537 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:12.208498 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:14.209684 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:12.808250 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:15.308639 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:13.199176 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:13.215155 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:13.215232 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:13.256107 1157708 cri.go:89] found id: ""
	I0318 13:52:13.256139 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.256151 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:13.256160 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:13.256231 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:13.296562 1157708 cri.go:89] found id: ""
	I0318 13:52:13.296597 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.296608 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:13.296615 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:13.296667 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:13.336633 1157708 cri.go:89] found id: ""
	I0318 13:52:13.336662 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.336672 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:13.336678 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:13.336737 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:13.382597 1157708 cri.go:89] found id: ""
	I0318 13:52:13.382639 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.382654 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:13.382663 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:13.382733 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:13.430257 1157708 cri.go:89] found id: ""
	I0318 13:52:13.430292 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.430304 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:13.430312 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:13.430373 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:13.466854 1157708 cri.go:89] found id: ""
	I0318 13:52:13.466881 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.466889 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:13.466896 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:13.466945 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:13.510297 1157708 cri.go:89] found id: ""
	I0318 13:52:13.510333 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.510344 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:13.510352 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:13.510420 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:13.551476 1157708 cri.go:89] found id: ""
	I0318 13:52:13.551508 1157708 logs.go:276] 0 containers: []
	W0318 13:52:13.551517 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:13.551528 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:13.551542 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:13.634561 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:13.634585 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:13.634598 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:13.720088 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:13.720129 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:13.760621 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:13.760659 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:13.817311 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:13.817350 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:16.334094 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:16.349779 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:16.349866 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:16.394131 1157708 cri.go:89] found id: ""
	I0318 13:52:16.394157 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.394167 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:16.394175 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:16.394239 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:16.438185 1157708 cri.go:89] found id: ""
	I0318 13:52:16.438232 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.438245 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:16.438264 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:16.438335 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:16.476872 1157708 cri.go:89] found id: ""
	I0318 13:52:16.476920 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.476932 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:16.476939 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:16.477007 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:16.518226 1157708 cri.go:89] found id: ""
	I0318 13:52:16.518253 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.518262 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:16.518269 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:16.518327 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:16.559119 1157708 cri.go:89] found id: ""
	I0318 13:52:16.559160 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.559174 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:16.559182 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:16.559260 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:16.600050 1157708 cri.go:89] found id: ""
	I0318 13:52:16.600079 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.600088 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:16.600094 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:16.600160 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:16.640621 1157708 cri.go:89] found id: ""
	I0318 13:52:16.640649 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.640660 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:16.640668 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:16.640733 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:16.680541 1157708 cri.go:89] found id: ""
	I0318 13:52:16.680571 1157708 logs.go:276] 0 containers: []
	W0318 13:52:16.680580 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:16.680590 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:16.680602 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:16.766378 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:16.766415 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:16.811846 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:16.811883 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:16.871940 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:16.871981 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:16.887494 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:16.887521 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:16.961924 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:15.650599 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:17.650902 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:16.710336 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:19.207426 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:17.807338 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:19.809418 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:19.462316 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:19.478819 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:19.478885 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:19.523280 1157708 cri.go:89] found id: ""
	I0318 13:52:19.523314 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.523334 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:19.523342 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:19.523417 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:19.560675 1157708 cri.go:89] found id: ""
	I0318 13:52:19.560708 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.560717 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:19.560725 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:19.560790 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:19.598739 1157708 cri.go:89] found id: ""
	I0318 13:52:19.598766 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.598773 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:19.598781 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:19.598846 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:19.639928 1157708 cri.go:89] found id: ""
	I0318 13:52:19.639960 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.639969 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:19.639975 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:19.640030 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:19.686084 1157708 cri.go:89] found id: ""
	I0318 13:52:19.686134 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.686153 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:19.686160 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:19.686231 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:19.725449 1157708 cri.go:89] found id: ""
	I0318 13:52:19.725481 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.725491 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:19.725497 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:19.725559 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:19.763855 1157708 cri.go:89] found id: ""
	I0318 13:52:19.763886 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.763897 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:19.763905 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:19.763976 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:19.805783 1157708 cri.go:89] found id: ""
	I0318 13:52:19.805813 1157708 logs.go:276] 0 containers: []
	W0318 13:52:19.805824 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:19.805836 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:19.805852 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:19.883873 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:19.883914 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:19.926368 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:19.926406 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:19.981137 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:19.981181 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:19.996242 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:19.996269 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:20.077880 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:22.578045 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:22.594170 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:22.594247 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:22.637241 1157708 cri.go:89] found id: ""
	I0318 13:52:22.637276 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.637289 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:22.637298 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:22.637363 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:22.679877 1157708 cri.go:89] found id: ""
	I0318 13:52:22.679904 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.679912 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:22.679918 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:22.679981 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:22.721865 1157708 cri.go:89] found id: ""
	I0318 13:52:22.721890 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.721903 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:22.721912 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:22.721982 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:22.763208 1157708 cri.go:89] found id: ""
	I0318 13:52:22.763242 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.763255 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:22.763264 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:22.763329 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:22.802038 1157708 cri.go:89] found id: ""
	I0318 13:52:22.802071 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.802081 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:22.802089 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:22.802170 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:22.841206 1157708 cri.go:89] found id: ""
	I0318 13:52:22.841242 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.841254 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:22.841263 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:22.841328 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:22.885159 1157708 cri.go:89] found id: ""
	I0318 13:52:22.885197 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.885209 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:22.885218 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:22.885289 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:22.925346 1157708 cri.go:89] found id: ""
	I0318 13:52:22.925373 1157708 logs.go:276] 0 containers: []
	W0318 13:52:22.925382 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:22.925391 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:22.925407 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:19.654611 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:22.152365 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:21.208979 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:23.210660 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:22.308290 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:24.310006 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:23.006158 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:23.006193 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:23.053932 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:23.053961 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:23.107728 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:23.107768 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:23.125708 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:23.125740 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:23.202609 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:25.703096 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:25.718617 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:25.718689 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:25.756504 1157708 cri.go:89] found id: ""
	I0318 13:52:25.756530 1157708 logs.go:276] 0 containers: []
	W0318 13:52:25.756538 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:25.756544 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:25.756608 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:25.795103 1157708 cri.go:89] found id: ""
	I0318 13:52:25.795140 1157708 logs.go:276] 0 containers: []
	W0318 13:52:25.795152 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:25.795160 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:25.795240 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:25.839908 1157708 cri.go:89] found id: ""
	I0318 13:52:25.839945 1157708 logs.go:276] 0 containers: []
	W0318 13:52:25.839957 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:25.839971 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:25.840038 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:25.881677 1157708 cri.go:89] found id: ""
	I0318 13:52:25.881711 1157708 logs.go:276] 0 containers: []
	W0318 13:52:25.881723 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:25.881732 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:25.881802 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:25.923356 1157708 cri.go:89] found id: ""
	I0318 13:52:25.923386 1157708 logs.go:276] 0 containers: []
	W0318 13:52:25.923397 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:25.923410 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:25.923469 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:25.961661 1157708 cri.go:89] found id: ""
	I0318 13:52:25.961693 1157708 logs.go:276] 0 containers: []
	W0318 13:52:25.961705 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:25.961713 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:25.961785 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:26.003198 1157708 cri.go:89] found id: ""
	I0318 13:52:26.003236 1157708 logs.go:276] 0 containers: []
	W0318 13:52:26.003248 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:26.003256 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:26.003319 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:26.041436 1157708 cri.go:89] found id: ""
	I0318 13:52:26.041471 1157708 logs.go:276] 0 containers: []
	W0318 13:52:26.041483 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:26.041496 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:26.041515 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:26.056679 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:26.056716 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:26.143900 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:26.143926 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:26.143946 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:26.226929 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:26.226964 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:26.288519 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:26.288560 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:24.652661 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:27.152317 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:25.708488 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:27.708931 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:26.807624 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:28.809030 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:31.308980 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:28.846205 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:28.861117 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:28.861190 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:28.906990 1157708 cri.go:89] found id: ""
	I0318 13:52:28.907022 1157708 logs.go:276] 0 containers: []
	W0318 13:52:28.907030 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:28.907036 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:28.907099 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:28.946271 1157708 cri.go:89] found id: ""
	I0318 13:52:28.946309 1157708 logs.go:276] 0 containers: []
	W0318 13:52:28.946322 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:28.946332 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:28.946403 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:28.990158 1157708 cri.go:89] found id: ""
	I0318 13:52:28.990185 1157708 logs.go:276] 0 containers: []
	W0318 13:52:28.990193 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:28.990199 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:28.990251 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:29.035089 1157708 cri.go:89] found id: ""
	I0318 13:52:29.035123 1157708 logs.go:276] 0 containers: []
	W0318 13:52:29.035134 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:29.035143 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:29.035209 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:29.076991 1157708 cri.go:89] found id: ""
	I0318 13:52:29.077022 1157708 logs.go:276] 0 containers: []
	W0318 13:52:29.077033 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:29.077041 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:29.077104 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:29.117106 1157708 cri.go:89] found id: ""
	I0318 13:52:29.117134 1157708 logs.go:276] 0 containers: []
	W0318 13:52:29.117150 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:29.117157 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:29.117209 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:29.159675 1157708 cri.go:89] found id: ""
	I0318 13:52:29.159704 1157708 logs.go:276] 0 containers: []
	W0318 13:52:29.159714 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:29.159722 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:29.159787 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:29.202130 1157708 cri.go:89] found id: ""
	I0318 13:52:29.202157 1157708 logs.go:276] 0 containers: []
	W0318 13:52:29.202166 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:29.202176 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:29.202189 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:29.258343 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:29.258390 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:29.275314 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:29.275360 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:29.359842 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:29.359989 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:29.360036 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:29.446021 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:29.446072 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:31.990431 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:32.007443 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:32.007508 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:32.051028 1157708 cri.go:89] found id: ""
	I0318 13:52:32.051061 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.051070 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:32.051076 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:32.051144 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:32.092914 1157708 cri.go:89] found id: ""
	I0318 13:52:32.092950 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.092962 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:32.092972 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:32.093045 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:32.154257 1157708 cri.go:89] found id: ""
	I0318 13:52:32.154291 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.154302 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:32.154309 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:32.154375 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:32.200185 1157708 cri.go:89] found id: ""
	I0318 13:52:32.200224 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.200236 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:32.200244 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:32.200309 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:32.248927 1157708 cri.go:89] found id: ""
	I0318 13:52:32.248961 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.248974 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:32.248982 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:32.249051 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:32.289829 1157708 cri.go:89] found id: ""
	I0318 13:52:32.289861 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.289870 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:32.289876 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:32.289934 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:32.334346 1157708 cri.go:89] found id: ""
	I0318 13:52:32.334379 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.334387 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:32.334393 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:32.334457 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:32.378718 1157708 cri.go:89] found id: ""
	I0318 13:52:32.378761 1157708 logs.go:276] 0 containers: []
	W0318 13:52:32.378770 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:32.378780 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:32.378795 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:32.434626 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:32.434667 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:32.451366 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:32.451402 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:32.532868 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:32.532907 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:32.532924 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:32.617556 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:32.617597 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:29.650409 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:31.651019 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:30.207993 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:32.214101 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:34.710602 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:33.807499 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:35.807738 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:35.165067 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:35.181325 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:35.181404 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:35.220570 1157708 cri.go:89] found id: ""
	I0318 13:52:35.220601 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.220612 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:35.220619 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:35.220684 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:35.263798 1157708 cri.go:89] found id: ""
	I0318 13:52:35.263830 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.263841 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:35.263848 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:35.263915 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:35.309447 1157708 cri.go:89] found id: ""
	I0318 13:52:35.309477 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.309489 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:35.309497 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:35.309567 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:35.353444 1157708 cri.go:89] found id: ""
	I0318 13:52:35.353472 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.353484 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:35.353493 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:35.353556 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:35.394563 1157708 cri.go:89] found id: ""
	I0318 13:52:35.394591 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.394599 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:35.394604 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:35.394662 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:35.433866 1157708 cri.go:89] found id: ""
	I0318 13:52:35.433899 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.433908 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:35.433915 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:35.433970 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:35.482769 1157708 cri.go:89] found id: ""
	I0318 13:52:35.482808 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.482820 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:35.482829 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:35.482899 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:35.521465 1157708 cri.go:89] found id: ""
	I0318 13:52:35.521498 1157708 logs.go:276] 0 containers: []
	W0318 13:52:35.521509 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:35.521520 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:35.521534 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:35.577759 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:35.577799 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:35.593052 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:35.593084 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:35.672751 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:35.672773 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:35.672787 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:35.752118 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:35.752171 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:34.157429 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:36.650725 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:38.652096 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:37.209435 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:39.710020 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:38.312679 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:40.807379 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:38.296677 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:38.312261 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:38.312365 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:38.350328 1157708 cri.go:89] found id: ""
	I0318 13:52:38.350362 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.350374 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:38.350382 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:38.350457 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:38.389891 1157708 cri.go:89] found id: ""
	I0318 13:52:38.389927 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.389939 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:38.389947 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:38.390005 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:38.430268 1157708 cri.go:89] found id: ""
	I0318 13:52:38.430296 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.430305 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:38.430311 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:38.430365 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:38.470830 1157708 cri.go:89] found id: ""
	I0318 13:52:38.470859 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.470873 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:38.470880 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:38.470945 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:38.510501 1157708 cri.go:89] found id: ""
	I0318 13:52:38.510538 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.510552 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:38.510560 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:38.510618 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:38.594899 1157708 cri.go:89] found id: ""
	I0318 13:52:38.594926 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.594935 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:38.594942 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:38.595021 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:38.649095 1157708 cri.go:89] found id: ""
	I0318 13:52:38.649121 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.649129 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:38.649136 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:38.649192 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:38.695263 1157708 cri.go:89] found id: ""
	I0318 13:52:38.695295 1157708 logs.go:276] 0 containers: []
	W0318 13:52:38.695307 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:38.695320 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:38.695336 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:38.780624 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:38.780666 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:38.825294 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:38.825335 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:38.877548 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:38.877596 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:38.893289 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:38.893319 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:38.971752 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:41.472865 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:41.487371 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:41.487484 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:41.524691 1157708 cri.go:89] found id: ""
	I0318 13:52:41.524724 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.524737 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:41.524746 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:41.524812 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:41.564094 1157708 cri.go:89] found id: ""
	I0318 13:52:41.564125 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.564137 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:41.564145 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:41.564210 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:41.600019 1157708 cri.go:89] found id: ""
	I0318 13:52:41.600047 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.600058 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:41.600064 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:41.600142 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:41.638320 1157708 cri.go:89] found id: ""
	I0318 13:52:41.638350 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.638363 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:41.638372 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:41.638438 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:41.680763 1157708 cri.go:89] found id: ""
	I0318 13:52:41.680798 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.680810 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:41.680818 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:41.680894 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:41.720645 1157708 cri.go:89] found id: ""
	I0318 13:52:41.720674 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.720683 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:41.720690 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:41.720741 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:41.759121 1157708 cri.go:89] found id: ""
	I0318 13:52:41.759151 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.759185 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:41.759195 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:41.759264 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:41.797006 1157708 cri.go:89] found id: ""
	I0318 13:52:41.797034 1157708 logs.go:276] 0 containers: []
	W0318 13:52:41.797043 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:41.797053 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:41.797070 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:41.853315 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:41.853353 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:41.869920 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:41.869952 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:41.947187 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:41.947219 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:41.947235 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:42.025475 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:42.025515 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:41.151466 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:43.153616 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:42.207999 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:44.709760 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:43.310812 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:45.808394 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:44.574724 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:44.598990 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:44.599068 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:44.649051 1157708 cri.go:89] found id: ""
	I0318 13:52:44.649137 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.649168 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:44.649180 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:44.649254 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:44.686423 1157708 cri.go:89] found id: ""
	I0318 13:52:44.686459 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.686468 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:44.686473 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:44.686536 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:44.726534 1157708 cri.go:89] found id: ""
	I0318 13:52:44.726564 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.726575 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:44.726583 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:44.726653 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:44.771190 1157708 cri.go:89] found id: ""
	I0318 13:52:44.771220 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.771232 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:44.771240 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:44.771311 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:44.811577 1157708 cri.go:89] found id: ""
	I0318 13:52:44.811602 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.811611 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:44.811618 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:44.811677 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:44.850717 1157708 cri.go:89] found id: ""
	I0318 13:52:44.850744 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.850756 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:44.850765 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:44.850824 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:44.890294 1157708 cri.go:89] found id: ""
	I0318 13:52:44.890321 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.890330 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:44.890344 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:44.890401 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:44.930690 1157708 cri.go:89] found id: ""
	I0318 13:52:44.930720 1157708 logs.go:276] 0 containers: []
	W0318 13:52:44.930730 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:44.930741 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:44.930757 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:44.946509 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:44.946544 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:45.029748 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:45.029777 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:45.029795 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:45.111348 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:45.111392 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:45.165156 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:45.165193 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:47.720701 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:47.734457 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:47.734520 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:47.771273 1157708 cri.go:89] found id: ""
	I0318 13:52:47.771304 1157708 logs.go:276] 0 containers: []
	W0318 13:52:47.771313 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:47.771319 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:47.771370 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:47.813779 1157708 cri.go:89] found id: ""
	I0318 13:52:47.813806 1157708 logs.go:276] 0 containers: []
	W0318 13:52:47.813816 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:47.813824 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:47.813892 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:47.855547 1157708 cri.go:89] found id: ""
	I0318 13:52:47.855576 1157708 logs.go:276] 0 containers: []
	W0318 13:52:47.855584 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:47.855590 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:47.855640 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:47.892651 1157708 cri.go:89] found id: ""
	I0318 13:52:47.892684 1157708 logs.go:276] 0 containers: []
	W0318 13:52:47.892692 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:47.892697 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:47.892752 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:47.935457 1157708 cri.go:89] found id: ""
	I0318 13:52:47.935488 1157708 logs.go:276] 0 containers: []
	W0318 13:52:47.935498 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:47.935505 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:47.935567 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:47.969335 1157708 cri.go:89] found id: ""
	I0318 13:52:47.969361 1157708 logs.go:276] 0 containers: []
	W0318 13:52:47.969370 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:47.969377 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:47.969441 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:45.651171 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:48.151833 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:47.209014 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:49.710231 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:48.310467 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:50.807495 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:48.007305 1157708 cri.go:89] found id: ""
	I0318 13:52:48.007339 1157708 logs.go:276] 0 containers: []
	W0318 13:52:48.007349 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:48.007355 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:48.007416 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:48.050230 1157708 cri.go:89] found id: ""
	I0318 13:52:48.050264 1157708 logs.go:276] 0 containers: []
	W0318 13:52:48.050276 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:48.050289 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:48.050304 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:48.106946 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:48.106993 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:48.123805 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:48.123837 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:48.201881 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:48.201907 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:48.201920 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:48.281533 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:48.281577 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:50.829561 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:50.847462 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:50.847555 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:50.889731 1157708 cri.go:89] found id: ""
	I0318 13:52:50.889759 1157708 logs.go:276] 0 containers: []
	W0318 13:52:50.889768 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:50.889774 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:50.889831 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:50.928176 1157708 cri.go:89] found id: ""
	I0318 13:52:50.928210 1157708 logs.go:276] 0 containers: []
	W0318 13:52:50.928222 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:50.928231 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:50.928294 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:50.965737 1157708 cri.go:89] found id: ""
	I0318 13:52:50.965772 1157708 logs.go:276] 0 containers: []
	W0318 13:52:50.965786 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:50.965794 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:50.965866 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:51.008038 1157708 cri.go:89] found id: ""
	I0318 13:52:51.008072 1157708 logs.go:276] 0 containers: []
	W0318 13:52:51.008081 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:51.008087 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:51.008159 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:51.050310 1157708 cri.go:89] found id: ""
	I0318 13:52:51.050340 1157708 logs.go:276] 0 containers: []
	W0318 13:52:51.050355 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:51.050363 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:51.050431 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:51.090514 1157708 cri.go:89] found id: ""
	I0318 13:52:51.090541 1157708 logs.go:276] 0 containers: []
	W0318 13:52:51.090550 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:51.090556 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:51.090608 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:51.131278 1157708 cri.go:89] found id: ""
	I0318 13:52:51.131305 1157708 logs.go:276] 0 containers: []
	W0318 13:52:51.131313 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:51.131320 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:51.131381 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:51.173370 1157708 cri.go:89] found id: ""
	I0318 13:52:51.173400 1157708 logs.go:276] 0 containers: []
	W0318 13:52:51.173411 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:51.173437 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:51.173464 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:51.260155 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:51.260204 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:51.309963 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:51.309998 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:51.367838 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:51.367889 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:51.382542 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:51.382570 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:51.459258 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:50.650524 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:52.651804 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:52.208655 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:54.209701 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:52.808292 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:55.309417 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:53.960212 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:53.978939 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:53.979004 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:54.030003 1157708 cri.go:89] found id: ""
	I0318 13:52:54.030038 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.030052 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:54.030060 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:54.030134 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:54.073487 1157708 cri.go:89] found id: ""
	I0318 13:52:54.073523 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.073535 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:54.073543 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:54.073611 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:54.115982 1157708 cri.go:89] found id: ""
	I0318 13:52:54.116010 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.116022 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:54.116029 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:54.116099 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:54.158320 1157708 cri.go:89] found id: ""
	I0318 13:52:54.158348 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.158359 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:54.158366 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:54.158433 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:54.198911 1157708 cri.go:89] found id: ""
	I0318 13:52:54.198939 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.198948 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:54.198955 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:54.199010 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:54.240628 1157708 cri.go:89] found id: ""
	I0318 13:52:54.240659 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.240671 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:54.240679 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:54.240750 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:54.279377 1157708 cri.go:89] found id: ""
	I0318 13:52:54.279409 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.279418 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:54.279424 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:54.279493 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:54.324160 1157708 cri.go:89] found id: ""
	I0318 13:52:54.324192 1157708 logs.go:276] 0 containers: []
	W0318 13:52:54.324205 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:54.324218 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:54.324237 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:54.371487 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:54.371527 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:54.423487 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:54.423526 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:54.438773 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:54.438800 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:54.518788 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:54.518810 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:54.518825 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:57.103590 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:52:57.118866 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:52:57.118932 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:52:57.159354 1157708 cri.go:89] found id: ""
	I0318 13:52:57.159383 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.159393 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:52:57.159399 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:52:57.159458 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:52:57.201114 1157708 cri.go:89] found id: ""
	I0318 13:52:57.201148 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.201159 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:52:57.201167 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:52:57.201233 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:52:57.242172 1157708 cri.go:89] found id: ""
	I0318 13:52:57.242207 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.242217 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:52:57.242224 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:52:57.242287 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:52:57.282578 1157708 cri.go:89] found id: ""
	I0318 13:52:57.282617 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.282629 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:52:57.282637 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:52:57.282706 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:52:57.323682 1157708 cri.go:89] found id: ""
	I0318 13:52:57.323707 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.323715 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:52:57.323721 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:52:57.323771 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:52:57.364946 1157708 cri.go:89] found id: ""
	I0318 13:52:57.364980 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.364991 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:52:57.365003 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:52:57.365076 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:52:57.407466 1157708 cri.go:89] found id: ""
	I0318 13:52:57.407495 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.407505 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:52:57.407511 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:52:57.407568 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:52:57.454663 1157708 cri.go:89] found id: ""
	I0318 13:52:57.454692 1157708 logs.go:276] 0 containers: []
	W0318 13:52:57.454701 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:52:57.454710 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:52:57.454722 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:52:57.509591 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:52:57.509633 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:52:57.525125 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:52:57.525155 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:52:57.602819 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:52:57.602845 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:52:57.602863 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:52:57.689001 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:52:57.689045 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:55.150589 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:57.152149 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:56.708493 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:59.208099 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:57.311780 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:52:59.312048 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:00.234252 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:00.249526 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:00.249615 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:00.290131 1157708 cri.go:89] found id: ""
	I0318 13:53:00.290160 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.290171 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:00.290178 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:00.290230 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:00.337794 1157708 cri.go:89] found id: ""
	I0318 13:53:00.337828 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.337840 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:00.337848 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:00.337907 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:00.378188 1157708 cri.go:89] found id: ""
	I0318 13:53:00.378224 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.378236 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:00.378244 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:00.378313 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:00.418940 1157708 cri.go:89] found id: ""
	I0318 13:53:00.418972 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.418981 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:00.418987 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:00.419039 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:00.461471 1157708 cri.go:89] found id: ""
	I0318 13:53:00.461502 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.461511 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:00.461518 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:00.461572 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:00.498781 1157708 cri.go:89] found id: ""
	I0318 13:53:00.498812 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.498821 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:00.498827 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:00.498885 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:00.540359 1157708 cri.go:89] found id: ""
	I0318 13:53:00.540395 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.540407 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:00.540414 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:00.540480 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:00.583597 1157708 cri.go:89] found id: ""
	I0318 13:53:00.583628 1157708 logs.go:276] 0 containers: []
	W0318 13:53:00.583636 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:00.583648 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:00.583666 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:00.639498 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:00.639534 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:00.655764 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:00.655792 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:00.742351 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:00.742386 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:00.742400 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:00.825250 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:00.825298 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:52:59.651495 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:01.651843 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:01.709438 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:04.208439 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:01.810519 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:04.308525 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:03.373938 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:03.389723 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:03.389796 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:03.429675 1157708 cri.go:89] found id: ""
	I0318 13:53:03.429710 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.429723 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:03.429732 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:03.429803 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:03.468732 1157708 cri.go:89] found id: ""
	I0318 13:53:03.468768 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.468780 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:03.468788 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:03.468841 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:03.510562 1157708 cri.go:89] found id: ""
	I0318 13:53:03.510589 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.510598 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:03.510604 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:03.510667 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:03.549842 1157708 cri.go:89] found id: ""
	I0318 13:53:03.549896 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.549909 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:03.549918 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:03.549984 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:03.590036 1157708 cri.go:89] found id: ""
	I0318 13:53:03.590076 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.590086 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:03.590093 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:03.590146 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:03.635546 1157708 cri.go:89] found id: ""
	I0318 13:53:03.635573 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.635585 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:03.635593 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:03.635660 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:03.678634 1157708 cri.go:89] found id: ""
	I0318 13:53:03.678663 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.678671 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:03.678677 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:03.678735 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:03.719666 1157708 cri.go:89] found id: ""
	I0318 13:53:03.719698 1157708 logs.go:276] 0 containers: []
	W0318 13:53:03.719709 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:03.719721 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:03.719736 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:03.762353 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:03.762388 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:03.817484 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:03.817521 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:03.832820 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:03.832850 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:03.913094 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:03.913115 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:03.913130 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:06.502556 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:06.517682 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:06.517745 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:06.562167 1157708 cri.go:89] found id: ""
	I0318 13:53:06.562202 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.562215 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:06.562223 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:06.562294 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:06.601910 1157708 cri.go:89] found id: ""
	I0318 13:53:06.601945 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.601954 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:06.601962 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:06.602022 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:06.640652 1157708 cri.go:89] found id: ""
	I0318 13:53:06.640683 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.640694 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:06.640702 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:06.640778 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:06.686781 1157708 cri.go:89] found id: ""
	I0318 13:53:06.686809 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.686818 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:06.686824 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:06.686893 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:06.727080 1157708 cri.go:89] found id: ""
	I0318 13:53:06.727107 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.727115 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:06.727121 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:06.727173 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:06.764550 1157708 cri.go:89] found id: ""
	I0318 13:53:06.764575 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.764583 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:06.764589 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:06.764641 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:06.803978 1157708 cri.go:89] found id: ""
	I0318 13:53:06.804009 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.804019 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:06.804027 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:06.804091 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:06.843983 1157708 cri.go:89] found id: ""
	I0318 13:53:06.844016 1157708 logs.go:276] 0 containers: []
	W0318 13:53:06.844027 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:06.844040 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:06.844058 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:06.905389 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:06.905424 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:06.956888 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:06.956924 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:06.973551 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:06.973594 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:07.045945 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:07.045973 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:07.045991 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:04.150852 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:06.151454 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:08.656073 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:06.211223 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:08.707939 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:06.808218 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:09.309991 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:11.310190 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:09.635227 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:09.650166 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:09.650246 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:09.695126 1157708 cri.go:89] found id: ""
	I0318 13:53:09.695153 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.695162 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:09.695168 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:09.695221 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:09.740475 1157708 cri.go:89] found id: ""
	I0318 13:53:09.740507 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.740516 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:09.740522 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:09.740591 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:09.779078 1157708 cri.go:89] found id: ""
	I0318 13:53:09.779108 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.779119 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:09.779128 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:09.779186 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:09.821252 1157708 cri.go:89] found id: ""
	I0318 13:53:09.821285 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.821297 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:09.821306 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:09.821376 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:09.860500 1157708 cri.go:89] found id: ""
	I0318 13:53:09.860537 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.860550 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:09.860558 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:09.860622 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:09.903447 1157708 cri.go:89] found id: ""
	I0318 13:53:09.903475 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.903486 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:09.903494 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:09.903550 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:09.941620 1157708 cri.go:89] found id: ""
	I0318 13:53:09.941648 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.941661 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:09.941679 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:09.941731 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:09.980066 1157708 cri.go:89] found id: ""
	I0318 13:53:09.980101 1157708 logs.go:276] 0 containers: []
	W0318 13:53:09.980113 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:09.980125 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:09.980142 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:10.036960 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:10.037000 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:10.051329 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:10.051361 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:10.130896 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:10.130925 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:10.130942 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:10.212205 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:10.212236 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:12.754623 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:12.769956 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:12.770034 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:12.809006 1157708 cri.go:89] found id: ""
	I0318 13:53:12.809032 1157708 logs.go:276] 0 containers: []
	W0318 13:53:12.809043 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:12.809051 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:12.809113 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:12.852354 1157708 cri.go:89] found id: ""
	I0318 13:53:12.852390 1157708 logs.go:276] 0 containers: []
	W0318 13:53:12.852400 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:12.852407 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:12.852476 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:12.891891 1157708 cri.go:89] found id: ""
	I0318 13:53:12.891923 1157708 logs.go:276] 0 containers: []
	W0318 13:53:12.891933 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:12.891940 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:12.891991 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:12.931753 1157708 cri.go:89] found id: ""
	I0318 13:53:12.931785 1157708 logs.go:276] 0 containers: []
	W0318 13:53:12.931795 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:12.931803 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:12.931872 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:12.971622 1157708 cri.go:89] found id: ""
	I0318 13:53:12.971653 1157708 logs.go:276] 0 containers: []
	W0318 13:53:12.971662 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:12.971669 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:12.971731 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:11.151234 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:13.157081 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:10.708177 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:13.209203 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:13.315183 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:15.808738 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:13.009893 1157708 cri.go:89] found id: ""
	I0318 13:53:13.009930 1157708 logs.go:276] 0 containers: []
	W0318 13:53:13.009943 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:13.009952 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:13.010021 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:13.045361 1157708 cri.go:89] found id: ""
	I0318 13:53:13.045396 1157708 logs.go:276] 0 containers: []
	W0318 13:53:13.045404 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:13.045411 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:13.045474 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:13.087659 1157708 cri.go:89] found id: ""
	I0318 13:53:13.087686 1157708 logs.go:276] 0 containers: []
	W0318 13:53:13.087696 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:13.087706 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:13.087721 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:13.129979 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:13.130014 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:13.183802 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:13.183836 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:13.198808 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:13.198840 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:13.272736 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:13.272764 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:13.272783 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:15.870196 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:15.887480 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:15.887551 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:15.923871 1157708 cri.go:89] found id: ""
	I0318 13:53:15.923899 1157708 logs.go:276] 0 containers: []
	W0318 13:53:15.923907 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:15.923913 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:15.923976 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:15.963870 1157708 cri.go:89] found id: ""
	I0318 13:53:15.963906 1157708 logs.go:276] 0 containers: []
	W0318 13:53:15.963917 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:15.963925 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:15.963997 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:16.009781 1157708 cri.go:89] found id: ""
	I0318 13:53:16.009815 1157708 logs.go:276] 0 containers: []
	W0318 13:53:16.009828 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:16.009837 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:16.009905 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:16.047673 1157708 cri.go:89] found id: ""
	I0318 13:53:16.047708 1157708 logs.go:276] 0 containers: []
	W0318 13:53:16.047718 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:16.047727 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:16.047793 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:16.089419 1157708 cri.go:89] found id: ""
	I0318 13:53:16.089447 1157708 logs.go:276] 0 containers: []
	W0318 13:53:16.089455 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:16.089461 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:16.089511 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:16.133563 1157708 cri.go:89] found id: ""
	I0318 13:53:16.133594 1157708 logs.go:276] 0 containers: []
	W0318 13:53:16.133604 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:16.133611 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:16.133685 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:16.174369 1157708 cri.go:89] found id: ""
	I0318 13:53:16.174404 1157708 logs.go:276] 0 containers: []
	W0318 13:53:16.174415 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:16.174423 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:16.174491 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:16.219334 1157708 cri.go:89] found id: ""
	I0318 13:53:16.219360 1157708 logs.go:276] 0 containers: []
	W0318 13:53:16.219367 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:16.219376 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:16.219389 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:16.273468 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:16.273507 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:16.288584 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:16.288612 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:16.366575 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:16.366602 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:16.366620 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:16.451031 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:16.451071 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:15.650907 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:18.151434 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:15.708015 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:17.710036 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:18.311437 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:20.807854 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:18.997536 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:19.014995 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:19.015065 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:19.064686 1157708 cri.go:89] found id: ""
	I0318 13:53:19.064719 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.064731 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:19.064739 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:19.064793 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:19.110598 1157708 cri.go:89] found id: ""
	I0318 13:53:19.110629 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.110640 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:19.110648 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:19.110739 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:19.156628 1157708 cri.go:89] found id: ""
	I0318 13:53:19.156652 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.156660 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:19.156668 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:19.156730 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:19.205993 1157708 cri.go:89] found id: ""
	I0318 13:53:19.206029 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.206042 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:19.206049 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:19.206118 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:19.253902 1157708 cri.go:89] found id: ""
	I0318 13:53:19.253935 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.253952 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:19.253960 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:19.254036 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:19.296550 1157708 cri.go:89] found id: ""
	I0318 13:53:19.296583 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.296594 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:19.296602 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:19.296667 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:19.337316 1157708 cri.go:89] found id: ""
	I0318 13:53:19.337349 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.337360 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:19.337369 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:19.337446 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:19.381503 1157708 cri.go:89] found id: ""
	I0318 13:53:19.381546 1157708 logs.go:276] 0 containers: []
	W0318 13:53:19.381565 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:19.381579 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:19.381603 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:19.461665 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:19.461691 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:19.461707 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:19.548291 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:19.548348 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:19.591296 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:19.591335 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:19.648740 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:19.648776 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:22.164970 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:22.180740 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:22.180806 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:22.223787 1157708 cri.go:89] found id: ""
	I0318 13:53:22.223820 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.223833 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:22.223840 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:22.223908 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:22.266751 1157708 cri.go:89] found id: ""
	I0318 13:53:22.266785 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.266797 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:22.266805 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:22.266876 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:22.311669 1157708 cri.go:89] found id: ""
	I0318 13:53:22.311701 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.311712 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:22.311721 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:22.311816 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:22.354687 1157708 cri.go:89] found id: ""
	I0318 13:53:22.354722 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.354733 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:22.354742 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:22.354807 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:22.395741 1157708 cri.go:89] found id: ""
	I0318 13:53:22.395767 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.395776 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:22.395782 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:22.395832 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:22.434506 1157708 cri.go:89] found id: ""
	I0318 13:53:22.434539 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.434550 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:22.434559 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:22.434612 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:22.474583 1157708 cri.go:89] found id: ""
	I0318 13:53:22.474612 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.474621 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:22.474627 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:22.474690 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:22.521898 1157708 cri.go:89] found id: ""
	I0318 13:53:22.521943 1157708 logs.go:276] 0 containers: []
	W0318 13:53:22.521955 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:22.521968 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:22.521989 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:22.537679 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:22.537711 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:22.619575 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:22.619605 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:22.619621 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:22.704206 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:22.704265 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:22.753470 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:22.753502 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:20.650340 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:22.653036 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:20.213398 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:22.709150 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:22.808837 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:25.308831 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:25.311578 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:25.329917 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:25.329979 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:25.373784 1157708 cri.go:89] found id: ""
	I0318 13:53:25.373818 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.373826 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:25.373833 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:25.373901 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:25.422490 1157708 cri.go:89] found id: ""
	I0318 13:53:25.422516 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.422526 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:25.422532 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:25.422597 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:25.459523 1157708 cri.go:89] found id: ""
	I0318 13:53:25.459552 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.459560 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:25.459567 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:25.459627 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:25.495647 1157708 cri.go:89] found id: ""
	I0318 13:53:25.495683 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.495695 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:25.495702 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:25.495772 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:25.534582 1157708 cri.go:89] found id: ""
	I0318 13:53:25.534617 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.534626 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:25.534632 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:25.534704 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:25.577526 1157708 cri.go:89] found id: ""
	I0318 13:53:25.577558 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.577566 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:25.577573 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:25.577687 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:25.616403 1157708 cri.go:89] found id: ""
	I0318 13:53:25.616433 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.616445 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:25.616453 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:25.616527 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:25.660444 1157708 cri.go:89] found id: ""
	I0318 13:53:25.660474 1157708 logs.go:276] 0 containers: []
	W0318 13:53:25.660482 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:25.660492 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:25.660506 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:25.715595 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:25.715641 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:25.730358 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:25.730390 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:25.803153 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:25.803239 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:25.803261 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:25.885339 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:25.885388 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:25.150276 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:27.151389 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:25.214042 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:27.710185 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:27.807095 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:29.807177 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:28.433506 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:28.449402 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:28.449481 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:28.490972 1157708 cri.go:89] found id: ""
	I0318 13:53:28.491007 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.491019 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:28.491028 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:28.491094 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:28.531406 1157708 cri.go:89] found id: ""
	I0318 13:53:28.531439 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.531451 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:28.531460 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:28.531513 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:28.570299 1157708 cri.go:89] found id: ""
	I0318 13:53:28.570334 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.570345 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:28.570352 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:28.570408 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:28.607950 1157708 cri.go:89] found id: ""
	I0318 13:53:28.607979 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.607987 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:28.607994 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:28.608066 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:28.648710 1157708 cri.go:89] found id: ""
	I0318 13:53:28.648744 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.648755 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:28.648762 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:28.648830 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:28.691071 1157708 cri.go:89] found id: ""
	I0318 13:53:28.691102 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.691114 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:28.691122 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:28.691183 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:28.734399 1157708 cri.go:89] found id: ""
	I0318 13:53:28.734438 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.734452 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:28.734461 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:28.734548 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:28.774859 1157708 cri.go:89] found id: ""
	I0318 13:53:28.774891 1157708 logs.go:276] 0 containers: []
	W0318 13:53:28.774902 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:28.774912 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:28.774927 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:28.831420 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:28.831459 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:28.847970 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:28.848008 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:28.926007 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:28.926034 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:28.926051 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:29.007525 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:29.007577 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:31.555401 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:31.570964 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:31.571046 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:31.611400 1157708 cri.go:89] found id: ""
	I0318 13:53:31.611427 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.611438 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:31.611445 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:31.611510 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:31.654572 1157708 cri.go:89] found id: ""
	I0318 13:53:31.654602 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.654614 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:31.654622 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:31.654725 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:31.692649 1157708 cri.go:89] found id: ""
	I0318 13:53:31.692673 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.692681 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:31.692686 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:31.692748 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:31.732208 1157708 cri.go:89] found id: ""
	I0318 13:53:31.732233 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.732244 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:31.732253 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:31.732320 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:31.774132 1157708 cri.go:89] found id: ""
	I0318 13:53:31.774163 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.774172 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:31.774178 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:31.774234 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:31.813558 1157708 cri.go:89] found id: ""
	I0318 13:53:31.813582 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.813590 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:31.813597 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:31.813651 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:31.862024 1157708 cri.go:89] found id: ""
	I0318 13:53:31.862057 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.862070 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:31.862077 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:31.862146 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:31.903941 1157708 cri.go:89] found id: ""
	I0318 13:53:31.903972 1157708 logs.go:276] 0 containers: []
	W0318 13:53:31.903982 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:31.903992 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:31.904006 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:31.957327 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:31.957366 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:31.973337 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:31.973380 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:32.053702 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:32.053730 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:32.053744 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:32.134859 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:32.134911 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:29.649648 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:31.651426 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:33.651936 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:30.208512 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:32.709020 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:31.808276 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:33.811370 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:36.314374 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:34.683335 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:34.700383 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:34.700490 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:34.744387 1157708 cri.go:89] found id: ""
	I0318 13:53:34.744420 1157708 logs.go:276] 0 containers: []
	W0318 13:53:34.744432 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:34.744441 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:34.744509 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:34.788122 1157708 cri.go:89] found id: ""
	I0318 13:53:34.788150 1157708 logs.go:276] 0 containers: []
	W0318 13:53:34.788160 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:34.788166 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:34.788221 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:34.834760 1157708 cri.go:89] found id: ""
	I0318 13:53:34.834795 1157708 logs.go:276] 0 containers: []
	W0318 13:53:34.834808 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:34.834817 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:34.834894 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:34.882028 1157708 cri.go:89] found id: ""
	I0318 13:53:34.882062 1157708 logs.go:276] 0 containers: []
	W0318 13:53:34.882073 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:34.882081 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:34.882150 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:34.933339 1157708 cri.go:89] found id: ""
	I0318 13:53:34.933364 1157708 logs.go:276] 0 containers: []
	W0318 13:53:34.933374 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:34.933384 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:34.933451 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:34.972362 1157708 cri.go:89] found id: ""
	I0318 13:53:34.972395 1157708 logs.go:276] 0 containers: []
	W0318 13:53:34.972407 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:34.972416 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:34.972486 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:35.008949 1157708 cri.go:89] found id: ""
	I0318 13:53:35.008986 1157708 logs.go:276] 0 containers: []
	W0318 13:53:35.008999 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:35.009007 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:35.009080 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:35.054698 1157708 cri.go:89] found id: ""
	I0318 13:53:35.054733 1157708 logs.go:276] 0 containers: []
	W0318 13:53:35.054742 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:35.054756 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:35.054770 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:35.109391 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:35.109450 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:35.126785 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:35.126818 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:35.214303 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:35.214329 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:35.214342 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:35.298705 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:35.298750 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:37.843701 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:37.859330 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:37.859415 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:37.903428 1157708 cri.go:89] found id: ""
	I0318 13:53:37.903466 1157708 logs.go:276] 0 containers: []
	W0318 13:53:37.903479 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:37.903497 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:37.903560 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:37.943687 1157708 cri.go:89] found id: ""
	I0318 13:53:37.943716 1157708 logs.go:276] 0 containers: []
	W0318 13:53:37.943727 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:37.943735 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:37.943804 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:37.986201 1157708 cri.go:89] found id: ""
	I0318 13:53:37.986233 1157708 logs.go:276] 0 containers: []
	W0318 13:53:37.986244 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:37.986252 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:37.986322 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:36.151976 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:38.152281 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:35.209205 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:37.709122 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:38.806794 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:40.807552 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:38.026776 1157708 cri.go:89] found id: ""
	I0318 13:53:38.026813 1157708 logs.go:276] 0 containers: []
	W0318 13:53:38.026825 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:38.026832 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:38.026907 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:38.073057 1157708 cri.go:89] found id: ""
	I0318 13:53:38.073088 1157708 logs.go:276] 0 containers: []
	W0318 13:53:38.073098 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:38.073105 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:38.073172 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:38.110576 1157708 cri.go:89] found id: ""
	I0318 13:53:38.110611 1157708 logs.go:276] 0 containers: []
	W0318 13:53:38.110624 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:38.110632 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:38.110702 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:38.154293 1157708 cri.go:89] found id: ""
	I0318 13:53:38.154319 1157708 logs.go:276] 0 containers: []
	W0318 13:53:38.154327 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:38.154338 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:38.154414 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:38.195407 1157708 cri.go:89] found id: ""
	I0318 13:53:38.195434 1157708 logs.go:276] 0 containers: []
	W0318 13:53:38.195444 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:38.195454 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:38.195469 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:38.254159 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:38.254210 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:38.269143 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:38.269175 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:38.349819 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:38.349845 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:38.349864 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:38.435121 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:38.435164 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:40.982438 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:40.998483 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:40.998559 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:41.037470 1157708 cri.go:89] found id: ""
	I0318 13:53:41.037497 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.037506 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:41.037512 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:41.037583 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:41.078428 1157708 cri.go:89] found id: ""
	I0318 13:53:41.078463 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.078473 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:41.078482 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:41.078548 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:41.121342 1157708 cri.go:89] found id: ""
	I0318 13:53:41.121371 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.121382 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:41.121391 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:41.121482 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:41.164124 1157708 cri.go:89] found id: ""
	I0318 13:53:41.164149 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.164159 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:41.164167 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:41.164229 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:41.210294 1157708 cri.go:89] found id: ""
	I0318 13:53:41.210321 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.210329 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:41.210336 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:41.210407 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:41.253934 1157708 cri.go:89] found id: ""
	I0318 13:53:41.253957 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.253967 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:41.253973 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:41.254039 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:41.298817 1157708 cri.go:89] found id: ""
	I0318 13:53:41.298849 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.298861 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:41.298870 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:41.298936 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:41.344109 1157708 cri.go:89] found id: ""
	I0318 13:53:41.344137 1157708 logs.go:276] 0 containers: []
	W0318 13:53:41.344146 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:41.344156 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:41.344170 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:41.401026 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:41.401061 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:41.416197 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:41.416229 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:41.495349 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:41.495375 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:41.495393 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:41.578201 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:41.578253 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:40.651687 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:43.152619 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:40.208445 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:42.208613 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:44.210573 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:42.808665 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:45.309099 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:44.126601 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:44.140971 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:44.141048 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:44.184758 1157708 cri.go:89] found id: ""
	I0318 13:53:44.184786 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.184794 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:44.184801 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:44.184851 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:44.230793 1157708 cri.go:89] found id: ""
	I0318 13:53:44.230824 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.230836 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:44.230842 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:44.230916 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:44.269561 1157708 cri.go:89] found id: ""
	I0318 13:53:44.269594 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.269606 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:44.269614 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:44.269680 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:44.310847 1157708 cri.go:89] found id: ""
	I0318 13:53:44.310878 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.310889 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:44.310898 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:44.310970 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:44.350827 1157708 cri.go:89] found id: ""
	I0318 13:53:44.350860 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.350878 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:44.350887 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:44.350956 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:44.389693 1157708 cri.go:89] found id: ""
	I0318 13:53:44.389721 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.389730 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:44.389735 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:44.389804 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:44.429254 1157708 cri.go:89] found id: ""
	I0318 13:53:44.429280 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.429289 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:44.429303 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:44.429354 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:44.468484 1157708 cri.go:89] found id: ""
	I0318 13:53:44.468513 1157708 logs.go:276] 0 containers: []
	W0318 13:53:44.468525 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:44.468538 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:44.468555 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:44.525012 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:44.525058 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:44.541638 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:44.541668 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:44.621779 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:44.621801 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:44.621814 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:44.706797 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:44.706884 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:47.253569 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:47.268808 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:47.268888 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:47.313191 1157708 cri.go:89] found id: ""
	I0318 13:53:47.313220 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.313232 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:47.313240 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:47.313307 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:47.357567 1157708 cri.go:89] found id: ""
	I0318 13:53:47.357600 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.357611 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:47.357619 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:47.357688 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:47.392300 1157708 cri.go:89] found id: ""
	I0318 13:53:47.392341 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.392352 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:47.392366 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:47.392437 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:47.432800 1157708 cri.go:89] found id: ""
	I0318 13:53:47.432830 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.432842 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:47.432857 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:47.432921 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:47.469563 1157708 cri.go:89] found id: ""
	I0318 13:53:47.469591 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.469599 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:47.469605 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:47.469668 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:47.508770 1157708 cri.go:89] found id: ""
	I0318 13:53:47.508799 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.508810 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:47.508820 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:47.508880 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:47.549876 1157708 cri.go:89] found id: ""
	I0318 13:53:47.549909 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.549921 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:47.549930 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:47.549997 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:47.591385 1157708 cri.go:89] found id: ""
	I0318 13:53:47.591413 1157708 logs.go:276] 0 containers: []
	W0318 13:53:47.591421 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:47.591431 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:47.591446 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:47.646284 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:47.646313 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:47.662609 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:47.662639 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:47.737371 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:47.737398 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:47.737415 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:47.817311 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:47.817342 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:45.652845 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:48.150199 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:46.707734 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:48.709977 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:47.807238 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:50.308767 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:50.363832 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:50.380029 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:50.380109 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:50.427452 1157708 cri.go:89] found id: ""
	I0318 13:53:50.427484 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.427496 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:50.427505 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:50.427579 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:50.466766 1157708 cri.go:89] found id: ""
	I0318 13:53:50.466793 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.466801 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:50.466808 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:50.466894 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:50.506768 1157708 cri.go:89] found id: ""
	I0318 13:53:50.506799 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.506811 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:50.506819 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:50.506882 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:50.545554 1157708 cri.go:89] found id: ""
	I0318 13:53:50.545592 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.545605 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:50.545613 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:50.545685 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:50.583949 1157708 cri.go:89] found id: ""
	I0318 13:53:50.583984 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.583995 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:50.584004 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:50.584083 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:50.624730 1157708 cri.go:89] found id: ""
	I0318 13:53:50.624763 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.624774 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:50.624783 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:50.624853 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:50.664300 1157708 cri.go:89] found id: ""
	I0318 13:53:50.664346 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.664358 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:50.664366 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:50.664420 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:50.702760 1157708 cri.go:89] found id: ""
	I0318 13:53:50.702793 1157708 logs.go:276] 0 containers: []
	W0318 13:53:50.702805 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:50.702817 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:50.702833 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:50.757188 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:50.757237 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:50.772151 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:50.772195 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:50.856872 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:50.856898 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:50.856917 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:50.937706 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:50.937749 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:50.654814 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:53.151970 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:50.710233 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:53.209443 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:52.309529 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:54.809399 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:53.481836 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:53.497792 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:53:53.497856 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:53:53.535376 1157708 cri.go:89] found id: ""
	I0318 13:53:53.535411 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.535420 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:53:53.535427 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:53:53.535486 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:53:53.575002 1157708 cri.go:89] found id: ""
	I0318 13:53:53.575030 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.575042 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:53:53.575050 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:53:53.575119 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:53:53.615880 1157708 cri.go:89] found id: ""
	I0318 13:53:53.615919 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.615931 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:53:53.615940 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:53:53.616007 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:53:53.681746 1157708 cri.go:89] found id: ""
	I0318 13:53:53.681786 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.681799 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:53:53.681810 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:53:53.681887 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:53:53.725219 1157708 cri.go:89] found id: ""
	I0318 13:53:53.725241 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.725250 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:53:53.725256 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:53:53.725317 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:53:53.766969 1157708 cri.go:89] found id: ""
	I0318 13:53:53.767006 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.767018 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:53:53.767026 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:53:53.767091 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:53:53.802103 1157708 cri.go:89] found id: ""
	I0318 13:53:53.802134 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.802145 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:53:53.802157 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:53:53.802210 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:53:53.843054 1157708 cri.go:89] found id: ""
	I0318 13:53:53.843085 1157708 logs.go:276] 0 containers: []
	W0318 13:53:53.843093 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:53:53.843103 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:53:53.843117 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:53:53.899794 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:53:53.899836 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:53:53.915559 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:53:53.915592 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:53:53.996410 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0318 13:53:53.996438 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:53:53.996456 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:53:54.085588 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:53:54.085628 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:53:56.632201 1157708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:53:56.648183 1157708 kubeadm.go:591] duration metric: took 4m3.550073086s to restartPrimaryControlPlane
	W0318 13:53:56.648381 1157708 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 13:53:56.648422 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 13:53:55.152626 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:57.650951 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:55.209511 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:57.709324 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:59.710029 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:59.666187 1157708 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (3.017736279s)
	I0318 13:53:59.666270 1157708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:53:59.682887 1157708 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:53:59.694626 1157708 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:53:59.706577 1157708 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:53:59.706599 1157708 kubeadm.go:156] found existing configuration files:
	
	I0318 13:53:59.706648 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:53:59.718311 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:53:59.718371 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:53:59.729298 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:53:59.741351 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:53:59.741401 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:53:59.753652 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:53:59.765642 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:53:59.765695 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:53:59.778055 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:53:59.789994 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:53:59.790042 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:53:59.801292 1157708 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 13:53:59.879414 1157708 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 13:53:59.879516 1157708 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 13:54:00.046477 1157708 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 13:54:00.046660 1157708 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 13:54:00.046819 1157708 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 13:54:00.257070 1157708 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 13:54:00.259191 1157708 out.go:204]   - Generating certificates and keys ...
	I0318 13:54:00.259333 1157708 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 13:54:00.259434 1157708 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 13:54:00.259549 1157708 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 13:54:00.259658 1157708 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 13:54:00.259782 1157708 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 13:54:00.259857 1157708 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 13:54:00.259949 1157708 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 13:54:00.260033 1157708 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 13:54:00.260136 1157708 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 13:54:00.260244 1157708 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 13:54:00.260299 1157708 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 13:54:00.260394 1157708 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 13:54:00.423400 1157708 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 13:54:00.543983 1157708 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 13:54:00.796108 1157708 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 13:54:00.901121 1157708 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 13:54:00.918891 1157708 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 13:54:00.920502 1157708 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 13:54:00.920642 1157708 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 13:54:01.094176 1157708 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 13:53:57.306878 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:53:59.308670 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:01.096397 1157708 out.go:204]   - Booting up control plane ...
	I0318 13:54:01.096539 1157708 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 13:54:01.107816 1157708 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 13:54:01.108753 1157708 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 13:54:01.109641 1157708 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 13:54:01.111913 1157708 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 13:54:00.150985 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:02.151139 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:02.208577 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:04.209527 1157416 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:04.701940 1157416 pod_ready.go:81] duration metric: took 4m0.000915275s for pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace to be "Ready" ...
	E0318 13:54:04.701995 1157416 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-hhh5m" in "kube-system" namespace to be "Ready" (will not retry!)
	I0318 13:54:04.702022 1157416 pod_ready.go:38] duration metric: took 4m12.048388069s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:54:04.702063 1157416 kubeadm.go:591] duration metric: took 4m22.220919415s to restartPrimaryControlPlane
	W0318 13:54:04.702133 1157416 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 13:54:04.702168 1157416 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 13:54:01.807445 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:04.308435 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:04.151252 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:06.152296 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:08.162574 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:06.809148 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:08.811335 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:11.306999 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:10.650696 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:12.651741 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:13.308835 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:15.807754 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:15.150875 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:17.653698 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:18.308137 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:20.308720 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:20.152545 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:22.650685 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:22.807655 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:24.807765 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:25.150664 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:27.650092 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:26.808311 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:29.311683 1157887 pod_ready.go:102] pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:31.301320 1157887 pod_ready.go:81] duration metric: took 4m0.001048401s for pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace to be "Ready" ...
	E0318 13:54:31.301351 1157887 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-2sb4m" in "kube-system" namespace to be "Ready" (will not retry!)
	I0318 13:54:31.301372 1157887 pod_ready.go:38] duration metric: took 4m12.063560637s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:54:31.301397 1157887 kubeadm.go:591] duration metric: took 4m19.202321881s to restartPrimaryControlPlane
	W0318 13:54:31.301478 1157887 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 13:54:31.301505 1157887 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 13:54:29.651334 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:32.152059 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:34.651230 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:37.151130 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:37.018723 1157416 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.31652367s)
	I0318 13:54:37.018822 1157416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:54:37.036348 1157416 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:54:37.047932 1157416 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:54:37.058846 1157416 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:54:37.058875 1157416 kubeadm.go:156] found existing configuration files:
	
	I0318 13:54:37.058920 1157416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:54:37.069333 1157416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:54:37.069396 1157416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:54:37.080053 1157416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:54:37.090110 1157416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:54:37.090170 1157416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:54:37.101032 1157416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:54:37.111052 1157416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:54:37.111124 1157416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:54:37.121867 1157416 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:54:37.132057 1157416 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:54:37.132104 1157416 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
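The run above is minikube's stale-kubeconfig cleanup for the no-preload profile: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint, and any file that fails the check (here they are simply missing) is removed so that the following `kubeadm init` can regenerate it. A minimal shell sketch of that pattern, using only the commands visible in the log — the loop itself is illustrative; minikube issues these one at a time from Go:

    # Sketch of the stale-config check shown in the log (loop structure assumed).
    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      path="/etc/kubernetes/$f"
      if ! sudo grep -q "$endpoint" "$path"; then
        # Missing or pointing at the wrong endpoint: remove so kubeadm rewrites it.
        sudo rm -f "$path"
      fi
    done
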
	I0318 13:54:37.143057 1157416 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 13:54:37.368813 1157416 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 13:54:41.111826 1157708 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 13:54:41.111977 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:54:41.112236 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
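Meanwhile a second profile (pid 1157708) is stuck in kubeadm's [kubelet-check] phase: kubeadm polls the kubelet's local healthz endpoint and keeps getting connection refused because the kubelet has not come up. The probe it describes is an ordinary HTTP check; reproduced by hand on that node it would be (the systemctl follow-up is an illustrative addition, not taken from the log):

    # The health probe kubeadm reports above; "ok" on a healthy node,
    # "connection refused" here because the kubelet is not running.
    curl -sSL http://localhost:10248/healthz
    sudo systemctl status kubelet --no-pager
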
	I0318 13:54:39.151250 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:41.652026 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:43.652929 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:46.082340 1157416 kubeadm.go:309] [init] Using Kubernetes version: v1.29.0-rc.2
	I0318 13:54:46.082410 1157416 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 13:54:46.082482 1157416 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 13:54:46.082561 1157416 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 13:54:46.082639 1157416 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 13:54:46.082692 1157416 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 13:54:46.084374 1157416 out.go:204]   - Generating certificates and keys ...
	I0318 13:54:46.084495 1157416 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 13:54:46.084584 1157416 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 13:54:46.084681 1157416 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 13:54:46.084767 1157416 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 13:54:46.084844 1157416 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 13:54:46.084933 1157416 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 13:54:46.085039 1157416 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 13:54:46.085131 1157416 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 13:54:46.085255 1157416 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 13:54:46.085344 1157416 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 13:54:46.085415 1157416 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 13:54:46.085491 1157416 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 13:54:46.085569 1157416 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 13:54:46.085637 1157416 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0318 13:54:46.085704 1157416 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 13:54:46.085791 1157416 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 13:54:46.085894 1157416 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 13:54:46.086010 1157416 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 13:54:46.086104 1157416 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 13:54:46.087481 1157416 out.go:204]   - Booting up control plane ...
	I0318 13:54:46.087576 1157416 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 13:54:46.087642 1157416 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 13:54:46.087698 1157416 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 13:54:46.087782 1157416 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 13:54:46.087865 1157416 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 13:54:46.087917 1157416 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 13:54:46.088051 1157416 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 13:54:46.088146 1157416 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003020 seconds
	I0318 13:54:46.088306 1157416 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 13:54:46.088501 1157416 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 13:54:46.088585 1157416 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 13:54:46.088770 1157416 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-537236 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 13:54:46.088826 1157416 kubeadm.go:309] [bootstrap-token] Using token: fk6yfh.vd0dmh72kd97vm2h
	I0318 13:54:46.091265 1157416 out.go:204]   - Configuring RBAC rules ...
	I0318 13:54:46.091375 1157416 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 13:54:46.091449 1157416 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 13:54:46.091656 1157416 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 13:54:46.091839 1157416 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 13:54:46.092014 1157416 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 13:54:46.092136 1157416 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 13:54:46.092289 1157416 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 13:54:46.092370 1157416 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 13:54:46.092436 1157416 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 13:54:46.092445 1157416 kubeadm.go:309] 
	I0318 13:54:46.092513 1157416 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 13:54:46.092522 1157416 kubeadm.go:309] 
	I0318 13:54:46.092588 1157416 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 13:54:46.092594 1157416 kubeadm.go:309] 
	I0318 13:54:46.092614 1157416 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 13:54:46.092704 1157416 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 13:54:46.092749 1157416 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 13:54:46.092755 1157416 kubeadm.go:309] 
	I0318 13:54:46.092805 1157416 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 13:54:46.092818 1157416 kubeadm.go:309] 
	I0318 13:54:46.092892 1157416 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 13:54:46.092906 1157416 kubeadm.go:309] 
	I0318 13:54:46.092982 1157416 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 13:54:46.093100 1157416 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 13:54:46.093212 1157416 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 13:54:46.093225 1157416 kubeadm.go:309] 
	I0318 13:54:46.093335 1157416 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 13:54:46.093448 1157416 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 13:54:46.093457 1157416 kubeadm.go:309] 
	I0318 13:54:46.093539 1157416 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token fk6yfh.vd0dmh72kd97vm2h \
	I0318 13:54:46.093684 1157416 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a1344f0f3711f58fc1cd7b9626e9cee7b8515c38b8838e5f1a0d7979c7202ddf \
	I0318 13:54:46.093717 1157416 kubeadm.go:309] 	--control-plane 
	I0318 13:54:46.093723 1157416 kubeadm.go:309] 
	I0318 13:54:46.093848 1157416 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 13:54:46.093860 1157416 kubeadm.go:309] 
	I0318 13:54:46.093946 1157416 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token fk6yfh.vd0dmh72kd97vm2h \
	I0318 13:54:46.094071 1157416 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a1344f0f3711f58fc1cd7b9626e9cee7b8515c38b8838e5f1a0d7979c7202ddf 
	I0318 13:54:46.094105 1157416 cni.go:84] Creating CNI manager for ""
	I0318 13:54:46.094119 1157416 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:54:46.095717 1157416 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 13:54:46.112502 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:54:46.112797 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:54:46.152713 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:48.651676 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:46.096953 1157416 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 13:54:46.127007 1157416 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
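With init finished, minikube configures the bridge CNI for no-preload-537236: it creates /etc/cni/net.d and copies a 457-byte conflist named 1-k8s.conflist onto the node. One way to inspect what was written — an illustrative check, not something the test performs — is to run the commands over minikube's ssh wrapper:

    # Hypothetical inspection of the CNI config the log says was copied over.
    minikube -p no-preload-537236 ssh "ls -l /etc/cni/net.d"
    minikube -p no-preload-537236 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"
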
	I0318 13:54:46.178588 1157416 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 13:54:46.178768 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:46.178785 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-537236 minikube.k8s.io/updated_at=2024_03_18T13_54_46_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a minikube.k8s.io/name=no-preload-537236 minikube.k8s.io/primary=true
	I0318 13:54:46.231974 1157416 ops.go:34] apiserver oom_adj: -16
	I0318 13:54:46.582048 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:47.082295 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:47.582447 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:48.082146 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:48.583155 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:49.082463 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:49.583104 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:51.153753 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:53.654740 1157263 pod_ready.go:102] pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace has status "Ready":"False"
	I0318 13:54:50.082163 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:50.582159 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:51.082921 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:51.582616 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:52.082686 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:52.582520 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:53.082920 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:53.582281 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:54.082711 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:54.582110 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:56.112956 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:54:56.113210 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:54:55.082805 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:55.583034 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:56.082777 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:56.582491 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:57.082739 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:57.582854 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:58.082715 1157416 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:54:58.189802 1157416 kubeadm.go:1107] duration metric: took 12.011111335s to wait for elevateKubeSystemPrivileges
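The burst of `kubectl get sa default` calls above is the elevateKubeSystemPrivileges step: minikube polls until the `default` service account exists, so that the `minikube-rbac` cluster-admin binding created for kube-system:default at 13:54:46 can take effect. A rough shell equivalent of the polling, built from the exact command in the log (the loop and sleep interval are assumptions; the log shows retries roughly every half second):

    KUBECTL=/var/lib/minikube/binaries/v1.29.0-rc.2/kubectl
    # Poll until the default service account appears in the fresh cluster.
    until sudo "$KUBECTL" get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
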
	W0318 13:54:58.189865 1157416 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 13:54:58.189878 1157416 kubeadm.go:393] duration metric: took 5m15.77131157s to StartCluster
	I0318 13:54:58.189991 1157416 settings.go:142] acquiring lock: {Name:mk2d6b94ee5fa5f1dbbb15ba1d5560c3c0f78110 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:54:58.190130 1157416 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 13:54:58.191965 1157416 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/kubeconfig: {Name:mk9c139f2702214315ee08dd7c5d02f739047458 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:54:58.192315 1157416 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.7 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 13:54:58.194158 1157416 out.go:177] * Verifying Kubernetes components...
	I0318 13:54:58.192460 1157416 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 13:54:58.192549 1157416 config.go:182] Loaded profile config "no-preload-537236": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0318 13:54:58.194270 1157416 addons.go:69] Setting storage-provisioner=true in profile "no-preload-537236"
	I0318 13:54:58.195604 1157416 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:54:58.195628 1157416 addons.go:234] Setting addon storage-provisioner=true in "no-preload-537236"
	W0318 13:54:58.195646 1157416 addons.go:243] addon storage-provisioner should already be in state true
	I0318 13:54:58.194275 1157416 addons.go:69] Setting default-storageclass=true in profile "no-preload-537236"
	I0318 13:54:58.195741 1157416 host.go:66] Checking if "no-preload-537236" exists ...
	I0318 13:54:58.195748 1157416 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-537236"
	I0318 13:54:58.194278 1157416 addons.go:69] Setting metrics-server=true in profile "no-preload-537236"
	I0318 13:54:58.195816 1157416 addons.go:234] Setting addon metrics-server=true in "no-preload-537236"
	W0318 13:54:58.195835 1157416 addons.go:243] addon metrics-server should already be in state true
	I0318 13:54:58.195864 1157416 host.go:66] Checking if "no-preload-537236" exists ...
	I0318 13:54:58.196133 1157416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:54:58.196177 1157416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:54:58.196187 1157416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:54:58.196224 1157416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:54:58.196236 1157416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:54:58.196256 1157416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:54:58.218212 1157416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36677
	I0318 13:54:58.218703 1157416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34827
	I0318 13:54:58.218934 1157416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35455
	I0318 13:54:58.219717 1157416 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:54:58.219858 1157416 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:54:58.220143 1157416 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:54:58.220417 1157416 main.go:141] libmachine: Using API Version  1
	I0318 13:54:58.220443 1157416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:54:58.220478 1157416 main.go:141] libmachine: Using API Version  1
	I0318 13:54:58.220497 1157416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:54:58.220628 1157416 main.go:141] libmachine: Using API Version  1
	I0318 13:54:58.220650 1157416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:54:58.220882 1157416 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:54:58.220950 1157416 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:54:58.220973 1157416 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:54:58.221491 1157416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:54:58.221527 1157416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:54:58.221736 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetState
	I0318 13:54:58.222116 1157416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:54:58.222138 1157416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:54:58.226247 1157416 addons.go:234] Setting addon default-storageclass=true in "no-preload-537236"
	W0318 13:54:58.226271 1157416 addons.go:243] addon default-storageclass should already be in state true
	I0318 13:54:58.226303 1157416 host.go:66] Checking if "no-preload-537236" exists ...
	I0318 13:54:58.226691 1157416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:54:58.226719 1157416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:54:58.238772 1157416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40275
	I0318 13:54:58.239288 1157416 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:54:58.239925 1157416 main.go:141] libmachine: Using API Version  1
	I0318 13:54:58.239954 1157416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:54:58.240375 1157416 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:54:58.240581 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetState
	I0318 13:54:58.241297 1157416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44327
	I0318 13:54:58.241774 1157416 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:54:58.242300 1157416 main.go:141] libmachine: Using API Version  1
	I0318 13:54:58.242321 1157416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:54:58.242787 1157416 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:54:58.243001 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetState
	I0318 13:54:58.243033 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:54:58.245371 1157416 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 13:54:58.245038 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:54:58.246964 1157416 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 13:54:58.246981 1157416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 13:54:58.246429 1157416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34901
	I0318 13:54:58.247010 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:54:58.248738 1157416 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:54:54.143902 1157263 pod_ready.go:81] duration metric: took 4m0.000627482s for pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace to be "Ready" ...
	E0318 13:54:54.143947 1157263 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-5cv2z" in "kube-system" namespace to be "Ready" (will not retry!)
	I0318 13:54:54.143967 1157263 pod_ready.go:38] duration metric: took 4m9.565422592s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:54:54.143994 1157263 kubeadm.go:591] duration metric: took 4m17.754456341s to restartPrimaryControlPlane
	W0318 13:54:54.144061 1157263 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0318 13:54:54.144092 1157263 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 13:54:58.247424 1157416 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:54:58.250418 1157416 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 13:54:58.250441 1157416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 13:54:58.250459 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:54:58.250666 1157416 main.go:141] libmachine: Using API Version  1
	I0318 13:54:58.250683 1157416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:54:58.250733 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:54:58.251012 1157416 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:54:58.251354 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:54:58.251384 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:54:58.251730 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:54:58.252053 1157416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:54:58.252082 1157416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:54:58.252627 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:54:58.252823 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:54:58.252974 1157416 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa Username:docker}
	I0318 13:54:58.253647 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:54:58.254073 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:54:58.254102 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:54:58.254393 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:54:58.254599 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:54:58.254720 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:54:58.254858 1157416 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa Username:docker}
	I0318 13:54:58.275785 1157416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35695
	I0318 13:54:58.276467 1157416 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:54:58.277007 1157416 main.go:141] libmachine: Using API Version  1
	I0318 13:54:58.277037 1157416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:54:58.277396 1157416 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:54:58.277594 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetState
	I0318 13:54:58.279419 1157416 main.go:141] libmachine: (no-preload-537236) Calling .DriverName
	I0318 13:54:58.279699 1157416 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 13:54:58.279719 1157416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 13:54:58.279740 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHHostname
	I0318 13:54:58.282813 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:54:58.283168 1157416 main.go:141] libmachine: (no-preload-537236) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:a8:12", ip: ""} in network mk-no-preload-537236: {Iface:virbr1 ExpiryTime:2024-03-18 14:39:38 +0000 UTC Type:0 Mac:52:54:00:21:a8:12 Iaid: IPaddr:192.168.39.7 Prefix:24 Hostname:no-preload-537236 Clientid:01:52:54:00:21:a8:12}
	I0318 13:54:58.283198 1157416 main.go:141] libmachine: (no-preload-537236) DBG | domain no-preload-537236 has defined IP address 192.168.39.7 and MAC address 52:54:00:21:a8:12 in network mk-no-preload-537236
	I0318 13:54:58.283319 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHPort
	I0318 13:54:58.283505 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHKeyPath
	I0318 13:54:58.283643 1157416 main.go:141] libmachine: (no-preload-537236) Calling .GetSSHUsername
	I0318 13:54:58.283826 1157416 sshutil.go:53] new ssh client: &{IP:192.168.39.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/no-preload-537236/id_rsa Username:docker}
	I0318 13:54:58.433881 1157416 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:54:58.466338 1157416 node_ready.go:35] waiting up to 6m0s for node "no-preload-537236" to be "Ready" ...
	I0318 13:54:58.485186 1157416 node_ready.go:49] node "no-preload-537236" has status "Ready":"True"
	I0318 13:54:58.485217 1157416 node_ready.go:38] duration metric: took 18.833477ms for node "no-preload-537236" to be "Ready" ...
	I0318 13:54:58.485230 1157416 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:54:58.527030 1157416 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:54:58.545133 1157416 pod_ready.go:92] pod "etcd-no-preload-537236" in "kube-system" namespace has status "Ready":"True"
	I0318 13:54:58.545175 1157416 pod_ready.go:81] duration metric: took 18.11215ms for pod "etcd-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:54:58.545191 1157416 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:54:58.560108 1157416 pod_ready.go:92] pod "kube-apiserver-no-preload-537236" in "kube-system" namespace has status "Ready":"True"
	I0318 13:54:58.560144 1157416 pod_ready.go:81] duration metric: took 14.943161ms for pod "kube-apiserver-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:54:58.560159 1157416 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:54:58.562894 1157416 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 13:54:58.562924 1157416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 13:54:58.572477 1157416 pod_ready.go:92] pod "kube-controller-manager-no-preload-537236" in "kube-system" namespace has status "Ready":"True"
	I0318 13:54:58.572510 1157416 pod_ready.go:81] duration metric: took 12.342242ms for pod "kube-controller-manager-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:54:58.572523 1157416 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6c4c5" in "kube-system" namespace to be "Ready" ...
	I0318 13:54:58.594618 1157416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 13:54:58.597140 1157416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 13:54:58.644132 1157416 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 13:54:58.644166 1157416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 13:54:58.734467 1157416 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 13:54:58.734499 1157416 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 13:54:58.760623 1157416 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 13:54:59.005259 1157416 main.go:141] libmachine: Making call to close driver server
	I0318 13:54:59.005305 1157416 main.go:141] libmachine: (no-preload-537236) Calling .Close
	I0318 13:54:59.005668 1157416 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:54:59.005692 1157416 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:54:59.005704 1157416 main.go:141] libmachine: Making call to close driver server
	I0318 13:54:59.005713 1157416 main.go:141] libmachine: (no-preload-537236) Calling .Close
	I0318 13:54:59.005981 1157416 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:54:59.005996 1157416 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:54:59.006028 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Closing plugin on server side
	I0318 13:54:59.020654 1157416 main.go:141] libmachine: Making call to close driver server
	I0318 13:54:59.020682 1157416 main.go:141] libmachine: (no-preload-537236) Calling .Close
	I0318 13:54:59.022812 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Closing plugin on server side
	I0318 13:54:59.022814 1157416 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:54:59.022850 1157416 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:54:59.979647 1157416 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.382455448s)
	I0318 13:54:59.979723 1157416 main.go:141] libmachine: Making call to close driver server
	I0318 13:54:59.979743 1157416 main.go:141] libmachine: (no-preload-537236) Calling .Close
	I0318 13:54:59.980124 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Closing plugin on server side
	I0318 13:54:59.980223 1157416 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:54:59.980258 1157416 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:54:59.980281 1157416 main.go:141] libmachine: Making call to close driver server
	I0318 13:54:59.980354 1157416 main.go:141] libmachine: (no-preload-537236) Calling .Close
	I0318 13:54:59.980675 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Closing plugin on server side
	I0318 13:54:59.980756 1157416 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:54:59.982424 1157416 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:00.270401 1157416 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.509719085s)
	I0318 13:55:00.270464 1157416 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:00.270481 1157416 main.go:141] libmachine: (no-preload-537236) Calling .Close
	I0318 13:55:00.272779 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Closing plugin on server side
	I0318 13:55:00.272794 1157416 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:00.272817 1157416 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:00.272828 1157416 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:00.272837 1157416 main.go:141] libmachine: (no-preload-537236) Calling .Close
	I0318 13:55:00.274705 1157416 main.go:141] libmachine: (no-preload-537236) DBG | Closing plugin on server side
	I0318 13:55:00.274734 1157416 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:00.274759 1157416 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:00.274789 1157416 addons.go:470] Verifying addon metrics-server=true in "no-preload-537236"
	I0318 13:55:00.276931 1157416 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0318 13:55:00.278586 1157416 addons.go:505] duration metric: took 2.086117916s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0318 13:55:00.607578 1157416 pod_ready.go:92] pod "kube-proxy-6c4c5" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:00.607607 1157416 pod_ready.go:81] duration metric: took 2.035076209s for pod "kube-proxy-6c4c5" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:00.607620 1157416 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:00.626505 1157416 pod_ready.go:92] pod "kube-scheduler-no-preload-537236" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:00.626531 1157416 pod_ready.go:81] duration metric: took 18.904572ms for pod "kube-scheduler-no-preload-537236" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:00.626540 1157416 pod_ready.go:38] duration metric: took 2.141296876s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:55:00.626556 1157416 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:55:00.626612 1157416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:55:00.677379 1157416 api_server.go:72] duration metric: took 2.484994048s to wait for apiserver process to appear ...
	I0318 13:55:00.677406 1157416 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:55:00.677426 1157416 api_server.go:253] Checking apiserver healthz at https://192.168.39.7:8443/healthz ...
	I0318 13:55:00.694161 1157416 api_server.go:279] https://192.168.39.7:8443/healthz returned 200:
	ok
	I0318 13:55:00.696445 1157416 api_server.go:141] control plane version: v1.29.0-rc.2
	I0318 13:55:00.696479 1157416 api_server.go:131] duration metric: took 19.065082ms to wait for apiserver health ...
	I0318 13:55:00.696492 1157416 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 13:55:00.707383 1157416 system_pods.go:59] 9 kube-system pods found
	I0318 13:55:00.707417 1157416 system_pods.go:61] "coredns-76f75df574-bhh4k" [6d6f9b9a-2f7e-46bc-9224-57dc077e444d] Running
	I0318 13:55:00.707421 1157416 system_pods.go:61] "coredns-76f75df574-grqdt" [f4ce5620-c97b-4ecd-baba-c5fc840b8127] Running
	I0318 13:55:00.707425 1157416 system_pods.go:61] "etcd-no-preload-537236" [ed8a1ea0-0ec7-4604-b9c9-3738a4569e02] Running
	I0318 13:55:00.707429 1157416 system_pods.go:61] "kube-apiserver-no-preload-537236" [5718ec63-58e7-463b-812b-a806e9fbbdd8] Running
	I0318 13:55:00.707432 1157416 system_pods.go:61] "kube-controller-manager-no-preload-537236" [4ff64d2e-9e89-44d6-9e8f-fa1440fc416a] Running
	I0318 13:55:00.707435 1157416 system_pods.go:61] "kube-proxy-6c4c5" [2dd6fcfc-7510-418d-baab-a0ec364391c1] Running
	I0318 13:55:00.707438 1157416 system_pods.go:61] "kube-scheduler-no-preload-537236" [b8c3f8b7-fc27-4647-880a-f82457de3a27] Running
	I0318 13:55:00.707445 1157416 system_pods.go:61] "metrics-server-57f55c9bc5-tkq6h" [14e262de-fd94-4888-96ab-75823109c8c2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:55:00.707450 1157416 system_pods.go:61] "storage-provisioner" [f02049f6-a08f-45ac-b285-cbdbb260ab59] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 13:55:00.707459 1157416 system_pods.go:74] duration metric: took 10.96036ms to wait for pod list to return data ...
	I0318 13:55:00.707467 1157416 default_sa.go:34] waiting for default service account to be created ...
	I0318 13:55:00.870267 1157416 default_sa.go:45] found service account: "default"
	I0318 13:55:00.870299 1157416 default_sa.go:55] duration metric: took 162.825175ms for default service account to be created ...
	I0318 13:55:00.870310 1157416 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 13:55:01.073950 1157416 system_pods.go:86] 9 kube-system pods found
	I0318 13:55:01.073985 1157416 system_pods.go:89] "coredns-76f75df574-bhh4k" [6d6f9b9a-2f7e-46bc-9224-57dc077e444d] Running
	I0318 13:55:01.073992 1157416 system_pods.go:89] "coredns-76f75df574-grqdt" [f4ce5620-c97b-4ecd-baba-c5fc840b8127] Running
	I0318 13:55:01.073998 1157416 system_pods.go:89] "etcd-no-preload-537236" [ed8a1ea0-0ec7-4604-b9c9-3738a4569e02] Running
	I0318 13:55:01.074004 1157416 system_pods.go:89] "kube-apiserver-no-preload-537236" [5718ec63-58e7-463b-812b-a806e9fbbdd8] Running
	I0318 13:55:01.074010 1157416 system_pods.go:89] "kube-controller-manager-no-preload-537236" [4ff64d2e-9e89-44d6-9e8f-fa1440fc416a] Running
	I0318 13:55:01.074017 1157416 system_pods.go:89] "kube-proxy-6c4c5" [2dd6fcfc-7510-418d-baab-a0ec364391c1] Running
	I0318 13:55:01.074035 1157416 system_pods.go:89] "kube-scheduler-no-preload-537236" [b8c3f8b7-fc27-4647-880a-f82457de3a27] Running
	I0318 13:55:01.074055 1157416 system_pods.go:89] "metrics-server-57f55c9bc5-tkq6h" [14e262de-fd94-4888-96ab-75823109c8c2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:55:01.074069 1157416 system_pods.go:89] "storage-provisioner" [f02049f6-a08f-45ac-b285-cbdbb260ab59] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 13:55:01.074085 1157416 system_pods.go:126] duration metric: took 203.766894ms to wait for k8s-apps to be running ...
	I0318 13:55:01.074100 1157416 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 13:55:01.074152 1157416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:55:01.091165 1157416 system_svc.go:56] duration metric: took 17.056217ms WaitForService to wait for kubelet
	I0318 13:55:01.091195 1157416 kubeadm.go:576] duration metric: took 2.898817514s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:55:01.091224 1157416 node_conditions.go:102] verifying NodePressure condition ...
	I0318 13:55:01.270664 1157416 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:55:01.270724 1157416 node_conditions.go:123] node cpu capacity is 2
	I0318 13:55:01.270737 1157416 node_conditions.go:105] duration metric: took 179.506857ms to run NodePressure ...
	I0318 13:55:01.270750 1157416 start.go:240] waiting for startup goroutines ...
	I0318 13:55:01.270758 1157416 start.go:245] waiting for cluster config update ...
	I0318 13:55:01.270769 1157416 start.go:254] writing updated cluster config ...
	I0318 13:55:01.271069 1157416 ssh_runner.go:195] Run: rm -f paused
	I0318 13:55:01.325353 1157416 start.go:600] kubectl: 1.29.3, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0318 13:55:01.327367 1157416 out.go:177] * Done! kubectl is now configured to use "no-preload-537236" cluster and "default" namespace by default
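At this point the no-preload-537236 profile is fully up: the addons are enabled, the control-plane pods report Ready, and kubectl is pointed at the new cluster. A quick smoke test against the state the log describes might look like this (illustrative commands, not part of the test run):

    kubectl config current-context     # expect: no-preload-537236
    kubectl get nodes                  # the single node should be Ready
    kubectl -n kube-system get pods    # metrics-server / storage-provisioner may still be Pending
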
	I0318 13:55:03.715412 1157887 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.413874479s)
	I0318 13:55:03.715519 1157887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:55:03.732767 1157887 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:55:03.743375 1157887 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:55:03.753393 1157887 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:55:03.753414 1157887 kubeadm.go:156] found existing configuration files:
	
	I0318 13:55:03.753457 1157887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0318 13:55:03.763226 1157887 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:55:03.763289 1157887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:55:03.774001 1157887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0318 13:55:03.783943 1157887 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:55:03.783991 1157887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:55:03.794580 1157887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0318 13:55:03.803881 1157887 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:55:03.803921 1157887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:55:03.813709 1157887 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0318 13:55:03.823096 1157887 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:55:03.823138 1157887 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:55:03.832790 1157887 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 13:55:03.891459 1157887 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 13:55:03.891672 1157887 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 13:55:04.056923 1157887 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 13:55:04.057055 1157887 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 13:55:04.057197 1157887 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 13:55:04.312932 1157887 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 13:55:04.314955 1157887 out.go:204]   - Generating certificates and keys ...
	I0318 13:55:04.315063 1157887 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 13:55:04.315156 1157887 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 13:55:04.315286 1157887 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 13:55:04.315388 1157887 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 13:55:04.315490 1157887 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 13:55:04.315568 1157887 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 13:55:04.315668 1157887 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 13:55:04.315743 1157887 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 13:55:04.315844 1157887 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 13:55:04.315969 1157887 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 13:55:04.316034 1157887 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 13:55:04.316108 1157887 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 13:55:04.643155 1157887 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 13:55:04.927731 1157887 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 13:55:05.058875 1157887 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 13:55:05.221520 1157887 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 13:55:05.221985 1157887 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 13:55:05.224297 1157887 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 13:55:05.226200 1157887 out.go:204]   - Booting up control plane ...
	I0318 13:55:05.226326 1157887 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 13:55:05.226425 1157887 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 13:55:05.226520 1157887 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 13:55:05.244878 1157887 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 13:55:05.245461 1157887 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 13:55:05.245531 1157887 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 13:55:05.388215 1157887 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 13:55:11.393083 1157887 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.004356 seconds
	I0318 13:55:11.393511 1157887 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 13:55:11.412586 1157887 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 13:55:11.939563 1157887 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 13:55:11.939844 1157887 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-569210 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 13:55:12.457349 1157887 kubeadm.go:309] [bootstrap-token] Using token: z44dyw.tsw47dmn862zavdi
	I0318 13:55:12.458855 1157887 out.go:204]   - Configuring RBAC rules ...
	I0318 13:55:12.459037 1157887 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 13:55:12.466850 1157887 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 13:55:12.482822 1157887 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 13:55:12.488920 1157887 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 13:55:12.496947 1157887 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 13:55:12.507954 1157887 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 13:55:12.535337 1157887 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 13:55:12.763814 1157887 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 13:55:12.877248 1157887 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 13:55:12.878047 1157887 kubeadm.go:309] 
	I0318 13:55:12.878159 1157887 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 13:55:12.878183 1157887 kubeadm.go:309] 
	I0318 13:55:12.878291 1157887 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 13:55:12.878301 1157887 kubeadm.go:309] 
	I0318 13:55:12.878334 1157887 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 13:55:12.878432 1157887 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 13:55:12.878519 1157887 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 13:55:12.878531 1157887 kubeadm.go:309] 
	I0318 13:55:12.878603 1157887 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 13:55:12.878615 1157887 kubeadm.go:309] 
	I0318 13:55:12.878690 1157887 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 13:55:12.878703 1157887 kubeadm.go:309] 
	I0318 13:55:12.878762 1157887 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 13:55:12.878858 1157887 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 13:55:12.878974 1157887 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 13:55:12.878985 1157887 kubeadm.go:309] 
	I0318 13:55:12.879087 1157887 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 13:55:12.879164 1157887 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 13:55:12.879171 1157887 kubeadm.go:309] 
	I0318 13:55:12.879275 1157887 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token z44dyw.tsw47dmn862zavdi \
	I0318 13:55:12.879410 1157887 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a1344f0f3711f58fc1cd7b9626e9cee7b8515c38b8838e5f1a0d7979c7202ddf \
	I0318 13:55:12.879464 1157887 kubeadm.go:309] 	--control-plane 
	I0318 13:55:12.879484 1157887 kubeadm.go:309] 
	I0318 13:55:12.879576 1157887 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 13:55:12.879586 1157887 kubeadm.go:309] 
	I0318 13:55:12.879719 1157887 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token z44dyw.tsw47dmn862zavdi \
	I0318 13:55:12.879871 1157887 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a1344f0f3711f58fc1cd7b9626e9cee7b8515c38b8838e5f1a0d7979c7202ddf 
	I0318 13:55:12.883383 1157887 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 13:55:12.883432 1157887 cni.go:84] Creating CNI manager for ""
	I0318 13:55:12.883447 1157887 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:55:12.885248 1157887 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 13:55:12.886708 1157887 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 13:55:12.929444 1157887 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
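The bridge CNI step writes a single conflist (the 457 bytes above) to /etc/cni/net.d on the node. If it needs to be inspected later, a command along these lines (profile name taken from the log above) should print it back:

	minikube -p default-k8s-diff-port-569210 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist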
	I0318 13:55:13.043416 1157887 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 13:55:13.043541 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:13.043567 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-569210 minikube.k8s.io/updated_at=2024_03_18T13_55_13_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a minikube.k8s.io/name=default-k8s-diff-port-569210 minikube.k8s.io/primary=true
	I0318 13:55:13.064927 1157887 ops.go:34] apiserver oom_adj: -16
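The clusterrolebinding created in the batch above grants cluster-admin to the kube-system:default service account. As a sketch (context name assumed to match the profile), the binding can be read back from the host with:

	kubectl --context default-k8s-diff-port-569210 get clusterrolebinding minikube-rbac -o wide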
	I0318 13:55:13.286093 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:13.786780 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:14.286728 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:14.786442 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:15.287103 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:15.786443 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:16.287138 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:16.113672 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:55:16.113963 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:55:16.787069 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:17.286490 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:17.786317 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:18.286840 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:18.786872 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:19.286911 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:19.786554 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:20.286216 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:20.786282 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:21.286590 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:21.787103 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:22.286966 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:22.786928 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:23.286275 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:23.786464 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:24.286791 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:24.787028 1157887 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:24.938400 1157887 kubeadm.go:1107] duration metric: took 11.894943444s to wait for elevateKubeSystemPrivileges
	W0318 13:55:24.938440 1157887 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 13:55:24.938448 1157887 kubeadm.go:393] duration metric: took 5m12.933246555s to StartCluster
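The repeated `kubectl get sa default` calls above are minikube polling (here for roughly 11.9s) until the default service account exists, which is what closes out the elevateKubeSystemPrivileges step. Done by hand, the same wait is just a retry loop, e.g.:

	until kubectl --context default-k8s-diff-port-569210 get serviceaccount default >/dev/null 2>&1; do sleep 1; done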
	I0318 13:55:24.938470 1157887 settings.go:142] acquiring lock: {Name:mk2d6b94ee5fa5f1dbbb15ba1d5560c3c0f78110 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:55:24.938621 1157887 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 13:55:24.940984 1157887 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/kubeconfig: {Name:mk9c139f2702214315ee08dd7c5d02f739047458 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:55:24.941286 1157887 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.3 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 13:55:24.943151 1157887 out.go:177] * Verifying Kubernetes components...
	I0318 13:55:24.941329 1157887 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 13:55:24.941469 1157887 config.go:182] Loaded profile config "default-k8s-diff-port-569210": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:55:24.944770 1157887 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:55:24.944780 1157887 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-569210"
	I0318 13:55:24.944830 1157887 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-569210"
	W0318 13:55:24.944845 1157887 addons.go:243] addon storage-provisioner should already be in state true
	I0318 13:55:24.944846 1157887 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-569210"
	I0318 13:55:24.944851 1157887 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-569210"
	I0318 13:55:24.944880 1157887 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-569210"
	I0318 13:55:24.944888 1157887 host.go:66] Checking if "default-k8s-diff-port-569210" exists ...
	W0318 13:55:24.944897 1157887 addons.go:243] addon metrics-server should already be in state true
	I0318 13:55:24.944927 1157887 host.go:66] Checking if "default-k8s-diff-port-569210" exists ...
	I0318 13:55:24.944881 1157887 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-569210"
	I0318 13:55:24.945311 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:24.945350 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:24.945375 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:24.945400 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:24.945311 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:24.945460 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:24.963173 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42139
	I0318 13:55:24.963820 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:24.964695 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:55:24.964725 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:24.965120 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:24.965696 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:24.965735 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:24.965976 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43645
	I0318 13:55:24.966207 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43495
	I0318 13:55:24.966502 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:24.966598 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:24.967058 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:55:24.967062 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:55:24.967083 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:24.967100 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:24.967467 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:24.967603 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:24.967671 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetState
	I0318 13:55:24.968107 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:24.968146 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:24.971673 1157887 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-569210"
	W0318 13:55:24.971696 1157887 addons.go:243] addon default-storageclass should already be in state true
	I0318 13:55:24.971729 1157887 host.go:66] Checking if "default-k8s-diff-port-569210" exists ...
	I0318 13:55:24.972091 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:24.972129 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:24.986041 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42211
	I0318 13:55:24.986481 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:24.986989 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:55:24.987009 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:24.987352 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:24.987605 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44555
	I0318 13:55:24.987613 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetState
	I0318 13:55:24.988061 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:24.988481 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:55:24.988499 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:24.988904 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:24.989082 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetState
	I0318 13:55:24.989785 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:55:24.992033 1157887 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 13:55:24.990673 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:55:24.991225 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36687
	I0318 13:55:24.993532 1157887 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 13:55:24.993557 1157887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 13:55:24.993587 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:55:24.995449 1157887 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:55:24.994077 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:24.996749 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:55:24.997153 1157887 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 13:55:24.997171 1157887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 13:55:24.997191 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:55:24.997431 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:55:24.997463 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:55:24.997466 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:55:24.997665 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:55:24.997684 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:24.997746 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:55:24.998183 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:24.998273 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:55:24.998497 1157887 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa Username:docker}
	I0318 13:55:24.998701 1157887 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:24.998735 1157887 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:24.999951 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:55:25.000431 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:55:25.000454 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:55:25.000676 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:55:25.000865 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:55:25.001021 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:55:25.001160 1157887 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa Username:docker}
	I0318 13:55:25.016442 1157887 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32783
	I0318 13:55:25.016827 1157887 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:25.017300 1157887 main.go:141] libmachine: Using API Version  1
	I0318 13:55:25.017328 1157887 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:25.017686 1157887 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:25.017906 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetState
	I0318 13:55:25.019440 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .DriverName
	I0318 13:55:25.019694 1157887 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 13:55:25.019711 1157887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 13:55:25.019731 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHHostname
	I0318 13:55:25.022079 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:55:25.022370 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4d:48:26", ip: ""} in network mk-default-k8s-diff-port-569210: {Iface:virbr3 ExpiryTime:2024-03-18 14:41:20 +0000 UTC Type:0 Mac:52:54:00:4d:48:26 Iaid: IPaddr:192.168.61.3 Prefix:24 Hostname:default-k8s-diff-port-569210 Clientid:01:52:54:00:4d:48:26}
	I0318 13:55:25.022398 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | domain default-k8s-diff-port-569210 has defined IP address 192.168.61.3 and MAC address 52:54:00:4d:48:26 in network mk-default-k8s-diff-port-569210
	I0318 13:55:25.022497 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHPort
	I0318 13:55:25.022645 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHKeyPath
	I0318 13:55:25.022762 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .GetSSHUsername
	I0318 13:55:25.022937 1157887 sshutil.go:53] new ssh client: &{IP:192.168.61.3 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/default-k8s-diff-port-569210/id_rsa Username:docker}
	I0318 13:55:25.188474 1157887 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:55:25.208092 1157887 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-569210" to be "Ready" ...
	I0318 13:55:25.218757 1157887 node_ready.go:49] node "default-k8s-diff-port-569210" has status "Ready":"True"
	I0318 13:55:25.218789 1157887 node_ready.go:38] duration metric: took 10.658955ms for node "default-k8s-diff-port-569210" to be "Ready" ...
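Node readiness here is checked through the Go client; the same condition can be asserted from the shell (context and node name taken from the log):

	kubectl --context default-k8s-diff-port-569210 wait --for=condition=Ready node/default-k8s-diff-port-569210 --timeout=6m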
	I0318 13:55:25.218829 1157887 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:55:25.224381 1157887 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:25.235938 1157887 pod_ready.go:92] pod "etcd-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:25.235962 1157887 pod_ready.go:81] duration metric: took 11.550686ms for pod "etcd-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:25.235971 1157887 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:25.242985 1157887 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:25.243014 1157887 pod_ready.go:81] duration metric: took 7.034818ms for pod "kube-apiserver-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:25.243027 1157887 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:25.255777 1157887 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:25.255801 1157887 pod_ready.go:81] duration metric: took 12.766918ms for pod "kube-controller-manager-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:25.255811 1157887 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2pp8z" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:25.301824 1157887 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 13:55:25.301846 1157887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 13:55:25.330301 1157887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 13:55:25.348473 1157887 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 13:55:25.348500 1157887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 13:55:25.365746 1157887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 13:55:25.398074 1157887 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 13:55:25.398099 1157887 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 13:55:25.423951 1157887 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
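All four metrics-server manifests go in through a single kubectl apply. Once that returns, the rollout can be spot-checked with something like the following (the APIService name is the one metrics-server normally registers):

	kubectl --context default-k8s-diff-port-569210 -n kube-system get deployment metrics-server
	kubectl --context default-k8s-diff-port-569210 get apiservice v1beta1.metrics.k8s.io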
	I0318 13:55:27.292115 1157887 pod_ready.go:92] pod "kube-proxy-2pp8z" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:27.292202 1157887 pod_ready.go:81] duration metric: took 2.036383518s for pod "kube-proxy-2pp8z" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:27.292227 1157887 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:27.299705 1157887 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-569210" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:27.299732 1157887 pod_ready.go:81] duration metric: took 7.486631ms for pod "kube-scheduler-default-k8s-diff-port-569210" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:27.299743 1157887 pod_ready.go:38] duration metric: took 2.08090143s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:55:27.299762 1157887 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:55:27.299824 1157887 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:55:27.706241 1157887 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.375885124s)
	I0318 13:55:27.706314 1157887 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:27.706326 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .Close
	I0318 13:55:27.706330 1157887 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.340547601s)
	I0318 13:55:27.706377 1157887 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:27.706392 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .Close
	I0318 13:55:27.706630 1157887 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.282631636s)
	I0318 13:55:27.706900 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | Closing plugin on server side
	I0318 13:55:27.706828 1157887 api_server.go:72] duration metric: took 2.765497711s to wait for apiserver process to appear ...
	I0318 13:55:27.706940 1157887 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:55:27.706879 1157887 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:27.706979 1157887 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:27.706996 1157887 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:27.707024 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .Close
	I0318 13:55:27.706916 1157887 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:27.707088 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .Close
	I0318 13:55:27.706985 1157887 api_server.go:253] Checking apiserver healthz at https://192.168.61.3:8444/healthz ...
	I0318 13:55:27.707343 1157887 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:27.707366 1157887 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:27.707372 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | Closing plugin on server side
	I0318 13:55:27.707405 1157887 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:27.707417 1157887 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:27.707426 1157887 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:27.707455 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .Close
	I0318 13:55:27.707682 1157887 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:27.707696 1157887 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:27.707706 1157887 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-569210"
	I0318 13:55:27.708614 1157887 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:27.708664 1157887 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:27.708694 1157887 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:27.708783 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .Close
	I0318 13:55:27.709092 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | Closing plugin on server side
	I0318 13:55:27.709151 1157887 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:27.709175 1157887 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:27.718110 1157887 api_server.go:279] https://192.168.61.3:8444/healthz returned 200:
	ok
	I0318 13:55:27.719497 1157887 api_server.go:141] control plane version: v1.28.4
	I0318 13:55:27.719518 1157887 api_server.go:131] duration metric: took 12.563372ms to wait for apiserver health ...
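The healthz probe goes straight to the apiserver on the non-default port 8444. A rough hand-run equivalent needs the cluster CA or an insecure flag, since minikube's CA is not in the host trust store:

	curl -k https://192.168.61.3:8444/healthz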
	I0318 13:55:27.719526 1157887 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 13:55:27.739882 1157887 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:27.739914 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) Calling .Close
	I0318 13:55:27.740263 1157887 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:27.740296 1157887 main.go:141] libmachine: (default-k8s-diff-port-569210) DBG | Closing plugin on server side
	I0318 13:55:27.740318 1157887 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:27.742102 1157887 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
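The same addon state can be read back at any time with:

	minikube -p default-k8s-diff-port-569210 addons list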
	I0318 13:55:27.368024 1157263 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (33.223901258s)
	I0318 13:55:27.368118 1157263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:55:27.388474 1157263 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0318 13:55:27.402749 1157263 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:55:27.417121 1157263 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:55:27.417184 1157263 kubeadm.go:156] found existing configuration files:
	
	I0318 13:55:27.417235 1157263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:55:27.429920 1157263 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:55:27.429997 1157263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:55:27.442468 1157263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:55:27.454842 1157263 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:55:27.454913 1157263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:55:27.467911 1157263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:55:27.480201 1157263 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:55:27.480272 1157263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:55:27.496430 1157263 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:55:27.512020 1157263 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:55:27.512092 1157263 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
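The grep/rm pairs above implement a simple cleanup rule: any leftover kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is deleted before kubeadm init runs again. Roughly, as a shell sketch:

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' /etc/kubernetes/$f.conf || sudo rm -f /etc/kubernetes/$f.conf
	done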
	I0318 13:55:27.528102 1157263 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 13:55:27.601072 1157263 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0318 13:55:27.601235 1157263 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 13:55:27.796445 1157263 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 13:55:27.796574 1157263 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 13:55:27.796730 1157263 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 13:55:28.079026 1157263 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 13:55:27.743429 1157887 addons.go:505] duration metric: took 2.802098895s for enable addons: enabled=[metrics-server storage-provisioner default-storageclass]
	I0318 13:55:27.744694 1157887 system_pods.go:59] 9 kube-system pods found
	I0318 13:55:27.744727 1157887 system_pods.go:61] "coredns-5dd5756b68-j5qxm" [164d2cc3-0891-4fcd-81bd-34d7cf0c691c] Running
	I0318 13:55:27.744733 1157887 system_pods.go:61] "coredns-5dd5756b68-xdcht" [bf264558-6c11-44c9-82d6-ea23aea43dc9] Running
	I0318 13:55:27.744738 1157887 system_pods.go:61] "etcd-default-k8s-diff-port-569210" [8d51c0c6-6005-4f76-917c-20f07b73742f] Running
	I0318 13:55:27.744744 1157887 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-569210" [31a8160d-14db-4383-b833-a8bc3f5990ba] Running
	I0318 13:55:27.744750 1157887 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-569210" [173e4d84-8dc2-47fc-9c4d-ed613d180813] Running
	I0318 13:55:27.744756 1157887 system_pods.go:61] "kube-proxy-2pp8z" [912b3f56-3df6-485f-a01a-60801b867b86] Running
	I0318 13:55:27.744764 1157887 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-569210" [1ee4e8f8-3fad-45a8-be35-25a879aaaa7b] Running
	I0318 13:55:27.744777 1157887 system_pods.go:61] "metrics-server-57f55c9bc5-ng9ww" [4c8209dc-b6ba-427d-ba32-0da4993b0902] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:55:27.744783 1157887 system_pods.go:61] "storage-provisioner" [f0dfdeb1-f567-41df-98c3-7987f0fd7b2b] Pending
	I0318 13:55:27.744797 1157887 system_pods.go:74] duration metric: took 25.264322ms to wait for pod list to return data ...
	I0318 13:55:27.744810 1157887 default_sa.go:34] waiting for default service account to be created ...
	I0318 13:55:27.755398 1157887 default_sa.go:45] found service account: "default"
	I0318 13:55:27.755427 1157887 default_sa.go:55] duration metric: took 10.607153ms for default service account to be created ...
	I0318 13:55:27.755439 1157887 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 13:55:27.815477 1157887 system_pods.go:86] 9 kube-system pods found
	I0318 13:55:27.815507 1157887 system_pods.go:89] "coredns-5dd5756b68-j5qxm" [164d2cc3-0891-4fcd-81bd-34d7cf0c691c] Running
	I0318 13:55:27.815512 1157887 system_pods.go:89] "coredns-5dd5756b68-xdcht" [bf264558-6c11-44c9-82d6-ea23aea43dc9] Running
	I0318 13:55:27.815517 1157887 system_pods.go:89] "etcd-default-k8s-diff-port-569210" [8d51c0c6-6005-4f76-917c-20f07b73742f] Running
	I0318 13:55:27.815521 1157887 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-569210" [31a8160d-14db-4383-b833-a8bc3f5990ba] Running
	I0318 13:55:27.815526 1157887 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-569210" [173e4d84-8dc2-47fc-9c4d-ed613d180813] Running
	I0318 13:55:27.815529 1157887 system_pods.go:89] "kube-proxy-2pp8z" [912b3f56-3df6-485f-a01a-60801b867b86] Running
	I0318 13:55:27.815533 1157887 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-569210" [1ee4e8f8-3fad-45a8-be35-25a879aaaa7b] Running
	I0318 13:55:27.815540 1157887 system_pods.go:89] "metrics-server-57f55c9bc5-ng9ww" [4c8209dc-b6ba-427d-ba32-0da4993b0902] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:55:27.815546 1157887 system_pods.go:89] "storage-provisioner" [f0dfdeb1-f567-41df-98c3-7987f0fd7b2b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0318 13:55:27.815557 1157887 system_pods.go:126] duration metric: took 60.111832ms to wait for k8s-apps to be running ...
	I0318 13:55:27.815566 1157887 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 13:55:27.815610 1157887 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:55:27.834266 1157887 system_svc.go:56] duration metric: took 18.687554ms WaitForService to wait for kubelet
	I0318 13:55:27.834304 1157887 kubeadm.go:576] duration metric: took 2.892974502s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:55:27.834345 1157887 node_conditions.go:102] verifying NodePressure condition ...
	I0318 13:55:28.013031 1157887 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:55:28.013095 1157887 node_conditions.go:123] node cpu capacity is 2
	I0318 13:55:28.013148 1157887 node_conditions.go:105] duration metric: took 178.79502ms to run NodePressure ...
	I0318 13:55:28.013169 1157887 start.go:240] waiting for startup goroutines ...
	I0318 13:55:28.013181 1157887 start.go:245] waiting for cluster config update ...
	I0318 13:55:28.013199 1157887 start.go:254] writing updated cluster config ...
	I0318 13:55:28.013519 1157887 ssh_runner.go:195] Run: rm -f paused
	I0318 13:55:28.092810 1157887 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 13:55:28.095783 1157887 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-569210" cluster and "default" namespace by default
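With the kubeconfig updated and the context selected, the cluster can be exercised directly, for example:

	kubectl --context default-k8s-diff-port-569210 get pods -A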
	I0318 13:55:28.080939 1157263 out.go:204]   - Generating certificates and keys ...
	I0318 13:55:28.081056 1157263 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 13:55:28.081145 1157263 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 13:55:28.081249 1157263 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 13:55:28.082078 1157263 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 13:55:28.082860 1157263 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 13:55:28.083397 1157263 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 13:55:28.084597 1157263 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 13:55:28.084941 1157263 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 13:55:28.085603 1157263 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 13:55:28.086461 1157263 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 13:55:28.087265 1157263 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 13:55:28.087343 1157263 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 13:55:28.348996 1157263 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 13:55:28.516513 1157263 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 13:55:28.585513 1157263 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 13:55:28.817150 1157263 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 13:55:28.817900 1157263 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 13:55:28.820280 1157263 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 13:55:28.822114 1157263 out.go:204]   - Booting up control plane ...
	I0318 13:55:28.822217 1157263 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 13:55:28.822811 1157263 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 13:55:28.825310 1157263 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 13:55:28.845906 1157263 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 13:55:28.847013 1157263 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 13:55:28.847069 1157263 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 13:55:28.992421 1157263 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 13:55:35.495384 1157263 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.502688 seconds
	I0318 13:55:35.495578 1157263 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0318 13:55:35.517088 1157263 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0318 13:55:36.049915 1157263 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0318 13:55:36.050163 1157263 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-173036 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0318 13:55:36.571450 1157263 kubeadm.go:309] [bootstrap-token] Using token: a1fi6l.v36l7wrnalucsepl
	I0318 13:55:36.573263 1157263 out.go:204]   - Configuring RBAC rules ...
	I0318 13:55:36.573448 1157263 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0318 13:55:36.581322 1157263 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0318 13:55:36.594853 1157263 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0318 13:55:36.598538 1157263 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0318 13:55:36.602430 1157263 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0318 13:55:36.605534 1157263 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0318 13:55:36.621332 1157263 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0318 13:55:36.865518 1157263 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0318 13:55:36.990015 1157263 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0318 13:55:36.991079 1157263 kubeadm.go:309] 
	I0318 13:55:36.991168 1157263 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0318 13:55:36.991181 1157263 kubeadm.go:309] 
	I0318 13:55:36.991288 1157263 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0318 13:55:36.991299 1157263 kubeadm.go:309] 
	I0318 13:55:36.991320 1157263 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0318 13:55:36.991395 1157263 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0318 13:55:36.991475 1157263 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0318 13:55:36.991494 1157263 kubeadm.go:309] 
	I0318 13:55:36.991572 1157263 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0318 13:55:36.991581 1157263 kubeadm.go:309] 
	I0318 13:55:36.991646 1157263 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0318 13:55:36.991658 1157263 kubeadm.go:309] 
	I0318 13:55:36.991737 1157263 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0318 13:55:36.991839 1157263 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0318 13:55:36.991954 1157263 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0318 13:55:36.991966 1157263 kubeadm.go:309] 
	I0318 13:55:36.992073 1157263 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0318 13:55:36.992174 1157263 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0318 13:55:36.992186 1157263 kubeadm.go:309] 
	I0318 13:55:36.992304 1157263 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token a1fi6l.v36l7wrnalucsepl \
	I0318 13:55:36.992477 1157263 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a1344f0f3711f58fc1cd7b9626e9cee7b8515c38b8838e5f1a0d7979c7202ddf \
	I0318 13:55:36.992522 1157263 kubeadm.go:309] 	--control-plane 
	I0318 13:55:36.992532 1157263 kubeadm.go:309] 
	I0318 13:55:36.992642 1157263 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0318 13:55:36.992656 1157263 kubeadm.go:309] 
	I0318 13:55:36.992769 1157263 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token a1fi6l.v36l7wrnalucsepl \
	I0318 13:55:36.992922 1157263 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:a1344f0f3711f58fc1cd7b9626e9cee7b8515c38b8838e5f1a0d7979c7202ddf 
	I0318 13:55:36.994542 1157263 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 13:55:36.994648 1157263 cni.go:84] Creating CNI manager for ""
	I0318 13:55:36.994660 1157263 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 13:55:36.996526 1157263 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0318 13:55:36.997929 1157263 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0318 13:55:37.047757 1157263 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0318 13:55:37.075078 1157263 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0318 13:55:37.075167 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:37.075199 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-173036 minikube.k8s.io/updated_at=2024_03_18T13_55_37_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=16bdcbec856cf730004e5bed78d1b7625f13388a minikube.k8s.io/name=embed-certs-173036 minikube.k8s.io/primary=true
	I0318 13:55:37.236857 1157263 ops.go:34] apiserver oom_adj: -16
	I0318 13:55:37.422453 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:37.922622 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:38.423527 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:38.922743 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:39.422721 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:39.923438 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:40.422599 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:40.923170 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:41.422812 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:41.922526 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:42.422594 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:42.922835 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:43.423479 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:43.923114 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:44.422672 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:44.922883 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:45.422863 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:45.922770 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:46.423473 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:46.923125 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:47.423378 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:47.923366 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:48.422566 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:48.923231 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:49.422505 1157263 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0318 13:55:49.554542 1157263 kubeadm.go:1107] duration metric: took 12.479441091s to wait for elevateKubeSystemPrivileges
	W0318 13:55:49.554590 1157263 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0318 13:55:49.554602 1157263 kubeadm.go:393] duration metric: took 5m13.226983757s to StartCluster
	I0318 13:55:49.554626 1157263 settings.go:142] acquiring lock: {Name:mk2d6b94ee5fa5f1dbbb15ba1d5560c3c0f78110 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:55:49.554778 1157263 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 13:55:49.556962 1157263 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/kubeconfig: {Name:mk9c139f2702214315ee08dd7c5d02f739047458 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 13:55:49.557273 1157263 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.191 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0318 13:55:49.558774 1157263 out.go:177] * Verifying Kubernetes components...
	I0318 13:55:49.557321 1157263 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0318 13:55:49.557488 1157263 config.go:182] Loaded profile config "embed-certs-173036": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:55:49.560195 1157263 addons.go:69] Setting default-storageclass=true in profile "embed-certs-173036"
	I0318 13:55:49.560201 1157263 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0318 13:55:49.560211 1157263 addons.go:69] Setting metrics-server=true in profile "embed-certs-173036"
	I0318 13:55:49.560237 1157263 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-173036"
	I0318 13:55:49.560247 1157263 addons.go:234] Setting addon metrics-server=true in "embed-certs-173036"
	W0318 13:55:49.560254 1157263 addons.go:243] addon metrics-server should already be in state true
	I0318 13:55:49.560201 1157263 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-173036"
	I0318 13:55:49.560282 1157263 host.go:66] Checking if "embed-certs-173036" exists ...
	I0318 13:55:49.560302 1157263 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-173036"
	W0318 13:55:49.560317 1157263 addons.go:243] addon storage-provisioner should already be in state true
	I0318 13:55:49.560388 1157263 host.go:66] Checking if "embed-certs-173036" exists ...
	I0318 13:55:49.560644 1157263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:49.560676 1157263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:49.560678 1157263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:49.560716 1157263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:49.560777 1157263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:49.560803 1157263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:49.577682 1157263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32889
	I0318 13:55:49.577714 1157263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38841
	I0318 13:55:49.578101 1157263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46617
	I0318 13:55:49.578261 1157263 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:49.578285 1157263 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:49.578493 1157263 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:49.578880 1157263 main.go:141] libmachine: Using API Version  1
	I0318 13:55:49.578907 1157263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:49.578882 1157263 main.go:141] libmachine: Using API Version  1
	I0318 13:55:49.578923 1157263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:49.579013 1157263 main.go:141] libmachine: Using API Version  1
	I0318 13:55:49.579036 1157263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:49.579302 1157263 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:49.579333 1157263 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:49.579538 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetState
	I0318 13:55:49.579598 1157263 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:49.579914 1157263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:49.579955 1157263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:49.580203 1157263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:49.580238 1157263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:49.583587 1157263 addons.go:234] Setting addon default-storageclass=true in "embed-certs-173036"
	W0318 13:55:49.583610 1157263 addons.go:243] addon default-storageclass should already be in state true
	I0318 13:55:49.583641 1157263 host.go:66] Checking if "embed-certs-173036" exists ...
	I0318 13:55:49.584009 1157263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:49.584040 1157263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:49.596862 1157263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46015
	I0318 13:55:49.597356 1157263 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:49.597859 1157263 main.go:141] libmachine: Using API Version  1
	I0318 13:55:49.598026 1157263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:49.598110 1157263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38169
	I0318 13:55:49.598635 1157263 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:49.599310 1157263 main.go:141] libmachine: Using API Version  1
	I0318 13:55:49.599331 1157263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:49.599405 1157263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36747
	I0318 13:55:49.599732 1157263 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:49.599874 1157263 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:49.600120 1157263 main.go:141] libmachine: Using API Version  1
	I0318 13:55:49.600135 1157263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:49.600197 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetState
	I0318 13:55:49.600439 1157263 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:49.601019 1157263 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:55:49.601052 1157263 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:55:49.602172 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:55:49.604115 1157263 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0318 13:55:49.606034 1157263 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 13:55:49.606049 1157263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0318 13:55:49.606065 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:55:49.603277 1157263 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:49.606323 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetState
	I0318 13:55:49.608600 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:55:49.610213 1157263 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0318 13:55:49.611511 1157263 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0318 13:55:49.611531 1157263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0318 13:55:49.611545 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:55:49.609758 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:55:49.611598 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:55:49.611613 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:55:49.610550 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:55:49.611727 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:55:49.611868 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:55:49.611991 1157263 sshutil.go:53] new ssh client: &{IP:192.168.50.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa Username:docker}
	I0318 13:55:49.614689 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:55:49.615105 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:55:49.615322 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:55:49.615403 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:55:49.615531 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:55:49.615672 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:55:49.615773 1157263 sshutil.go:53] new ssh client: &{IP:192.168.50.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa Username:docker}
	I0318 13:55:49.620257 1157263 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41729
	I0318 13:55:49.620653 1157263 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:55:49.621225 1157263 main.go:141] libmachine: Using API Version  1
	I0318 13:55:49.621243 1157263 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:55:49.621610 1157263 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:55:49.621790 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetState
	I0318 13:55:49.623303 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .DriverName
	I0318 13:55:49.623566 1157263 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0318 13:55:49.623580 1157263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0318 13:55:49.623594 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHHostname
	I0318 13:55:49.626325 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:55:49.626733 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:4f:b1", ip: ""} in network mk-embed-certs-173036: {Iface:virbr2 ExpiryTime:2024-03-18 14:50:17 +0000 UTC Type:0 Mac:52:54:00:e1:4f:b1 Iaid: IPaddr:192.168.50.191 Prefix:24 Hostname:embed-certs-173036 Clientid:01:52:54:00:e1:4f:b1}
	I0318 13:55:49.626755 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | domain embed-certs-173036 has defined IP address 192.168.50.191 and MAC address 52:54:00:e1:4f:b1 in network mk-embed-certs-173036
	I0318 13:55:49.627028 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHPort
	I0318 13:55:49.627196 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHKeyPath
	I0318 13:55:49.627335 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .GetSSHUsername
	I0318 13:55:49.627441 1157263 sshutil.go:53] new ssh client: &{IP:192.168.50.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/embed-certs-173036/id_rsa Username:docker}
	I0318 13:55:49.791524 1157263 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0318 13:55:49.847829 1157263 node_ready.go:35] waiting up to 6m0s for node "embed-certs-173036" to be "Ready" ...
	I0318 13:55:49.860595 1157263 node_ready.go:49] node "embed-certs-173036" has status "Ready":"True"
	I0318 13:55:49.860621 1157263 node_ready.go:38] duration metric: took 12.757412ms for node "embed-certs-173036" to be "Ready" ...
	I0318 13:55:49.860631 1157263 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:55:49.870524 1157263 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ft594" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:49.917170 1157263 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0318 13:55:49.917197 1157263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0318 13:55:49.965845 1157263 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0318 13:55:49.965871 1157263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0318 13:55:49.969600 1157263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0318 13:55:49.982887 1157263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0318 13:55:50.023768 1157263 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 13:55:50.023795 1157263 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0318 13:55:50.139120 1157263 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0318 13:55:51.877589 1157263 pod_ready.go:92] pod "coredns-5dd5756b68-ft594" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:51.877618 1157263 pod_ready.go:81] duration metric: took 2.007066644s for pod "coredns-5dd5756b68-ft594" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:51.877634 1157263 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-p6dw8" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.007908 1157263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.02498147s)
	I0318 13:55:52.007966 1157263 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:52.007979 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .Close
	I0318 13:55:52.008318 1157263 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:52.008378 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | Closing plugin on server side
	I0318 13:55:52.008383 1157263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:52.008408 1157263 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:52.008427 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .Close
	I0318 13:55:52.008713 1157263 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:52.008827 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | Closing plugin on server side
	I0318 13:55:52.008853 1157263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:52.009491 1157263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.039858476s)
	I0318 13:55:52.009567 1157263 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:52.009595 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .Close
	I0318 13:55:52.010239 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | Closing plugin on server side
	I0318 13:55:52.010242 1157263 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:52.010276 1157263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:52.010289 1157263 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:52.010301 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .Close
	I0318 13:55:52.010553 1157263 main.go:141] libmachine: (embed-certs-173036) DBG | Closing plugin on server side
	I0318 13:55:52.010568 1157263 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:52.010578 1157263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:52.026035 1157263 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:52.026056 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .Close
	I0318 13:55:52.026364 1157263 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:52.026385 1157263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:52.202596 1157263 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.063427726s)
	I0318 13:55:52.202663 1157263 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:52.202686 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .Close
	I0318 13:55:52.202999 1157263 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:52.203021 1157263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:52.203032 1157263 main.go:141] libmachine: Making call to close driver server
	I0318 13:55:52.203040 1157263 main.go:141] libmachine: (embed-certs-173036) Calling .Close
	I0318 13:55:52.203321 1157263 main.go:141] libmachine: Successfully made call to close driver server
	I0318 13:55:52.203338 1157263 main.go:141] libmachine: Making call to close connection to plugin binary
	I0318 13:55:52.203352 1157263 addons.go:470] Verifying addon metrics-server=true in "embed-certs-173036"
	I0318 13:55:52.205372 1157263 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0318 13:55:52.207184 1157263 addons.go:505] duration metric: took 2.649872416s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0318 13:55:52.391839 1157263 pod_ready.go:92] pod "coredns-5dd5756b68-p6dw8" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:52.391878 1157263 pod_ready.go:81] duration metric: took 514.235543ms for pod "coredns-5dd5756b68-p6dw8" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.391891 1157263 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.398044 1157263 pod_ready.go:92] pod "etcd-embed-certs-173036" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:52.398075 1157263 pod_ready.go:81] duration metric: took 6.176672ms for pod "etcd-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.398091 1157263 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.403790 1157263 pod_ready.go:92] pod "kube-apiserver-embed-certs-173036" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:52.403809 1157263 pod_ready.go:81] duration metric: took 5.70927ms for pod "kube-apiserver-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.403817 1157263 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.414956 1157263 pod_ready.go:92] pod "kube-controller-manager-embed-certs-173036" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:52.414976 1157263 pod_ready.go:81] duration metric: took 11.153442ms for pod "kube-controller-manager-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.414986 1157263 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lp9mc" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.674125 1157263 pod_ready.go:92] pod "kube-proxy-lp9mc" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:52.674151 1157263 pod_ready.go:81] duration metric: took 259.158776ms for pod "kube-proxy-lp9mc" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:52.674160 1157263 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:53.075385 1157263 pod_ready.go:92] pod "kube-scheduler-embed-certs-173036" in "kube-system" namespace has status "Ready":"True"
	I0318 13:55:53.075420 1157263 pod_ready.go:81] duration metric: took 401.251175ms for pod "kube-scheduler-embed-certs-173036" in "kube-system" namespace to be "Ready" ...
	I0318 13:55:53.075432 1157263 pod_ready.go:38] duration metric: took 3.214790175s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0318 13:55:53.075452 1157263 api_server.go:52] waiting for apiserver process to appear ...
	I0318 13:55:53.075523 1157263 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:55:53.092916 1157263 api_server.go:72] duration metric: took 3.53560403s to wait for apiserver process to appear ...
	I0318 13:55:53.092948 1157263 api_server.go:88] waiting for apiserver healthz status ...
	I0318 13:55:53.093027 1157263 api_server.go:253] Checking apiserver healthz at https://192.168.50.191:8443/healthz ...
	I0318 13:55:53.098715 1157263 api_server.go:279] https://192.168.50.191:8443/healthz returned 200:
	ok
	I0318 13:55:53.100073 1157263 api_server.go:141] control plane version: v1.28.4
	I0318 13:55:53.100102 1157263 api_server.go:131] duration metric: took 7.134408ms to wait for apiserver health ...
	I0318 13:55:53.100113 1157263 system_pods.go:43] waiting for kube-system pods to appear ...
	I0318 13:55:53.278961 1157263 system_pods.go:59] 9 kube-system pods found
	I0318 13:55:53.278993 1157263 system_pods.go:61] "coredns-5dd5756b68-ft594" [46e6863a-0b5e-434e-b13c-d33e9ed15007] Running
	I0318 13:55:53.278998 1157263 system_pods.go:61] "coredns-5dd5756b68-p6dw8" [c03d9bbe-1493-44a4-be19-1e387ff6eaef] Running
	I0318 13:55:53.279002 1157263 system_pods.go:61] "etcd-embed-certs-173036" [0351a0a6-7bf0-49b7-b767-b1009ea8f8b3] Running
	I0318 13:55:53.279005 1157263 system_pods.go:61] "kube-apiserver-embed-certs-173036" [d045c63b-ff93-4ebc-a727-486fbad1d1b6] Running
	I0318 13:55:53.279010 1157263 system_pods.go:61] "kube-controller-manager-embed-certs-173036" [77925f6c-f839-44ce-8438-0b2ff22eb538] Running
	I0318 13:55:53.279013 1157263 system_pods.go:61] "kube-proxy-lp9mc" [4d2d1ef6-fb3b-4910-9e70-401dfa0c47e0] Running
	I0318 13:55:53.279017 1157263 system_pods.go:61] "kube-scheduler-embed-certs-173036" [a63fa49c-e09a-43ef-b0a2-f778c256c0ab] Running
	I0318 13:55:53.279023 1157263 system_pods.go:61] "metrics-server-57f55c9bc5-vzv79" [1fc71314-b3e7-4113-b254-557ec39eef43] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:55:53.279026 1157263 system_pods.go:61] "storage-provisioner" [a37883b5-9db5-467e-9b91-40f6ea69c18e] Running
	I0318 13:55:53.279037 1157263 system_pods.go:74] duration metric: took 178.915393ms to wait for pod list to return data ...
	I0318 13:55:53.279047 1157263 default_sa.go:34] waiting for default service account to be created ...
	I0318 13:55:53.475094 1157263 default_sa.go:45] found service account: "default"
	I0318 13:55:53.475123 1157263 default_sa.go:55] duration metric: took 196.069593ms for default service account to be created ...
	I0318 13:55:53.475133 1157263 system_pods.go:116] waiting for k8s-apps to be running ...
	I0318 13:55:53.678384 1157263 system_pods.go:86] 9 kube-system pods found
	I0318 13:55:53.678413 1157263 system_pods.go:89] "coredns-5dd5756b68-ft594" [46e6863a-0b5e-434e-b13c-d33e9ed15007] Running
	I0318 13:55:53.678418 1157263 system_pods.go:89] "coredns-5dd5756b68-p6dw8" [c03d9bbe-1493-44a4-be19-1e387ff6eaef] Running
	I0318 13:55:53.678422 1157263 system_pods.go:89] "etcd-embed-certs-173036" [0351a0a6-7bf0-49b7-b767-b1009ea8f8b3] Running
	I0318 13:55:53.678427 1157263 system_pods.go:89] "kube-apiserver-embed-certs-173036" [d045c63b-ff93-4ebc-a727-486fbad1d1b6] Running
	I0318 13:55:53.678431 1157263 system_pods.go:89] "kube-controller-manager-embed-certs-173036" [77925f6c-f839-44ce-8438-0b2ff22eb538] Running
	I0318 13:55:53.678436 1157263 system_pods.go:89] "kube-proxy-lp9mc" [4d2d1ef6-fb3b-4910-9e70-401dfa0c47e0] Running
	I0318 13:55:53.678439 1157263 system_pods.go:89] "kube-scheduler-embed-certs-173036" [a63fa49c-e09a-43ef-b0a2-f778c256c0ab] Running
	I0318 13:55:53.678447 1157263 system_pods.go:89] "metrics-server-57f55c9bc5-vzv79" [1fc71314-b3e7-4113-b254-557ec39eef43] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0318 13:55:53.678455 1157263 system_pods.go:89] "storage-provisioner" [a37883b5-9db5-467e-9b91-40f6ea69c18e] Running
	I0318 13:55:53.678464 1157263 system_pods.go:126] duration metric: took 203.32588ms to wait for k8s-apps to be running ...
	I0318 13:55:53.678473 1157263 system_svc.go:44] waiting for kubelet service to be running ....
	I0318 13:55:53.678531 1157263 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:55:53.698244 1157263 system_svc.go:56] duration metric: took 19.758793ms WaitForService to wait for kubelet
	I0318 13:55:53.698279 1157263 kubeadm.go:576] duration metric: took 4.140974066s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0318 13:55:53.698307 1157263 node_conditions.go:102] verifying NodePressure condition ...
	I0318 13:55:53.876137 1157263 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0318 13:55:53.876162 1157263 node_conditions.go:123] node cpu capacity is 2
	I0318 13:55:53.876173 1157263 node_conditions.go:105] duration metric: took 177.861272ms to run NodePressure ...
	I0318 13:55:53.876184 1157263 start.go:240] waiting for startup goroutines ...
	I0318 13:55:53.876191 1157263 start.go:245] waiting for cluster config update ...
	I0318 13:55:53.876202 1157263 start.go:254] writing updated cluster config ...
	I0318 13:55:53.876907 1157263 ssh_runner.go:195] Run: rm -f paused
	I0318 13:55:53.931596 1157263 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0318 13:55:53.933499 1157263 out.go:177] * Done! kubectl is now configured to use "embed-certs-173036" cluster and "default" namespace by default
	I0318 13:55:56.115397 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:55:56.115674 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:55:56.115714 1157708 kubeadm.go:309] 
	I0318 13:55:56.115782 1157708 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 13:55:56.115840 1157708 kubeadm.go:309] 		timed out waiting for the condition
	I0318 13:55:56.115849 1157708 kubeadm.go:309] 
	I0318 13:55:56.115908 1157708 kubeadm.go:309] 	This error is likely caused by:
	I0318 13:55:56.115979 1157708 kubeadm.go:309] 		- The kubelet is not running
	I0318 13:55:56.116102 1157708 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 13:55:56.116112 1157708 kubeadm.go:309] 
	I0318 13:55:56.116242 1157708 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 13:55:56.116289 1157708 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 13:55:56.116349 1157708 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 13:55:56.116370 1157708 kubeadm.go:309] 
	I0318 13:55:56.116506 1157708 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 13:55:56.116645 1157708 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 13:55:56.116665 1157708 kubeadm.go:309] 
	I0318 13:55:56.116804 1157708 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 13:55:56.116897 1157708 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 13:55:56.117005 1157708 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 13:55:56.117094 1157708 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 13:55:56.117110 1157708 kubeadm.go:309] 
	I0318 13:55:56.117680 1157708 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 13:55:56.117813 1157708 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 13:55:56.117934 1157708 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0318 13:55:56.118052 1157708 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0318 13:55:56.118124 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0318 13:55:57.920938 1157708 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.802776126s)
	I0318 13:55:57.921031 1157708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:55:57.939226 1157708 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0318 13:55:57.952304 1157708 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0318 13:55:57.952342 1157708 kubeadm.go:156] found existing configuration files:
	
	I0318 13:55:57.952404 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0318 13:55:57.964632 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0318 13:55:57.964695 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0318 13:55:57.977306 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0318 13:55:57.989728 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0318 13:55:57.989790 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0318 13:55:58.001661 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0318 13:55:58.013078 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0318 13:55:58.013160 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0318 13:55:58.024891 1157708 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0318 13:55:58.036171 1157708 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0318 13:55:58.036225 1157708 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0318 13:55:58.048156 1157708 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0318 13:55:58.128356 1157708 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0318 13:55:58.128445 1157708 kubeadm.go:309] [preflight] Running pre-flight checks
	I0318 13:55:58.297704 1157708 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0318 13:55:58.297897 1157708 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0318 13:55:58.298048 1157708 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0318 13:55:58.515521 1157708 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0318 13:55:58.517569 1157708 out.go:204]   - Generating certificates and keys ...
	I0318 13:55:58.517679 1157708 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0318 13:55:58.517760 1157708 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0318 13:55:58.517830 1157708 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0318 13:55:58.517908 1157708 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0318 13:55:58.517980 1157708 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0318 13:55:58.518047 1157708 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0318 13:55:58.518280 1157708 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0318 13:55:58.519078 1157708 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0318 13:55:58.520081 1157708 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0318 13:55:58.521268 1157708 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0318 13:55:58.521861 1157708 kubeadm.go:309] [certs] Using the existing "sa" key
	I0318 13:55:58.521936 1157708 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0318 13:55:58.762418 1157708 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0318 13:55:58.999746 1157708 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0318 13:55:59.214448 1157708 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0318 13:55:59.402662 1157708 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0318 13:55:59.421555 1157708 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0318 13:55:59.423151 1157708 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0318 13:55:59.423233 1157708 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0318 13:55:59.560412 1157708 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0318 13:55:59.563125 1157708 out.go:204]   - Booting up control plane ...
	I0318 13:55:59.563274 1157708 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0318 13:55:59.571364 1157708 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0318 13:55:59.572936 1157708 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0318 13:55:59.573987 1157708 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0318 13:55:59.586689 1157708 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0318 13:56:39.588627 1157708 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0318 13:56:39.588942 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:56:39.589128 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:56:44.589564 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:56:44.589852 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:56:54.590311 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:56:54.590619 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:57:14.591571 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:57:14.591866 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:57:54.594170 1157708 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0318 13:57:54.594433 1157708 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0318 13:57:54.594448 1157708 kubeadm.go:309] 
	I0318 13:57:54.594490 1157708 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0318 13:57:54.594540 1157708 kubeadm.go:309] 		timed out waiting for the condition
	I0318 13:57:54.594549 1157708 kubeadm.go:309] 
	I0318 13:57:54.594594 1157708 kubeadm.go:309] 	This error is likely caused by:
	I0318 13:57:54.594641 1157708 kubeadm.go:309] 		- The kubelet is not running
	I0318 13:57:54.594800 1157708 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0318 13:57:54.594811 1157708 kubeadm.go:309] 
	I0318 13:57:54.594950 1157708 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0318 13:57:54.595000 1157708 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0318 13:57:54.595046 1157708 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0318 13:57:54.595056 1157708 kubeadm.go:309] 
	I0318 13:57:54.595163 1157708 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0318 13:57:54.595297 1157708 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0318 13:57:54.595312 1157708 kubeadm.go:309] 
	I0318 13:57:54.595471 1157708 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0318 13:57:54.595605 1157708 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0318 13:57:54.595716 1157708 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0318 13:57:54.595812 1157708 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0318 13:57:54.595827 1157708 kubeadm.go:309] 
	I0318 13:57:54.596636 1157708 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0318 13:57:54.596805 1157708 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0318 13:57:54.596972 1157708 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0318 13:57:54.597014 1157708 kubeadm.go:393] duration metric: took 8m1.551231902s to StartCluster
	I0318 13:57:54.597076 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0318 13:57:54.597174 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0318 13:57:54.649451 1157708 cri.go:89] found id: ""
	I0318 13:57:54.649484 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.649496 1157708 logs.go:278] No container was found matching "kube-apiserver"
	I0318 13:57:54.649506 1157708 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0318 13:57:54.649577 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0318 13:57:54.692278 1157708 cri.go:89] found id: ""
	I0318 13:57:54.692317 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.692339 1157708 logs.go:278] No container was found matching "etcd"
	I0318 13:57:54.692349 1157708 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0318 13:57:54.692427 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0318 13:57:54.731034 1157708 cri.go:89] found id: ""
	I0318 13:57:54.731062 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.731071 1157708 logs.go:278] No container was found matching "coredns"
	I0318 13:57:54.731077 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0318 13:57:54.731135 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0318 13:57:54.769883 1157708 cri.go:89] found id: ""
	I0318 13:57:54.769913 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.769923 1157708 logs.go:278] No container was found matching "kube-scheduler"
	I0318 13:57:54.769931 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0318 13:57:54.769996 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0318 13:57:54.808620 1157708 cri.go:89] found id: ""
	I0318 13:57:54.808648 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.808656 1157708 logs.go:278] No container was found matching "kube-proxy"
	I0318 13:57:54.808661 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0318 13:57:54.808715 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0318 13:57:54.849207 1157708 cri.go:89] found id: ""
	I0318 13:57:54.849245 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.849256 1157708 logs.go:278] No container was found matching "kube-controller-manager"
	I0318 13:57:54.849264 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0318 13:57:54.849334 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0318 13:57:54.918479 1157708 cri.go:89] found id: ""
	I0318 13:57:54.918508 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.918520 1157708 logs.go:278] No container was found matching "kindnet"
	I0318 13:57:54.918528 1157708 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0318 13:57:54.918597 1157708 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0318 13:57:54.958828 1157708 cri.go:89] found id: ""
	I0318 13:57:54.958861 1157708 logs.go:276] 0 containers: []
	W0318 13:57:54.958871 1157708 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0318 13:57:54.958887 1157708 logs.go:123] Gathering logs for CRI-O ...
	I0318 13:57:54.958906 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0318 13:57:55.078045 1157708 logs.go:123] Gathering logs for container status ...
	I0318 13:57:55.078092 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0318 13:57:55.123043 1157708 logs.go:123] Gathering logs for kubelet ...
	I0318 13:57:55.123077 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0318 13:57:55.180480 1157708 logs.go:123] Gathering logs for dmesg ...
	I0318 13:57:55.180518 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0318 13:57:55.197264 1157708 logs.go:123] Gathering logs for describe nodes ...
	I0318 13:57:55.197316 1157708 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0318 13:57:55.291264 1157708 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0318 13:57:55.291325 1157708 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0318 13:57:55.291395 1157708 out.go:239] * 
	W0318 13:57:55.291477 1157708 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 13:57:55.291502 1157708 out.go:239] * 
	W0318 13:57:55.292511 1157708 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0318 13:57:55.295566 1157708 out.go:177] 
	W0318 13:57:55.296840 1157708 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0318 13:57:55.296903 1157708 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0318 13:57:55.296941 1157708 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0318 13:57:55.298417 1157708 out.go:177] 
	
	
	==> CRI-O <==
	Mar 18 14:08:45 old-k8s-version-909137 crio[647]: time="2024-03-18 14:08:45.667600942Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710770925667573565,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ae77dbf5-17e7-4ad8-99a8-e25cb3fae0f0 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:08:45 old-k8s-version-909137 crio[647]: time="2024-03-18 14:08:45.668629492Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=87cf922e-073f-4acc-b80c-9ef3eda9c506 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:08:45 old-k8s-version-909137 crio[647]: time="2024-03-18 14:08:45.668729502Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=87cf922e-073f-4acc-b80c-9ef3eda9c506 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:08:45 old-k8s-version-909137 crio[647]: time="2024-03-18 14:08:45.668772107Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=87cf922e-073f-4acc-b80c-9ef3eda9c506 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:08:45 old-k8s-version-909137 crio[647]: time="2024-03-18 14:08:45.704305174Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=da251026-6753-4a00-8160-123a509fe029 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:08:45 old-k8s-version-909137 crio[647]: time="2024-03-18 14:08:45.704452178Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=da251026-6753-4a00-8160-123a509fe029 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:08:45 old-k8s-version-909137 crio[647]: time="2024-03-18 14:08:45.705983201Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=43e868e4-46a3-4adc-9fc1-cf46bdd6901f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:08:45 old-k8s-version-909137 crio[647]: time="2024-03-18 14:08:45.706457135Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710770925706431916,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=43e868e4-46a3-4adc-9fc1-cf46bdd6901f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:08:45 old-k8s-version-909137 crio[647]: time="2024-03-18 14:08:45.707131349Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=40902638-91de-4794-bb19-c4ea07bc1990 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:08:45 old-k8s-version-909137 crio[647]: time="2024-03-18 14:08:45.707209667Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=40902638-91de-4794-bb19-c4ea07bc1990 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:08:45 old-k8s-version-909137 crio[647]: time="2024-03-18 14:08:45.707243276Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=40902638-91de-4794-bb19-c4ea07bc1990 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:08:45 old-k8s-version-909137 crio[647]: time="2024-03-18 14:08:45.744399448Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f8934bd7-102d-4967-b4ab-85c9a17f931b name=/runtime.v1.RuntimeService/Version
	Mar 18 14:08:45 old-k8s-version-909137 crio[647]: time="2024-03-18 14:08:45.744490831Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f8934bd7-102d-4967-b4ab-85c9a17f931b name=/runtime.v1.RuntimeService/Version
	Mar 18 14:08:45 old-k8s-version-909137 crio[647]: time="2024-03-18 14:08:45.746189181Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7bf6dcec-6796-4254-ac1a-d0e4ffa3bd24 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:08:45 old-k8s-version-909137 crio[647]: time="2024-03-18 14:08:45.746576913Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710770925746546729,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7bf6dcec-6796-4254-ac1a-d0e4ffa3bd24 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:08:45 old-k8s-version-909137 crio[647]: time="2024-03-18 14:08:45.747191282Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=25e4d07b-141b-4e74-9c98-63d5be0c6e86 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:08:45 old-k8s-version-909137 crio[647]: time="2024-03-18 14:08:45.747288562Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=25e4d07b-141b-4e74-9c98-63d5be0c6e86 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:08:45 old-k8s-version-909137 crio[647]: time="2024-03-18 14:08:45.747324862Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=25e4d07b-141b-4e74-9c98-63d5be0c6e86 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:08:45 old-k8s-version-909137 crio[647]: time="2024-03-18 14:08:45.787668814Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2a91a65c-afca-4137-bf8f-cb2085f30556 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:08:45 old-k8s-version-909137 crio[647]: time="2024-03-18 14:08:45.787740935Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2a91a65c-afca-4137-bf8f-cb2085f30556 name=/runtime.v1.RuntimeService/Version
	Mar 18 14:08:45 old-k8s-version-909137 crio[647]: time="2024-03-18 14:08:45.789002014Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=db886705-8e9e-4d13-b285-3c663daead6b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:08:45 old-k8s-version-909137 crio[647]: time="2024-03-18 14:08:45.789389978Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710770925789369703,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=db886705-8e9e-4d13-b285-3c663daead6b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 18 14:08:45 old-k8s-version-909137 crio[647]: time="2024-03-18 14:08:45.789809107Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=471cbfc6-9347-4fe7-a9a7-b59e1792d793 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:08:45 old-k8s-version-909137 crio[647]: time="2024-03-18 14:08:45.789969036Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=471cbfc6-9347-4fe7-a9a7-b59e1792d793 name=/runtime.v1.RuntimeService/ListContainers
	Mar 18 14:08:45 old-k8s-version-909137 crio[647]: time="2024-03-18 14:08:45.790018562Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=471cbfc6-9347-4fe7-a9a7-b59e1792d793 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Mar18 13:49] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.052261] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043383] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.666130] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.485262] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +2.465886] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.163261] systemd-fstab-generator[568]: Ignoring "noauto" option for root device
	[  +0.162544] systemd-fstab-generator[580]: Ignoring "noauto" option for root device
	[  +0.204190] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.135186] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.316905] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +7.427040] systemd-fstab-generator[835]: Ignoring "noauto" option for root device
	[  +0.071901] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.095612] systemd-fstab-generator[962]: Ignoring "noauto" option for root device
	[Mar18 13:50] kauditd_printk_skb: 46 callbacks suppressed
	[Mar18 13:54] systemd-fstab-generator[4988]: Ignoring "noauto" option for root device
	[Mar18 13:55] systemd-fstab-generator[5270]: Ignoring "noauto" option for root device
	[  +0.062731] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 14:08:45 up 19 min,  0 users,  load average: 0.00, 0.02, 0.06
	Linux old-k8s-version-909137 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Mar 18 14:08:42 old-k8s-version-909137 kubelet[6709]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Mar 18 14:08:42 old-k8s-version-909137 kubelet[6709]: goroutine 143 [runnable]:
	Mar 18 14:08:42 old-k8s-version-909137 kubelet[6709]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc0001b8c40)
	Mar 18 14:08:42 old-k8s-version-909137 kubelet[6709]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1242
	Mar 18 14:08:42 old-k8s-version-909137 kubelet[6709]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Mar 18 14:08:42 old-k8s-version-909137 kubelet[6709]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Mar 18 14:08:42 old-k8s-version-909137 kubelet[6709]: goroutine 144 [select]:
	Mar 18 14:08:42 old-k8s-version-909137 kubelet[6709]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc00057b3b0, 0xc0001c0301, 0xc000724380, 0xc000921e60, 0xc0008de080, 0xc0008de040)
	Mar 18 14:08:42 old-k8s-version-909137 kubelet[6709]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Mar 18 14:08:42 old-k8s-version-909137 kubelet[6709]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc0001c0300, 0x0, 0x0)
	Mar 18 14:08:42 old-k8s-version-909137 kubelet[6709]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Mar 18 14:08:42 old-k8s-version-909137 kubelet[6709]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0001b8c40)
	Mar 18 14:08:42 old-k8s-version-909137 kubelet[6709]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Mar 18 14:08:42 old-k8s-version-909137 kubelet[6709]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Mar 18 14:08:42 old-k8s-version-909137 kubelet[6709]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Mar 18 14:08:42 old-k8s-version-909137 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Mar 18 14:08:42 old-k8s-version-909137 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Mar 18 14:08:43 old-k8s-version-909137 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 132.
	Mar 18 14:08:43 old-k8s-version-909137 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Mar 18 14:08:43 old-k8s-version-909137 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Mar 18 14:08:43 old-k8s-version-909137 kubelet[6718]: I0318 14:08:43.355849    6718 server.go:416] Version: v1.20.0
	Mar 18 14:08:43 old-k8s-version-909137 kubelet[6718]: I0318 14:08:43.356245    6718 server.go:837] Client rotation is on, will bootstrap in background
	Mar 18 14:08:43 old-k8s-version-909137 kubelet[6718]: I0318 14:08:43.358792    6718 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Mar 18 14:08:43 old-k8s-version-909137 kubelet[6718]: I0318 14:08:43.360022    6718 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Mar 18 14:08:43 old-k8s-version-909137 kubelet[6718]: W0318 14:08:43.360042    6718 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-909137 -n old-k8s-version-909137
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-909137 -n old-k8s-version-909137: exit status 2 (248.846044ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-909137" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (105.08s)
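The failure above ends with minikube's own suggestion (check 'journalctl -xeu kubelet', try --extra-config=kubelet.cgroup-driver=systemd). A minimal troubleshooting sketch along those lines, assuming SSH access to the node via minikube and reusing the profile name old-k8s-version-909137 from the log; this is an illustration of the advice printed above, not a step the test itself performs:

  # Inspect kubelet state and recent logs on the node (commands taken from the kubeadm advice above)
  minikube ssh -p old-k8s-version-909137 "sudo systemctl status kubelet"
  minikube ssh -p old-k8s-version-909137 "sudo journalctl -xeu kubelet | tail -n 100"
  # List control-plane containers under CRI-O to see whether any crashed on start
  minikube ssh -p old-k8s-version-909137 "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a"
  # Retry the start with the kubelet cgroup driver pinned to systemd, as the log suggests
  minikube start -p old-k8s-version-909137 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd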

                                                
                                    

Test pass (203/271)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 28.82
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.28.4/json-events 15.08
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.08
18 TestDownloadOnly/v1.28.4/DeleteAll 0.15
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.29.0-rc.2/json-events 45.95
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.08
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.15
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.14
30 TestBinaryMirror 0.58
31 TestOffline 90.31
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 155.61
38 TestAddons/parallel/Registry 17.39
40 TestAddons/parallel/InspektorGadget 16.06
41 TestAddons/parallel/MetricsServer 6.86
42 TestAddons/parallel/HelmTiller 13.8
44 TestAddons/parallel/CSI 75.46
45 TestAddons/parallel/Headlamp 15.4
46 TestAddons/parallel/CloudSpanner 6.12
47 TestAddons/parallel/LocalPath 57.87
48 TestAddons/parallel/NvidiaDevicePlugin 5.65
49 TestAddons/parallel/Yakd 6.01
52 TestAddons/serial/GCPAuth/Namespaces 0.12
54 TestCertOptions 55.37
55 TestCertExpiration 308.25
57 TestForceSystemdFlag 77.73
58 TestForceSystemdEnv 60.66
60 TestKVMDriverInstallOrUpdate 4.62
64 TestErrorSpam/setup 46.37
65 TestErrorSpam/start 0.39
66 TestErrorSpam/status 0.77
67 TestErrorSpam/pause 1.69
68 TestErrorSpam/unpause 1.75
69 TestErrorSpam/stop 5.23
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 97.54
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 383
76 TestFunctional/serial/KubeContext 0.05
77 TestFunctional/serial/KubectlGetPods 0.08
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.17
81 TestFunctional/serial/CacheCmd/cache/add_local 2.7
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.7
86 TestFunctional/serial/CacheCmd/cache/delete 0.13
87 TestFunctional/serial/MinikubeKubectlCmd 0.13
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
89 TestFunctional/serial/ExtraConfig 396.36
90 TestFunctional/serial/ComponentHealth 0.07
91 TestFunctional/serial/LogsCmd 1.25
92 TestFunctional/serial/LogsFileCmd 1.29
93 TestFunctional/serial/InvalidService 4.63
95 TestFunctional/parallel/ConfigCmd 0.45
97 TestFunctional/parallel/DryRun 0.31
98 TestFunctional/parallel/InternationalLanguage 0.15
99 TestFunctional/parallel/StatusCmd 0.89
103 TestFunctional/parallel/ServiceCmdConnect 26.6
104 TestFunctional/parallel/AddonsCmd 0.19
105 TestFunctional/parallel/PersistentVolumeClaim 41.14
107 TestFunctional/parallel/SSHCmd 0.45
108 TestFunctional/parallel/CpCmd 1.48
109 TestFunctional/parallel/MySQL 25.66
110 TestFunctional/parallel/FileSync 0.22
111 TestFunctional/parallel/CertSync 1.48
115 TestFunctional/parallel/NodeLabels 0.08
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.5
119 TestFunctional/parallel/License 0.69
120 TestFunctional/parallel/Version/short 0.19
121 TestFunctional/parallel/Version/components 0.91
122 TestFunctional/parallel/ImageCommands/ImageListShort 0.35
123 TestFunctional/parallel/ImageCommands/ImageListTable 0.31
124 TestFunctional/parallel/ImageCommands/ImageListJson 0.3
125 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
126 TestFunctional/parallel/ImageCommands/ImageBuild 3.81
127 TestFunctional/parallel/ImageCommands/Setup 2.09
128 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
129 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
130 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
131 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
140 TestFunctional/parallel/ProfileCmd/profile_list 0.37
143 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
144 TestFunctional/parallel/ServiceCmd/DeployApp 25.22
145 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.73
146 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.72
147 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.56
148 TestFunctional/parallel/ImageCommands/ImageRemove 0.58
149 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.46
150 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.04
151 TestFunctional/parallel/ServiceCmd/List 0.46
152 TestFunctional/parallel/MountCmd/any-port 10.05
153 TestFunctional/parallel/ServiceCmd/JSONOutput 0.52
154 TestFunctional/parallel/ServiceCmd/HTTPS 0.34
155 TestFunctional/parallel/ServiceCmd/Format 0.33
156 TestFunctional/parallel/ServiceCmd/URL 0.44
157 TestFunctional/parallel/MountCmd/specific-port 2.15
158 TestFunctional/parallel/MountCmd/VerifyCleanup 1.54
159 TestFunctional/delete_addon-resizer_images 0.07
160 TestFunctional/delete_my-image_image 0.01
161 TestFunctional/delete_minikube_cached_images 0.01
165 TestMultiControlPlane/serial/StartCluster 241.86
166 TestMultiControlPlane/serial/DeployApp 7.43
167 TestMultiControlPlane/serial/PingHostFromPods 1.52
168 TestMultiControlPlane/serial/AddWorkerNode 49.24
169 TestMultiControlPlane/serial/NodeLabels 0.07
170 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.58
171 TestMultiControlPlane/serial/CopyFile 13.85
173 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.51
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.44
177 TestMultiControlPlane/serial/DeleteSecondaryNode 17.53
178 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.4
180 TestMultiControlPlane/serial/RestartCluster 326.08
181 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.41
182 TestMultiControlPlane/serial/AddSecondaryNode 76.16
183 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.56
187 TestJSONOutput/start/Command 58.94
188 TestJSONOutput/start/Audit 0
190 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/pause/Command 0.81
194 TestJSONOutput/pause/Audit 0
196 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/unpause/Command 0.69
200 TestJSONOutput/unpause/Audit 0
202 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/stop/Command 7.44
206 TestJSONOutput/stop/Audit 0
208 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
210 TestErrorJSONOutput 0.22
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 93.07
219 TestMountStart/serial/StartWithMountFirst 29.74
220 TestMountStart/serial/VerifyMountFirst 0.41
221 TestMountStart/serial/StartWithMountSecond 32.17
222 TestMountStart/serial/VerifyMountSecond 0.4
223 TestMountStart/serial/DeleteFirst 0.69
224 TestMountStart/serial/VerifyMountPostDelete 0.4
225 TestMountStart/serial/Stop 1.43
226 TestMountStart/serial/RestartStopped 24.26
227 TestMountStart/serial/VerifyMountPostStop 0.41
230 TestMultiNode/serial/FreshStart2Nodes 110.45
231 TestMultiNode/serial/DeployApp2Nodes 6.06
232 TestMultiNode/serial/PingHostFrom2Pods 0.97
233 TestMultiNode/serial/AddNode 46.2
234 TestMultiNode/serial/MultiNodeLabels 0.07
235 TestMultiNode/serial/ProfileList 0.24
236 TestMultiNode/serial/CopyFile 7.89
237 TestMultiNode/serial/StopNode 3.18
238 TestMultiNode/serial/StartAfterStop 34.37
240 TestMultiNode/serial/DeleteNode 2.44
242 TestMultiNode/serial/RestartMultiNode 172.11
243 TestMultiNode/serial/ValidateNameConflict 47.51
250 TestScheduledStopUnix 117.4
254 TestRunningBinaryUpgrade 200.2
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
260 TestNoKubernetes/serial/StartWithK8s 100.33
261 TestStoppedBinaryUpgrade/Setup 2.86
262 TestStoppedBinaryUpgrade/Upgrade 119.83
263 TestNoKubernetes/serial/StartWithStopK8s 39.99
264 TestNoKubernetes/serial/Start 29.4
265 TestNoKubernetes/serial/VerifyK8sNotRunning 0.23
266 TestNoKubernetes/serial/ProfileList 15.67
267 TestNoKubernetes/serial/Stop 1.53
268 TestNoKubernetes/serial/StartNoArgs 23.94
276 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
280 TestStoppedBinaryUpgrade/MinikubeLogs 1.17
290 TestPause/serial/Start 119.55
295 TestStartStop/group/no-preload/serial/FirstStart 156
297 TestStartStop/group/embed-certs/serial/FirstStart 62.34
299 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 113.65
300 TestStartStop/group/embed-certs/serial/DeployApp 9.33
301 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.32
303 TestStartStop/group/no-preload/serial/DeployApp 9.31
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.11
306 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 13.28
307 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.09
312 TestStartStop/group/embed-certs/serial/SecondStart 690.48
314 TestStartStop/group/no-preload/serial/SecondStart 621.75
315 TestStartStop/group/old-k8s-version/serial/Stop 6.31
316 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
319 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 586.8
329 TestStartStop/group/newest-cni/serial/FirstStart 58.2
331 TestStartStop/group/newest-cni/serial/DeployApp 0
332 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.24
333 TestStartStop/group/newest-cni/serial/Stop 12.4
334 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.77
335 TestStartStop/group/newest-cni/serial/SecondStart 39.53
336 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
337 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
338 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
339 TestStartStop/group/newest-cni/serial/Pause 3.06
x
+
TestDownloadOnly/v1.20.0/json-events (28.82s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-059065 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-059065 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (28.823778787s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (28.82s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-059065
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-059065: exit status 85 (78.127063ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-059065 | jenkins | v1.32.0 | 18 Mar 24 12:15 UTC |          |
	|         | -p download-only-059065        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 12:15:22
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 12:15:22.718184 1114148 out.go:291] Setting OutFile to fd 1 ...
	I0318 12:15:22.718300 1114148 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:15:22.718308 1114148 out.go:304] Setting ErrFile to fd 2...
	I0318 12:15:22.718312 1114148 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:15:22.718508 1114148 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
	W0318 12:15:22.718661 1114148 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18429-1106816/.minikube/config/config.json: open /home/jenkins/minikube-integration/18429-1106816/.minikube/config/config.json: no such file or directory
	I0318 12:15:22.719242 1114148 out.go:298] Setting JSON to true
	I0318 12:15:22.720367 1114148 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":14270,"bootTime":1710749853,"procs":356,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 12:15:22.720441 1114148 start.go:139] virtualization: kvm guest
	I0318 12:15:22.723669 1114148 out.go:97] [download-only-059065] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 12:15:22.725360 1114148 out.go:169] MINIKUBE_LOCATION=18429
	W0318 12:15:22.723809 1114148 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball: no such file or directory
	I0318 12:15:22.723905 1114148 notify.go:220] Checking for updates...
	I0318 12:15:22.728029 1114148 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 12:15:22.729522 1114148 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 12:15:22.730771 1114148 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18429-1106816/.minikube
	I0318 12:15:22.732120 1114148 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0318 12:15:22.734578 1114148 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0318 12:15:22.734831 1114148 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 12:15:22.766509 1114148 out.go:97] Using the kvm2 driver based on user configuration
	I0318 12:15:22.766535 1114148 start.go:297] selected driver: kvm2
	I0318 12:15:22.766549 1114148 start.go:901] validating driver "kvm2" against <nil>
	I0318 12:15:22.766878 1114148 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 12:15:22.766983 1114148 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18429-1106816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 12:15:22.781946 1114148 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 12:15:22.781995 1114148 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 12:15:22.782481 1114148 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0318 12:15:22.782637 1114148 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 12:15:22.782710 1114148 cni.go:84] Creating CNI manager for ""
	I0318 12:15:22.782724 1114148 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 12:15:22.782732 1114148 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 12:15:22.782779 1114148 start.go:340] cluster config:
	{Name:download-only-059065 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-059065 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 12:15:22.782931 1114148 iso.go:125] acquiring lock: {Name:mke5f9989ad60de6f54f25c411af7da9f3932a4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 12:15:22.784501 1114148 out.go:97] Downloading VM boot image ...
	I0318 12:15:22.784537 1114148 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso
	I0318 12:15:32.652280 1114148 out.go:97] Starting "download-only-059065" primary control-plane node in "download-only-059065" cluster
	I0318 12:15:32.652320 1114148 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0318 12:15:32.759420 1114148 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0318 12:15:32.759464 1114148 cache.go:56] Caching tarball of preloaded images
	I0318 12:15:32.759655 1114148 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0318 12:15:32.761525 1114148 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0318 12:15:32.761547 1114148 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0318 12:15:32.868503 1114148 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0318 12:15:46.264882 1114148 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0318 12:15:46.264977 1114148 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0318 12:15:47.174360 1114148 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0318 12:15:47.174727 1114148 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/download-only-059065/config.json ...
	I0318 12:15:47.174763 1114148 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/download-only-059065/config.json: {Name:mkf02610c5e4699f00a8408749dae78520699c02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:15:47.174929 1114148 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0318 12:15:47.175088 1114148 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-059065 host does not exist
	  To start a cluster, run: "minikube start -p download-only-059065"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)
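
Note: the download-only tests above only populate the local cache (boot ISO, preloaded image tarball, and kubectl); no VM is created, which is why the follow-up "logs" call exits with status 85 ("host does not exist"). A rough way to reproduce the run and inspect the cached artifacts, assuming the default MINIKUBE_HOME (the profile name is arbitrary):

    out/minikube-linux-amd64 start -o=json --download-only -p download-only-059065 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2
    ls ~/.minikube/cache/preloaded-tarball/    # preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
    ls ~/.minikube/cache/linux/amd64/v1.20.0/  # kubectl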

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-059065
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/json-events (15.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-222661 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-222661 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (15.081041844s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (15.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-222661
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-222661: exit status 85 (75.50818ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-059065 | jenkins | v1.32.0 | 18 Mar 24 12:15 UTC |                     |
	|         | -p download-only-059065        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 18 Mar 24 12:15 UTC | 18 Mar 24 12:15 UTC |
	| delete  | -p download-only-059065        | download-only-059065 | jenkins | v1.32.0 | 18 Mar 24 12:15 UTC | 18 Mar 24 12:15 UTC |
	| start   | -o=json --download-only        | download-only-222661 | jenkins | v1.32.0 | 18 Mar 24 12:15 UTC |                     |
	|         | -p download-only-222661        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 12:15:51
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 12:15:51.901716 1114360 out.go:291] Setting OutFile to fd 1 ...
	I0318 12:15:51.901971 1114360 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:15:51.901979 1114360 out.go:304] Setting ErrFile to fd 2...
	I0318 12:15:51.901984 1114360 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:15:51.902180 1114360 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
	I0318 12:15:51.902795 1114360 out.go:298] Setting JSON to true
	I0318 12:15:51.903827 1114360 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":14299,"bootTime":1710749853,"procs":301,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 12:15:51.903898 1114360 start.go:139] virtualization: kvm guest
	I0318 12:15:51.906199 1114360 out.go:97] [download-only-222661] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 12:15:51.907890 1114360 out.go:169] MINIKUBE_LOCATION=18429
	I0318 12:15:51.906353 1114360 notify.go:220] Checking for updates...
	I0318 12:15:51.910717 1114360 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 12:15:51.912486 1114360 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 12:15:51.913826 1114360 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18429-1106816/.minikube
	I0318 12:15:51.914993 1114360 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0318 12:15:51.917351 1114360 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0318 12:15:51.917613 1114360 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 12:15:51.949787 1114360 out.go:97] Using the kvm2 driver based on user configuration
	I0318 12:15:51.949816 1114360 start.go:297] selected driver: kvm2
	I0318 12:15:51.949830 1114360 start.go:901] validating driver "kvm2" against <nil>
	I0318 12:15:51.950161 1114360 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 12:15:51.950231 1114360 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18429-1106816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 12:15:51.965318 1114360 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 12:15:51.965373 1114360 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 12:15:51.965840 1114360 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0318 12:15:51.965978 1114360 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 12:15:51.966042 1114360 cni.go:84] Creating CNI manager for ""
	I0318 12:15:51.966055 1114360 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 12:15:51.966063 1114360 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 12:15:51.966121 1114360 start.go:340] cluster config:
	{Name:download-only-222661 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-222661 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 12:15:51.966212 1114360 iso.go:125] acquiring lock: {Name:mke5f9989ad60de6f54f25c411af7da9f3932a4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 12:15:51.967754 1114360 out.go:97] Starting "download-only-222661" primary control-plane node in "download-only-222661" cluster
	I0318 12:15:51.967771 1114360 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 12:15:52.075423 1114360 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0318 12:15:52.075458 1114360 cache.go:56] Caching tarball of preloaded images
	I0318 12:15:52.075631 1114360 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0318 12:15:52.077349 1114360 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0318 12:15:52.077364 1114360 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0318 12:15:52.185511 1114360 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b0bd7b3b222c094c365d9c9e10e48fc7 -> /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-222661 host does not exist
	  To start a cluster, run: "minikube start -p download-only-222661"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-222661
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/json-events (45.95s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-508209 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-508209 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (45.951105219s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (45.95s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-508209
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-508209: exit status 85 (77.628472ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-059065 | jenkins | v1.32.0 | 18 Mar 24 12:15 UTC |                     |
	|         | -p download-only-059065           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 18 Mar 24 12:15 UTC | 18 Mar 24 12:15 UTC |
	| delete  | -p download-only-059065           | download-only-059065 | jenkins | v1.32.0 | 18 Mar 24 12:15 UTC | 18 Mar 24 12:15 UTC |
	| start   | -o=json --download-only           | download-only-222661 | jenkins | v1.32.0 | 18 Mar 24 12:15 UTC |                     |
	|         | -p download-only-222661           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 18 Mar 24 12:16 UTC | 18 Mar 24 12:16 UTC |
	| delete  | -p download-only-222661           | download-only-222661 | jenkins | v1.32.0 | 18 Mar 24 12:16 UTC | 18 Mar 24 12:16 UTC |
	| start   | -o=json --download-only           | download-only-508209 | jenkins | v1.32.0 | 18 Mar 24 12:16 UTC |                     |
	|         | -p download-only-508209           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/18 12:16:07
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0318 12:16:07.345929 1114539 out.go:291] Setting OutFile to fd 1 ...
	I0318 12:16:07.346236 1114539 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:16:07.346247 1114539 out.go:304] Setting ErrFile to fd 2...
	I0318 12:16:07.346251 1114539 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:16:07.346432 1114539 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
	I0318 12:16:07.347020 1114539 out.go:298] Setting JSON to true
	I0318 12:16:07.348044 1114539 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":14314,"bootTime":1710749853,"procs":300,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 12:16:07.348110 1114539 start.go:139] virtualization: kvm guest
	I0318 12:16:07.350080 1114539 out.go:97] [download-only-508209] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 12:16:07.351579 1114539 out.go:169] MINIKUBE_LOCATION=18429
	I0318 12:16:07.350303 1114539 notify.go:220] Checking for updates...
	I0318 12:16:07.354303 1114539 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 12:16:07.355701 1114539 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 12:16:07.357017 1114539 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18429-1106816/.minikube
	I0318 12:16:07.358234 1114539 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0318 12:16:07.360742 1114539 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0318 12:16:07.360982 1114539 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 12:16:07.392813 1114539 out.go:97] Using the kvm2 driver based on user configuration
	I0318 12:16:07.392840 1114539 start.go:297] selected driver: kvm2
	I0318 12:16:07.392856 1114539 start.go:901] validating driver "kvm2" against <nil>
	I0318 12:16:07.393167 1114539 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 12:16:07.393247 1114539 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18429-1106816/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0318 12:16:07.408235 1114539 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0318 12:16:07.408297 1114539 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0318 12:16:07.408774 1114539 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0318 12:16:07.408953 1114539 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0318 12:16:07.409011 1114539 cni.go:84] Creating CNI manager for ""
	I0318 12:16:07.409022 1114539 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0318 12:16:07.409030 1114539 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0318 12:16:07.409081 1114539 start.go:340] cluster config:
	{Name:download-only-508209 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-508209 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 12:16:07.409176 1114539 iso.go:125] acquiring lock: {Name:mke5f9989ad60de6f54f25c411af7da9f3932a4d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0318 12:16:07.411079 1114539 out.go:97] Starting "download-only-508209" primary control-plane node in "download-only-508209" cluster
	I0318 12:16:07.411092 1114539 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0318 12:16:07.519404 1114539 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0318 12:16:07.519442 1114539 cache.go:56] Caching tarball of preloaded images
	I0318 12:16:07.519599 1114539 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0318 12:16:07.521561 1114539 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0318 12:16:07.521587 1114539 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0318 12:16:07.627864 1114539 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:9e0f57288adacc30aad3ff7e72a8dc68 -> /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0318 12:16:18.591209 1114539 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0318 12:16:18.591308 1114539 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0318 12:16:19.352613 1114539 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on crio
	I0318 12:16:19.352980 1114539 profile.go:142] Saving config to /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/download-only-508209/config.json ...
	I0318 12:16:19.353032 1114539 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/download-only-508209/config.json: {Name:mk794d367886a896c7d5486817a19baea8f9373c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0318 12:16:19.353198 1114539 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0318 12:16:19.353326 1114539 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18429-1106816/.minikube/cache/linux/amd64/v1.29.0-rc.2/kubectl
	
	
	* The control-plane node download-only-508209 host does not exist
	  To start a cluster, run: "minikube start -p download-only-508209"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-508209
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestBinaryMirror (0.58s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-012709 --alsologtostderr --binary-mirror http://127.0.0.1:35873 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-012709" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-012709
--- PASS: TestBinaryMirror (0.58s)
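
Note: --binary-mirror points the kubectl/kubelet/kubeadm downloads at the given base URL instead of the default upstream host; here the test serves its own mirror on 127.0.0.1:35873. A sketch of the same invocation against a hypothetical local mirror (presumably the mirror must expose the binaries under the same release-path layout as the default download host):

    out/minikube-linux-amd64 start --download-only -p binary-mirror-012709 --alsologtostderr --binary-mirror http://127.0.0.1:35873 --driver=kvm2 --container-runtime=crio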

                                                
                                    
x
+
TestOffline (90.31s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-542282 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-542282 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m29.213946393s)
helpers_test.go:175: Cleaning up "offline-crio-542282" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-542282
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-542282: (1.091051985s)
--- PASS: TestOffline (90.31s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-015389
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-015389: exit status 85 (67.314056ms)

                                                
                                                
-- stdout --
	* Profile "addons-015389" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-015389"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-015389
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-015389: exit status 85 (66.051588ms)

                                                
                                                
-- stdout --
	* Profile "addons-015389" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-015389"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/Setup (155.61s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-015389 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-015389 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m35.614228815s)
--- PASS: TestAddons/Setup (155.61s)

                                                
                                    
x
+
TestAddons/parallel/Registry (17.39s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 39.526188ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-84z6v" [f64607a4-6b93-41a9-847c-302045deae1e] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.006981875s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-6vjv7" [7560be1a-4c63-4d89-9c1d-4654710cb74a] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.006878682s
addons_test.go:340: (dbg) Run:  kubectl --context addons-015389 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-015389 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-015389 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.139453906s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-015389 ip
2024/03/18 12:19:46 [DEBUG] GET http://192.168.39.94:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-015389 addons disable registry --alsologtostderr -v=1
addons_test.go:388: (dbg) Done: out/minikube-linux-amd64 -p addons-015389 addons disable registry --alsologtostderr -v=1: (1.027797375s)
--- PASS: TestAddons/parallel/Registry (17.39s)
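
Note: the registry check above can be repeated by hand against a running addons-015389 profile with the registry addon enabled; the busybox probe simply asserts that the in-cluster registry service answers over HTTP:

    kubectl --context addons-015389 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    out/minikube-linux-amd64 -p addons-015389 ip    # node address the registry proxy listens on (port 5000 in this run)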

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (16.06s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-2j7vq" [9e3a5c37-ad1e-4713-b573-1ca6c641b542] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.00493883s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-015389
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-015389: (11.059088987s)
--- PASS: TestAddons/parallel/InspektorGadget (16.06s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.86s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 39.618644ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-c6t4v" [14a9220e-3a89-4062-b0df-279973f4adc4] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.005579079s
addons_test.go:415: (dbg) Run:  kubectl --context addons-015389 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-015389 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.86s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (13.8s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 4.02224ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-b7fbk" [d35fb47f-9f63-4399-a25d-e920631e2d0a] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.075437907s
addons_test.go:473: (dbg) Run:  kubectl --context addons-015389 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-015389 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (8.055793364s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-015389 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (13.80s)

                                                
                                    
x
+
TestAddons/parallel/CSI (75.46s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 48.117687ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-015389 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-015389 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-015389 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-015389 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-015389 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-015389 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-015389 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-015389 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-015389 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-015389 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-015389 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-015389 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-015389 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-015389 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-015389 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-015389 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-015389 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-015389 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-015389 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-015389 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-015389 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-015389 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-015389 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-015389 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-015389 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-015389 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-015389 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-015389 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-015389 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [354d48f1-bbed-41fe-b699-7b21de54f2c8] Pending
helpers_test.go:344: "task-pv-pod" [354d48f1-bbed-41fe-b699-7b21de54f2c8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [354d48f1-bbed-41fe-b699-7b21de54f2c8] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 18.004041548s
addons_test.go:584: (dbg) Run:  kubectl --context addons-015389 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-015389 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-015389 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-015389 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-015389 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-015389 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-015389 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-015389 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-015389 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-015389 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-015389 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-015389 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-015389 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-015389 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-015389 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-015389 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-015389 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-015389 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-015389 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-015389 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [b10a19d9-b866-4708-a6ad-a50e21bad3bf] Pending
helpers_test.go:344: "task-pv-pod-restore" [b10a19d9-b866-4708-a6ad-a50e21bad3bf] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [b10a19d9-b866-4708-a6ad-a50e21bad3bf] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004695718s
addons_test.go:626: (dbg) Run:  kubectl --context addons-015389 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-015389 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-015389 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-015389 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-015389 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.858114578s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-015389 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (75.46s)
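
Note: the CSI run above exercises the full provision -> snapshot -> restore loop with the csi-hostpath driver. The manifests it applies live under the integration test's testdata directory in the minikube source tree; reproduced by hand, the sequence is roughly:

    kubectl --context addons-015389 create -f testdata/csi-hostpath-driver/pvc.yaml          # provision hpvc
    kubectl --context addons-015389 create -f testdata/csi-hostpath-driver/pv-pod.yaml       # mount it in task-pv-pod
    kubectl --context addons-015389 create -f testdata/csi-hostpath-driver/snapshot.yaml     # take new-snapshot-demo
    kubectl --context addons-015389 delete pod task-pv-pod && kubectl --context addons-015389 delete pvc hpvc
    kubectl --context addons-015389 create -f testdata/csi-hostpath-driver/pvc-restore.yaml  # restore from the snapshot
    kubectl --context addons-015389 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml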

                                                
                                    
x
+
TestAddons/parallel/Headlamp (15.4s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-015389 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-015389 --alsologtostderr -v=1: (1.393926027s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5485c556b-dldvc" [1f47d369-0ecf-4187-90d7-d83d291ad4c3] Pending
helpers_test.go:344: "headlamp-5485c556b-dldvc" [1f47d369-0ecf-4187-90d7-d83d291ad4c3] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5485c556b-dldvc" [1f47d369-0ecf-4187-90d7-d83d291ad4c3] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.00504981s
--- PASS: TestAddons/parallel/Headlamp (15.40s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.12s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6548d5df46-wbhn7" [8a040759-6dc9-4806-9a02-63a121d418e4] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004213044s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-015389
addons_test.go:860: (dbg) Done: out/minikube-linux-amd64 addons disable cloud-spanner -p addons-015389: (1.112633636s)
--- PASS: TestAddons/parallel/CloudSpanner (6.12s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (57.87s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-015389 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-015389 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-015389 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-015389 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-015389 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-015389 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-015389 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-015389 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-015389 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [2185ffd8-f00f-416a-8f2f-eaff3d126c5a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [2185ffd8-f00f-416a-8f2f-eaff3d126c5a] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [2185ffd8-f00f-416a-8f2f-eaff3d126c5a] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 7.007014218s
addons_test.go:891: (dbg) Run:  kubectl --context addons-015389 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-015389 ssh "cat /opt/local-path-provisioner/pvc-97b8889f-377d-4dcd-aaa5-16575540db1e_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-015389 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-015389 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-015389 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-015389 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (44.02732755s)
--- PASS: TestAddons/parallel/LocalPath (57.87s)
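
Note: the local-path check has the test pod write a file into the host directory provisioned by storage-provisioner-rancher and then reads it back over SSH. The directory name embeds the PVC's volume name (pvc-<uid>), which changes every run, so it is looked up from `kubectl get pvc test-pvc -o=json` first; the <pvc-uid> below is a placeholder:

    kubectl --context addons-015389 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-015389 apply -f testdata/storage-provisioner-rancher/pod.yaml
    out/minikube-linux-amd64 -p addons-015389 ssh "cat /opt/local-path-provisioner/pvc-<pvc-uid>_default_test-pvc/file1"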

TestAddons/parallel/NvidiaDevicePlugin (5.65s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-rpkgk" [a76e38d5-f838-4358-b060-2fa48dc532cf] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005292829s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-015389
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.65s)

TestAddons/parallel/Yakd (6.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-c75dl" [5d0ea5b5-2aa5-4352-8953-b4ecce9ad581] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00365049s
--- PASS: TestAddons/parallel/Yakd (6.01s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-015389 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-015389 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestCertOptions (55.37s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-959907 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-959907 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (53.997698035s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-959907 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-959907 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-959907 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-959907" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-959907
--- PASS: TestCertOptions (55.37s)
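
A rough by-hand equivalent of the checks above, as a sketch only: the cert-options-959907 profile name and all flags are taken from this run, while the grep filters are illustrative additions rather than what the test itself executes.

    minikube start -p cert-options-959907 --memory=2048 \
      --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
      --apiserver-names=localhost --apiserver-names=www.google.com \
      --apiserver-port=8555 --driver=kvm2 --container-runtime=crio
    # the extra SANs passed above should show up in the generated API server cert
    minikube -p cert-options-959907 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -E 'www\.google\.com|192\.168\.15\.15'
    # both kubeconfig views should point at the custom API server port 8555
    kubectl --context cert-options-959907 config view | grep 8555
    minikube ssh -p cert-options-959907 -- "sudo cat /etc/kubernetes/admin.conf" | grep 8555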

TestCertExpiration (308.25s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-537883 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
E0318 13:36:07.954013 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/functional-377562/client.crt: no such file or directory
E0318 13:36:24.906882 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/functional-377562/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-537883 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m26.690337515s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-537883 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-537883 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (40.555074468s)
helpers_test.go:175: Cleaning up "cert-expiration-537883" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-537883
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-537883: (1.006772362s)
--- PASS: TestCertExpiration (308.25s)
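
The sequence above starts a cluster whose certificates expire after three minutes, then restarts it with a one-year expiration; the second start is where certificate regeneration is expected to happen. A minimal sketch, with the profile name from this run:

    minikube start -p cert-expiration-537883 --memory=2048 --cert-expiration=3m \
      --driver=kvm2 --container-runtime=crio
    # ...wait for the short-lived certificates to approach expiry...
    minikube start -p cert-expiration-537883 --memory=2048 --cert-expiration=8760h \
      --driver=kvm2 --container-runtime=crio
    minikube delete -p cert-expiration-537883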

TestForceSystemdFlag (77.73s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-042940 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-042940 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m16.423671138s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-042940 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-042940" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-042940
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-042940: (1.076597372s)
--- PASS: TestForceSystemdFlag (77.73s)

TestForceSystemdEnv (60.66s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-375732 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-375732 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (59.641430257s)
helpers_test.go:175: Cleaning up "force-systemd-env-375732" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-375732
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-375732: (1.022901295s)
--- PASS: TestForceSystemdEnv (60.66s)

TestKVMDriverInstallOrUpdate (4.62s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.62s)

TestErrorSpam/setup (46.37s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-495198 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-495198 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-495198 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-495198 --driver=kvm2  --container-runtime=crio: (46.373905768s)
--- PASS: TestErrorSpam/setup (46.37s)

TestErrorSpam/start (0.39s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-495198 --log_dir /tmp/nospam-495198 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-495198 --log_dir /tmp/nospam-495198 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-495198 --log_dir /tmp/nospam-495198 start --dry-run
--- PASS: TestErrorSpam/start (0.39s)

TestErrorSpam/status (0.77s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-495198 --log_dir /tmp/nospam-495198 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-495198 --log_dir /tmp/nospam-495198 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-495198 --log_dir /tmp/nospam-495198 status
--- PASS: TestErrorSpam/status (0.77s)

TestErrorSpam/pause (1.69s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-495198 --log_dir /tmp/nospam-495198 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-495198 --log_dir /tmp/nospam-495198 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-495198 --log_dir /tmp/nospam-495198 pause
--- PASS: TestErrorSpam/pause (1.69s)

TestErrorSpam/unpause (1.75s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-495198 --log_dir /tmp/nospam-495198 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-495198 --log_dir /tmp/nospam-495198 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-495198 --log_dir /tmp/nospam-495198 unpause
--- PASS: TestErrorSpam/unpause (1.75s)

TestErrorSpam/stop (5.23s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-495198 --log_dir /tmp/nospam-495198 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-495198 --log_dir /tmp/nospam-495198 stop: (2.312325167s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-495198 --log_dir /tmp/nospam-495198 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-495198 --log_dir /tmp/nospam-495198 stop: (1.447611758s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-495198 --log_dir /tmp/nospam-495198 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-495198 --log_dir /tmp/nospam-495198 stop: (1.466432553s)
--- PASS: TestErrorSpam/stop (5.23s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18429-1106816/.minikube/files/etc/test/nested/copy/1114136/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (97.54s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-377562 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-377562 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m37.537989576s)
--- PASS: TestFunctional/serial/StartWithProxy (97.54s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (383s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-377562 --alsologtostderr -v=8
E0318 12:29:30.300413 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/client.crt: no such file or directory
E0318 12:29:30.306279 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/client.crt: no such file or directory
E0318 12:29:30.316521 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/client.crt: no such file or directory
E0318 12:29:30.336791 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/client.crt: no such file or directory
E0318 12:29:30.377088 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/client.crt: no such file or directory
E0318 12:29:30.457407 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/client.crt: no such file or directory
E0318 12:29:30.617882 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/client.crt: no such file or directory
E0318 12:29:30.938531 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/client.crt: no such file or directory
E0318 12:29:31.579563 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/client.crt: no such file or directory
E0318 12:29:32.860218 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/client.crt: no such file or directory
E0318 12:29:35.422033 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/client.crt: no such file or directory
E0318 12:29:40.542628 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/client.crt: no such file or directory
E0318 12:29:50.783233 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/client.crt: no such file or directory
E0318 12:30:11.263690 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/client.crt: no such file or directory
E0318 12:30:52.224626 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/client.crt: no such file or directory
E0318 12:32:14.145425 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/client.crt: no such file or directory
E0318 12:34:30.297480 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-377562 --alsologtostderr -v=8: (6m23.002211998s)
functional_test.go:659: soft start took 6m23.003076616s for "functional-377562" cluster.
--- PASS: TestFunctional/serial/SoftStart (383.00s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-377562 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-377562 cache add registry.k8s.io/pause:3.1: (1.020646384s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-377562 cache add registry.k8s.io/pause:3.3: (1.085090728s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-377562 cache add registry.k8s.io/pause:latest: (1.061963648s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.17s)

TestFunctional/serial/CacheCmd/cache/add_local (2.7s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-377562 /tmp/TestFunctionalserialCacheCmdcacheadd_local2734740525/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 cache add minikube-local-cache-test:functional-377562
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-377562 cache add minikube-local-cache-test:functional-377562: (2.335463272s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 cache delete minikube-local-cache-test:functional-377562
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-377562
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.70s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.7s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-377562 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (224.041511ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.70s)
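
The reload cycle exercised here removes the image from the node's container runtime, confirms crictl no longer finds it, and then restores it from minikube's local cache. A by-hand sketch using this run's profile name:

    minikube -p functional-377562 ssh sudo crictl rmi registry.k8s.io/pause:latest
    minikube -p functional-377562 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image no longer present
    minikube -p functional-377562 cache reload                                            # push cached images back onto the node
    minikube -p functional-377562 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again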

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 kubectl -- --context functional-377562 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-377562 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (396.36s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-377562 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0318 12:34:57.987082 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/client.crt: no such file or directory
E0318 12:39:30.300632 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-377562 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (6m36.360592492s)
functional_test.go:757: restart took 6m36.360770043s for "functional-377562" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (396.36s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-377562 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.25s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-377562 logs: (1.252189491s)
--- PASS: TestFunctional/serial/LogsCmd (1.25s)

TestFunctional/serial/LogsFileCmd (1.29s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 logs --file /tmp/TestFunctionalserialLogsFileCmd3836367497/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-377562 logs --file /tmp/TestFunctionalserialLogsFileCmd3836367497/001/logs.txt: (1.288729306s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.29s)

TestFunctional/serial/InvalidService (4.63s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-377562 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-377562
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-377562: exit status 115 (298.894957ms)
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.224:31127 |
	|-----------|-------------|-------------|-----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-377562 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-377562 delete -f testdata/invalidsvc.yaml: (1.127204817s)
--- PASS: TestFunctional/serial/InvalidService (4.63s)
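
What the test asserts is that `minikube service` refuses to print a URL for a service with no running backing pod and exits with status 115 (SVC_UNREACHABLE). A sketch of the same check; invalidsvc.yaml lives in the minikube test tree:

    kubectl --context functional-377562 apply -f testdata/invalidsvc.yaml
    minikube service invalid-svc -p functional-377562
    echo $?   # 115, with the SVC_UNREACHABLE advice on stderr
    kubectl --context functional-377562 delete -f testdata/invalidsvc.yaml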

TestFunctional/parallel/ConfigCmd (0.45s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-377562 config get cpus: exit status 14 (68.426394ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-377562 config get cpus: exit status 14 (61.557015ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)
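
The two exit-status-14 results above are the expected outcome of `config get` on a key that is not set; the whole unset/set/get round trip can be replayed directly:

    minikube -p functional-377562 config unset cpus
    minikube -p functional-377562 config get cpus     # exit 14: key not found
    minikube -p functional-377562 config set cpus 2
    minikube -p functional-377562 config get cpus     # prints 2
    minikube -p functional-377562 config unset cpus
    minikube -p functional-377562 config get cpus     # exit 14 again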

TestFunctional/parallel/DryRun (0.31s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-377562 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-377562 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (156.154399ms)
-- stdout --
	* [functional-377562] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18429
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18429-1106816/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18429-1106816/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0318 12:41:53.513367 1124303 out.go:291] Setting OutFile to fd 1 ...
	I0318 12:41:53.513476 1124303 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:41:53.513488 1124303 out.go:304] Setting ErrFile to fd 2...
	I0318 12:41:53.513494 1124303 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:41:53.513725 1124303 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
	I0318 12:41:53.514317 1124303 out.go:298] Setting JSON to false
	I0318 12:41:53.515309 1124303 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":15860,"bootTime":1710749853,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 12:41:53.515385 1124303 start.go:139] virtualization: kvm guest
	I0318 12:41:53.517942 1124303 out.go:177] * [functional-377562] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0318 12:41:53.519409 1124303 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 12:41:53.519462 1124303 notify.go:220] Checking for updates...
	I0318 12:41:53.520791 1124303 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 12:41:53.522206 1124303 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 12:41:53.523537 1124303 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18429-1106816/.minikube
	I0318 12:41:53.524943 1124303 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 12:41:53.526296 1124303 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 12:41:53.528205 1124303 config.go:182] Loaded profile config "functional-377562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:41:53.528833 1124303 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:41:53.528898 1124303 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:41:53.546698 1124303 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42801
	I0318 12:41:53.547091 1124303 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:41:53.547735 1124303 main.go:141] libmachine: Using API Version  1
	I0318 12:41:53.547764 1124303 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:41:53.548144 1124303 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:41:53.548388 1124303 main.go:141] libmachine: (functional-377562) Calling .DriverName
	I0318 12:41:53.548698 1124303 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 12:41:53.549105 1124303 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:41:53.549146 1124303 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:41:53.563245 1124303 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37663
	I0318 12:41:53.563743 1124303 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:41:53.564248 1124303 main.go:141] libmachine: Using API Version  1
	I0318 12:41:53.564284 1124303 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:41:53.564649 1124303 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:41:53.564847 1124303 main.go:141] libmachine: (functional-377562) Calling .DriverName
	I0318 12:41:53.601422 1124303 out.go:177] * Using the kvm2 driver based on existing profile
	I0318 12:41:53.602673 1124303 start.go:297] selected driver: kvm2
	I0318 12:41:53.602698 1124303 start.go:901] validating driver "kvm2" against &{Name:functional-377562 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-377562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.224 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 12:41:53.602833 1124303 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 12:41:53.604906 1124303 out.go:177] 
	W0318 12:41:53.606138 1124303 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0318 12:41:53.607286 1124303 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-377562 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.31s)
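
The first invocation shows that --dry-run still runs resource validation: 250MB is below the 1800MB minimum, so minikube exits with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) without modifying the existing profile. Roughly:

    minikube start -p functional-377562 --dry-run --memory 250MB --alsologtostderr \
      --driver=kvm2 --container-runtime=crio     # exit 23, RSRC_INSUFFICIENT_REQ_MEMORY
    minikube start -p functional-377562 --dry-run --alsologtostderr -v=1 \
      --driver=kvm2 --container-runtime=crio     # passes validation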

TestFunctional/parallel/InternationalLanguage (0.15s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-377562 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-377562 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (151.167689ms)
-- stdout --
	* [functional-377562] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18429
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18429-1106816/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18429-1106816/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0318 12:41:52.468079 1124043 out.go:291] Setting OutFile to fd 1 ...
	I0318 12:41:52.468219 1124043 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:41:52.468229 1124043 out.go:304] Setting ErrFile to fd 2...
	I0318 12:41:52.468233 1124043 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 12:41:52.468570 1124043 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
	I0318 12:41:52.469118 1124043 out.go:298] Setting JSON to false
	I0318 12:41:52.470148 1124043 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":15859,"bootTime":1710749853,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0318 12:41:52.470222 1124043 start.go:139] virtualization: kvm guest
	I0318 12:41:52.472544 1124043 out.go:177] * [functional-377562] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0318 12:41:52.474139 1124043 out.go:177]   - MINIKUBE_LOCATION=18429
	I0318 12:41:52.475557 1124043 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0318 12:41:52.474148 1124043 notify.go:220] Checking for updates...
	I0318 12:41:52.478679 1124043 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18429-1106816/kubeconfig
	I0318 12:41:52.479947 1124043 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18429-1106816/.minikube
	I0318 12:41:52.481225 1124043 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0318 12:41:52.482562 1124043 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0318 12:41:52.484423 1124043 config.go:182] Loaded profile config "functional-377562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 12:41:52.485061 1124043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:41:52.485142 1124043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:41:52.499962 1124043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38791
	I0318 12:41:52.500406 1124043 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:41:52.501048 1124043 main.go:141] libmachine: Using API Version  1
	I0318 12:41:52.501074 1124043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:41:52.501426 1124043 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:41:52.501596 1124043 main.go:141] libmachine: (functional-377562) Calling .DriverName
	I0318 12:41:52.501853 1124043 driver.go:392] Setting default libvirt URI to qemu:///system
	I0318 12:41:52.502239 1124043 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 12:41:52.502300 1124043 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 12:41:52.516699 1124043 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43843
	I0318 12:41:52.517171 1124043 main.go:141] libmachine: () Calling .GetVersion
	I0318 12:41:52.517735 1124043 main.go:141] libmachine: Using API Version  1
	I0318 12:41:52.517764 1124043 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 12:41:52.518120 1124043 main.go:141] libmachine: () Calling .GetMachineName
	I0318 12:41:52.518308 1124043 main.go:141] libmachine: (functional-377562) Calling .DriverName
	I0318 12:41:52.550853 1124043 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0318 12:41:52.552029 1124043 start.go:297] selected driver: kvm2
	I0318 12:41:52.552052 1124043 start.go:901] validating driver "kvm2" against &{Name:functional-377562 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-377562 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.224 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0318 12:41:52.552175 1124043 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0318 12:41:52.554292 1124043 out.go:177] 
	W0318 12:41:52.555903 1124043 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0318 12:41:52.557214 1124043 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

TestFunctional/parallel/StatusCmd (0.89s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.89s)

TestFunctional/parallel/ServiceCmdConnect (26.6s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-377562 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-377562 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-vwqbg" [70bcdcd5-960f-4c1a-89fa-2cbebecf47a0] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-vwqbg" [70bcdcd5-960f-4c1a-89fa-2cbebecf47a0] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 26.005143293s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.224:32057
functional_test.go:1671: http://192.168.39.224:32057: success! body:

Hostname: hello-node-connect-55497b8b78-vwqbg

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.224:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.224:32057
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (26.60s)
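
The echoserver response above is fetched from the NodePort URL that `minikube service --url` reports. A sketch of the same flow; the curl step is an illustration, since the test fetches the URL from Go:

    kubectl --context functional-377562 create deployment hello-node-connect \
      --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-377562 expose deployment hello-node-connect \
      --type=NodePort --port=8080
    URL=$(minikube -p functional-377562 service hello-node-connect --url)
    curl -s "$URL"    # echoes the request details back, as captured above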

TestFunctional/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

TestFunctional/parallel/PersistentVolumeClaim (41.14s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [b1e8be6e-113b-4cba-b714-60d76790deb9] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.01170252s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-377562 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-377562 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-377562 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-377562 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-377562 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [45689916-7ca8-4bcd-9828-fd86c56f79c6] Pending
helpers_test.go:344: "sp-pod" [45689916-7ca8-4bcd-9828-fd86c56f79c6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [45689916-7ca8-4bcd-9828-fd86c56f79c6] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 26.003994633s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-377562 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-377562 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-377562 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e146994e-4d62-4c33-be2b-f5a2163c57ca] Pending
helpers_test.go:344: "sp-pod" [e146994e-4d62-4c33-be2b-f5a2163c57ca] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e146994e-4d62-4c33-be2b-f5a2163c57ca] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004885611s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-377562 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (41.14s)
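The sequence above provisions a claim, writes a file through one pod, deletes that pod, and verifies the file is still visible from a replacement pod. A self-contained sketch of the same flow is below; the claim name, mount path, container name and label come from the log, while the image and storage size are assumptions standing in for testdata/storage-provisioner/.

# Claim plus pod (illustrative manifests), then the write/delete/recreate/read cycle.
kubectl --context functional-377562 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
EOF
cat > /tmp/sp-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: sp-pod
  labels:
    test: storage-provisioner
spec:
  containers:
  - name: myfrontend
    image: docker.io/library/nginx:alpine   # assumed image
    volumeMounts:
    - name: mypd
      mountPath: /tmp/mount
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim
EOF
kubectl --context functional-377562 apply -f /tmp/sp-pod.yaml
kubectl --context functional-377562 wait --for=condition=ready pod/sp-pod --timeout=180s
kubectl --context functional-377562 exec sp-pod -- touch /tmp/mount/foo
kubectl --context functional-377562 delete -f /tmp/sp-pod.yaml
kubectl --context functional-377562 apply -f /tmp/sp-pod.yaml
kubectl --context functional-377562 wait --for=condition=ready pod/sp-pod --timeout=180s
kubectl --context functional-377562 exec sp-pod -- ls /tmp/mount   # "foo" should still be present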

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 ssh -n functional-377562 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 cp functional-377562:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2900485417/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 ssh -n functional-377562 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 ssh -n functional-377562 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.48s)
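The three cp invocations cover host-to-node, node-to-host, and host-to-a-new-node-directory copies. The same round trip can be repeated by hand and verified with diff; the diff step is illustrative and not part of the test.

# Host -> node, node -> host, then compare the round-tripped file.
out/minikube-linux-amd64 -p functional-377562 cp testdata/cp-test.txt /home/docker/cp-test.txt
out/minikube-linux-amd64 -p functional-377562 cp functional-377562:/home/docker/cp-test.txt /tmp/cp-test-roundtrip.txt
diff testdata/cp-test.txt /tmp/cp-test-roundtrip.txt && echo "contents match"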

                                                
                                    
x
+
TestFunctional/parallel/MySQL (25.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-377562 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-jqn8r" [4eed63c8-b398-4c01-b45f-38022afbc70e] Pending
helpers_test.go:344: "mysql-859648c796-jqn8r" [4eed63c8-b398-4c01-b45f-38022afbc70e] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-jqn8r" [4eed63c8-b398-4c01-b45f-38022afbc70e] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.005567248s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-377562 exec mysql-859648c796-jqn8r -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-377562 exec mysql-859648c796-jqn8r -- mysql -ppassword -e "show databases;": exit status 1 (185.188265ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-377562 exec mysql-859648c796-jqn8r -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-377562 exec mysql-859648c796-jqn8r -- mysql -ppassword -e "show databases;": exit status 1 (235.497069ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-377562 exec mysql-859648c796-jqn8r -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.66s)
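The two non-zero exits above are expected: mysqld is still initialising when the pod first reports Running, so the client gets ERROR 2002 until the server socket exists. The test simply retries until the query succeeds; an equivalent manual loop is sketched below (it assumes the Deployment in testdata/mysql.yaml is named mysql, matching the pod name in the log).

# Retry the query until mysqld accepts connections.
for i in $(seq 1 30); do
  if kubectl --context functional-377562 exec deploy/mysql -- \
       mysql -ppassword -e "show databases;" >/dev/null 2>&1; then
    echo "mysql answered after $i attempt(s)"; break
  fi
  sleep 2
done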

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1114136/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 ssh "sudo cat /etc/test/nested/copy/1114136/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)
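This check relies on minikube's file sync: files placed under the local .minikube/files/ directory are copied into the node at the same path when the cluster is (re)started. A sketch of staging the same file by hand is below; the nested path and its contents match the log, and the default ~/.minikube location is an assumption about the local environment.

# Stage a file for sync, (re)start the node, then read it back from inside the VM.
mkdir -p ~/.minikube/files/etc/test/nested/copy/1114136
echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/1114136/hosts
out/minikube-linux-amd64 start -p functional-377562
out/minikube-linux-amd64 -p functional-377562 ssh "sudo cat /etc/test/nested/copy/1114136/hosts"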

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1114136.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 ssh "sudo cat /etc/ssl/certs/1114136.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1114136.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 ssh "sudo cat /usr/share/ca-certificates/1114136.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/11141362.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 ssh "sudo cat /etc/ssl/certs/11141362.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/11141362.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 ssh "sudo cat /usr/share/ca-certificates/11141362.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.48s)
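The paired lookups check each synced certificate both under its file name (1114136.pem / 11141362.pem) and under a hashed name (51391683.0 / 3ec20f2e.0), in /etc/ssl/certs and /usr/share/ca-certificates. Assuming the .0 names follow the usual OpenSSL subject-hash convention, the expected name can be derived from a local copy of the certificate; the path below is hypothetical.

CERT=/path/to/1114136.pem   # hypothetical local copy of the synced certificate
HASH=$(openssl x509 -noout -subject_hash -in "$CERT")
out/minikube-linux-amd64 -p functional-377562 ssh "sudo cat /etc/ssl/certs/$HASH.0"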

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-377562 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)
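The go-template above iterates the first node's metadata.labels map and prints only the keys. For manual inspection the same information is easier to read with the standard kubectl flags; the node name is assumed to match the profile name, as is usual for a single-node cluster.

kubectl --context functional-377562 get nodes --show-labels
kubectl --context functional-377562 get node functional-377562 -o jsonpath='{.metadata.labels}'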

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-377562 ssh "sudo systemctl is-active docker": exit status 1 (250.912553ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-377562 ssh "sudo systemctl is-active containerd": exit status 1 (245.001713ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.50s)
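On this crio profile the non-zero exits are the success path: systemctl is-active prints "inactive" and exits with status 3 for a unit that is not running, and ssh propagates that status. A quick manual check of all three runtimes might look like the following; only crio should report "active" here.

# Query each runtime unit; || true keeps the loop going past inactive units.
for unit in docker containerd crio; do
  state=$(out/minikube-linux-amd64 -p functional-377562 ssh "sudo systemctl is-active $unit" || true)
  echo "$unit: $state"
done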

                                                
                                    
x
+
TestFunctional/parallel/License (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.69s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 version --short
--- PASS: TestFunctional/parallel/Version/short (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.91s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-377562 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
localhost/minikube-local-cache-test:functional-377562
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-377562
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-377562 image ls --format short --alsologtostderr:
I0318 12:41:56.105683 1124757 out.go:291] Setting OutFile to fd 1 ...
I0318 12:41:56.105887 1124757 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 12:41:56.105917 1124757 out.go:304] Setting ErrFile to fd 2...
I0318 12:41:56.105931 1124757 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 12:41:56.106475 1124757 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
I0318 12:41:56.107493 1124757 config.go:182] Loaded profile config "functional-377562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0318 12:41:56.107609 1124757 config.go:182] Loaded profile config "functional-377562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0318 12:41:56.107975 1124757 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0318 12:41:56.108021 1124757 main.go:141] libmachine: Launching plugin server for driver kvm2
I0318 12:41:56.123207 1124757 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42939
I0318 12:41:56.123689 1124757 main.go:141] libmachine: () Calling .GetVersion
I0318 12:41:56.124362 1124757 main.go:141] libmachine: Using API Version  1
I0318 12:41:56.124400 1124757 main.go:141] libmachine: () Calling .SetConfigRaw
I0318 12:41:56.124767 1124757 main.go:141] libmachine: () Calling .GetMachineName
I0318 12:41:56.124969 1124757 main.go:141] libmachine: (functional-377562) Calling .GetState
I0318 12:41:56.126938 1124757 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0318 12:41:56.126992 1124757 main.go:141] libmachine: Launching plugin server for driver kvm2
I0318 12:41:56.142763 1124757 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33377
I0318 12:41:56.143242 1124757 main.go:141] libmachine: () Calling .GetVersion
I0318 12:41:56.143743 1124757 main.go:141] libmachine: Using API Version  1
I0318 12:41:56.143773 1124757 main.go:141] libmachine: () Calling .SetConfigRaw
I0318 12:41:56.144112 1124757 main.go:141] libmachine: () Calling .GetMachineName
I0318 12:41:56.144303 1124757 main.go:141] libmachine: (functional-377562) Calling .DriverName
I0318 12:41:56.144508 1124757 ssh_runner.go:195] Run: systemctl --version
I0318 12:41:56.144537 1124757 main.go:141] libmachine: (functional-377562) Calling .GetSSHHostname
I0318 12:41:56.147172 1124757 main.go:141] libmachine: (functional-377562) DBG | domain functional-377562 has defined MAC address 52:54:00:22:00:d6 in network mk-functional-377562
I0318 12:41:56.147624 1124757 main.go:141] libmachine: (functional-377562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:00:d6", ip: ""} in network mk-functional-377562: {Iface:virbr1 ExpiryTime:2024-03-18 13:26:47 +0000 UTC Type:0 Mac:52:54:00:22:00:d6 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:functional-377562 Clientid:01:52:54:00:22:00:d6}
I0318 12:41:56.147656 1124757 main.go:141] libmachine: (functional-377562) DBG | domain functional-377562 has defined IP address 192.168.39.224 and MAC address 52:54:00:22:00:d6 in network mk-functional-377562
I0318 12:41:56.147863 1124757 main.go:141] libmachine: (functional-377562) Calling .GetSSHPort
I0318 12:41:56.148061 1124757 main.go:141] libmachine: (functional-377562) Calling .GetSSHKeyPath
I0318 12:41:56.148271 1124757 main.go:141] libmachine: (functional-377562) Calling .GetSSHUsername
I0318 12:41:56.148458 1124757 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/functional-377562/id_rsa Username:docker}
I0318 12:41:56.276376 1124757 ssh_runner.go:195] Run: sudo crictl images --output json
I0318 12:41:56.383613 1124757 main.go:141] libmachine: Making call to close driver server
I0318 12:41:56.383630 1124757 main.go:141] libmachine: (functional-377562) Calling .Close
I0318 12:41:56.383964 1124757 main.go:141] libmachine: (functional-377562) DBG | Closing plugin on server side
I0318 12:41:56.383967 1124757 main.go:141] libmachine: Successfully made call to close driver server
I0318 12:41:56.384004 1124757 main.go:141] libmachine: Making call to close connection to plugin binary
I0318 12:41:56.384014 1124757 main.go:141] libmachine: Making call to close driver server
I0318 12:41:56.384023 1124757 main.go:141] libmachine: (functional-377562) Calling .Close
I0318 12:41:56.384273 1124757 main.go:141] libmachine: Successfully made call to close driver server
I0318 12:41:56.384291 1124757 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.35s)
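The short listing above and the table, json and yaml listings in the following sections all come from the same command with a different --format value; for manual use the variants are:

# Same image listing in each supported output format.
out/minikube-linux-amd64 -p functional-377562 image ls --format short
out/minikube-linux-amd64 -p functional-377562 image ls --format table
out/minikube-linux-amd64 -p functional-377562 image ls --format json | jq '.[].repoTags'   # jq only for readability
out/minikube-linux-amd64 -p functional-377562 image ls --format yaml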

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-377562 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 7fe0e6f37db33 | 127MB  |
| registry.k8s.io/kube-controller-manager | v1.28.4            | d058aa5ab969c | 123MB  |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| registry.k8s.io/kube-proxy              | v1.28.4            | 83f6cc407eed8 | 74.7MB |
| registry.k8s.io/kube-scheduler          | v1.28.4            | e3db313c6dbc0 | 61.6MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| gcr.io/google-containers/addon-resizer  | functional-377562  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/minikube-local-cache-test     | functional-377562  | d4595e4b6b06f | 3.35kB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-377562 image ls --format table --alsologtostderr:
I0318 12:41:59.017902 1125021 out.go:291] Setting OutFile to fd 1 ...
I0318 12:41:59.018518 1125021 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 12:41:59.018545 1125021 out.go:304] Setting ErrFile to fd 2...
I0318 12:41:59.018555 1125021 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 12:41:59.018794 1125021 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
I0318 12:41:59.019415 1125021 config.go:182] Loaded profile config "functional-377562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0318 12:41:59.019520 1125021 config.go:182] Loaded profile config "functional-377562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0318 12:41:59.019903 1125021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0318 12:41:59.019943 1125021 main.go:141] libmachine: Launching plugin server for driver kvm2
I0318 12:41:59.035219 1125021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43683
I0318 12:41:59.035721 1125021 main.go:141] libmachine: () Calling .GetVersion
I0318 12:41:59.036354 1125021 main.go:141] libmachine: Using API Version  1
I0318 12:41:59.036383 1125021 main.go:141] libmachine: () Calling .SetConfigRaw
I0318 12:41:59.036815 1125021 main.go:141] libmachine: () Calling .GetMachineName
I0318 12:41:59.037012 1125021 main.go:141] libmachine: (functional-377562) Calling .GetState
I0318 12:41:59.039045 1125021 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0318 12:41:59.039089 1125021 main.go:141] libmachine: Launching plugin server for driver kvm2
I0318 12:41:59.053800 1125021 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39025
I0318 12:41:59.054308 1125021 main.go:141] libmachine: () Calling .GetVersion
I0318 12:41:59.054868 1125021 main.go:141] libmachine: Using API Version  1
I0318 12:41:59.054893 1125021 main.go:141] libmachine: () Calling .SetConfigRaw
I0318 12:41:59.055170 1125021 main.go:141] libmachine: () Calling .GetMachineName
I0318 12:41:59.055411 1125021 main.go:141] libmachine: (functional-377562) Calling .DriverName
I0318 12:41:59.055634 1125021 ssh_runner.go:195] Run: systemctl --version
I0318 12:41:59.055666 1125021 main.go:141] libmachine: (functional-377562) Calling .GetSSHHostname
I0318 12:41:59.058408 1125021 main.go:141] libmachine: (functional-377562) DBG | domain functional-377562 has defined MAC address 52:54:00:22:00:d6 in network mk-functional-377562
I0318 12:41:59.058815 1125021 main.go:141] libmachine: (functional-377562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:00:d6", ip: ""} in network mk-functional-377562: {Iface:virbr1 ExpiryTime:2024-03-18 13:26:47 +0000 UTC Type:0 Mac:52:54:00:22:00:d6 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:functional-377562 Clientid:01:52:54:00:22:00:d6}
I0318 12:41:59.058850 1125021 main.go:141] libmachine: (functional-377562) DBG | domain functional-377562 has defined IP address 192.168.39.224 and MAC address 52:54:00:22:00:d6 in network mk-functional-377562
I0318 12:41:59.058980 1125021 main.go:141] libmachine: (functional-377562) Calling .GetSSHPort
I0318 12:41:59.059170 1125021 main.go:141] libmachine: (functional-377562) Calling .GetSSHKeyPath
I0318 12:41:59.059323 1125021 main.go:141] libmachine: (functional-377562) Calling .GetSSHUsername
I0318 12:41:59.059469 1125021 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/functional-377562/id_rsa Username:docker}
I0318 12:41:59.194743 1125021 ssh_runner.go:195] Run: sudo crictl images --output json
I0318 12:41:59.258416 1125021 main.go:141] libmachine: Making call to close driver server
I0318 12:41:59.258439 1125021 main.go:141] libmachine: (functional-377562) Calling .Close
I0318 12:41:59.258780 1125021 main.go:141] libmachine: (functional-377562) DBG | Closing plugin on server side
I0318 12:41:59.258831 1125021 main.go:141] libmachine: Successfully made call to close driver server
I0318 12:41:59.258840 1125021 main.go:141] libmachine: Making call to close connection to plugin binary
I0318 12:41:59.258849 1125021 main.go:141] libmachine: Making call to close driver server
I0318 12:41:59.258857 1125021 main.go:141] libmachine: (functional-377562) Calling .Close
I0318 12:41:59.259148 1125021 main.go:141] libmachine: Successfully made call to close driver server
I0318 12:41:59.259194 1125021 main.go:141] libmachine: (functional-377562) DBG | Closing plugin on server side
I0318 12:41:59.259200 1125021 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-377562 image ls --format json --alsologtostderr:
[{"id":"d4595e4b6b06f641a7234402755502c723fdb2c5ef88a3dfc13c2b3ac7063814","repoDigests":["localhost/minikube-local-cache-test@sha256:e1c901fb2a54ff86cf20f528de6df008a686e9ad98e272360b7f80ee949dcbb9"],"repoTags":["localhost/minikube-local-cache-test:functional-377562"],"size":"3345"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"74749335"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"6e38f40d628db3002f5617342c
8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32
298ffe32"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"61551410"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/e
choserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"127226832"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"123261750"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.
io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-377562"],"size":"34114467"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busyb
ox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-377562 image ls --format json --alsologtostderr:
I0318 12:41:58.717412 1124986 out.go:291] Setting OutFile to fd 1 ...
I0318 12:41:58.717544 1124986 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 12:41:58.717552 1124986 out.go:304] Setting ErrFile to fd 2...
I0318 12:41:58.717559 1124986 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 12:41:58.717763 1124986 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
I0318 12:41:58.718392 1124986 config.go:182] Loaded profile config "functional-377562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0318 12:41:58.718545 1124986 config.go:182] Loaded profile config "functional-377562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0318 12:41:58.718996 1124986 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0318 12:41:58.719058 1124986 main.go:141] libmachine: Launching plugin server for driver kvm2
I0318 12:41:58.735012 1124986 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34759
I0318 12:41:58.735577 1124986 main.go:141] libmachine: () Calling .GetVersion
I0318 12:41:58.736199 1124986 main.go:141] libmachine: Using API Version  1
I0318 12:41:58.736226 1124986 main.go:141] libmachine: () Calling .SetConfigRaw
I0318 12:41:58.736659 1124986 main.go:141] libmachine: () Calling .GetMachineName
I0318 12:41:58.736889 1124986 main.go:141] libmachine: (functional-377562) Calling .GetState
I0318 12:41:58.738784 1124986 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0318 12:41:58.738837 1124986 main.go:141] libmachine: Launching plugin server for driver kvm2
I0318 12:41:58.754301 1124986 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46687
I0318 12:41:58.754728 1124986 main.go:141] libmachine: () Calling .GetVersion
I0318 12:41:58.755277 1124986 main.go:141] libmachine: Using API Version  1
I0318 12:41:58.755325 1124986 main.go:141] libmachine: () Calling .SetConfigRaw
I0318 12:41:58.755669 1124986 main.go:141] libmachine: () Calling .GetMachineName
I0318 12:41:58.755939 1124986 main.go:141] libmachine: (functional-377562) Calling .DriverName
I0318 12:41:58.756159 1124986 ssh_runner.go:195] Run: systemctl --version
I0318 12:41:58.756196 1124986 main.go:141] libmachine: (functional-377562) Calling .GetSSHHostname
I0318 12:41:58.758981 1124986 main.go:141] libmachine: (functional-377562) DBG | domain functional-377562 has defined MAC address 52:54:00:22:00:d6 in network mk-functional-377562
I0318 12:41:58.759462 1124986 main.go:141] libmachine: (functional-377562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:00:d6", ip: ""} in network mk-functional-377562: {Iface:virbr1 ExpiryTime:2024-03-18 13:26:47 +0000 UTC Type:0 Mac:52:54:00:22:00:d6 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:functional-377562 Clientid:01:52:54:00:22:00:d6}
I0318 12:41:58.759497 1124986 main.go:141] libmachine: (functional-377562) DBG | domain functional-377562 has defined IP address 192.168.39.224 and MAC address 52:54:00:22:00:d6 in network mk-functional-377562
I0318 12:41:58.759624 1124986 main.go:141] libmachine: (functional-377562) Calling .GetSSHPort
I0318 12:41:58.759765 1124986 main.go:141] libmachine: (functional-377562) Calling .GetSSHKeyPath
I0318 12:41:58.759895 1124986 main.go:141] libmachine: (functional-377562) Calling .GetSSHUsername
I0318 12:41:58.760072 1124986 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/functional-377562/id_rsa Username:docker}
I0318 12:41:58.858111 1124986 ssh_runner.go:195] Run: sudo crictl images --output json
I0318 12:41:58.947825 1124986 main.go:141] libmachine: Making call to close driver server
I0318 12:41:58.947850 1124986 main.go:141] libmachine: (functional-377562) Calling .Close
I0318 12:41:58.948163 1124986 main.go:141] libmachine: Successfully made call to close driver server
I0318 12:41:58.948190 1124986 main.go:141] libmachine: Making call to close connection to plugin binary
I0318 12:41:58.948199 1124986 main.go:141] libmachine: Making call to close driver server
I0318 12:41:58.948208 1124986 main.go:141] libmachine: (functional-377562) Calling .Close
I0318 12:41:58.948458 1124986 main.go:141] libmachine: Successfully made call to close driver server
I0318 12:41:58.948477 1124986 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-377562 image ls --format yaml --alsologtostderr:
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "127226832"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "123261750"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "61551410"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-377562
size: "34114467"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: d4595e4b6b06f641a7234402755502c723fdb2c5ef88a3dfc13c2b3ac7063814
repoDigests:
- localhost/minikube-local-cache-test@sha256:e1c901fb2a54ff86cf20f528de6df008a686e9ad98e272360b7f80ee949dcbb9
repoTags:
- localhost/minikube-local-cache-test:functional-377562
size: "3345"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "74749335"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-377562 image ls --format yaml --alsologtostderr:
I0318 12:41:56.451403 1124811 out.go:291] Setting OutFile to fd 1 ...
I0318 12:41:56.451537 1124811 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 12:41:56.451548 1124811 out.go:304] Setting ErrFile to fd 2...
I0318 12:41:56.451555 1124811 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 12:41:56.451768 1124811 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
I0318 12:41:56.452363 1124811 config.go:182] Loaded profile config "functional-377562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0318 12:41:56.452493 1124811 config.go:182] Loaded profile config "functional-377562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0318 12:41:56.452922 1124811 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0318 12:41:56.452973 1124811 main.go:141] libmachine: Launching plugin server for driver kvm2
I0318 12:41:56.470193 1124811 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35291
I0318 12:41:56.470662 1124811 main.go:141] libmachine: () Calling .GetVersion
I0318 12:41:56.471281 1124811 main.go:141] libmachine: Using API Version  1
I0318 12:41:56.471311 1124811 main.go:141] libmachine: () Calling .SetConfigRaw
I0318 12:41:56.471711 1124811 main.go:141] libmachine: () Calling .GetMachineName
I0318 12:41:56.471966 1124811 main.go:141] libmachine: (functional-377562) Calling .GetState
I0318 12:41:56.473862 1124811 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0318 12:41:56.473914 1124811 main.go:141] libmachine: Launching plugin server for driver kvm2
I0318 12:41:56.491552 1124811 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33969
I0318 12:41:56.492004 1124811 main.go:141] libmachine: () Calling .GetVersion
I0318 12:41:56.492626 1124811 main.go:141] libmachine: Using API Version  1
I0318 12:41:56.492684 1124811 main.go:141] libmachine: () Calling .SetConfigRaw
I0318 12:41:56.493100 1124811 main.go:141] libmachine: () Calling .GetMachineName
I0318 12:41:56.493349 1124811 main.go:141] libmachine: (functional-377562) Calling .DriverName
I0318 12:41:56.493587 1124811 ssh_runner.go:195] Run: systemctl --version
I0318 12:41:56.493620 1124811 main.go:141] libmachine: (functional-377562) Calling .GetSSHHostname
I0318 12:41:56.496601 1124811 main.go:141] libmachine: (functional-377562) DBG | domain functional-377562 has defined MAC address 52:54:00:22:00:d6 in network mk-functional-377562
I0318 12:41:56.497080 1124811 main.go:141] libmachine: (functional-377562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:00:d6", ip: ""} in network mk-functional-377562: {Iface:virbr1 ExpiryTime:2024-03-18 13:26:47 +0000 UTC Type:0 Mac:52:54:00:22:00:d6 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:functional-377562 Clientid:01:52:54:00:22:00:d6}
I0318 12:41:56.497108 1124811 main.go:141] libmachine: (functional-377562) DBG | domain functional-377562 has defined IP address 192.168.39.224 and MAC address 52:54:00:22:00:d6 in network mk-functional-377562
I0318 12:41:56.497278 1124811 main.go:141] libmachine: (functional-377562) Calling .GetSSHPort
I0318 12:41:56.497468 1124811 main.go:141] libmachine: (functional-377562) Calling .GetSSHKeyPath
I0318 12:41:56.497641 1124811 main.go:141] libmachine: (functional-377562) Calling .GetSSHUsername
I0318 12:41:56.497832 1124811 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/functional-377562/id_rsa Username:docker}
I0318 12:41:56.587643 1124811 ssh_runner.go:195] Run: sudo crictl images --output json
I0318 12:41:56.645780 1124811 main.go:141] libmachine: Making call to close driver server
I0318 12:41:56.645793 1124811 main.go:141] libmachine: (functional-377562) Calling .Close
I0318 12:41:56.646039 1124811 main.go:141] libmachine: Successfully made call to close driver server
I0318 12:41:56.646068 1124811 main.go:141] libmachine: Making call to close connection to plugin binary
I0318 12:41:56.646078 1124811 main.go:141] libmachine: Making call to close driver server
I0318 12:41:56.646086 1124811 main.go:141] libmachine: (functional-377562) Calling .Close
I0318 12:41:56.646320 1124811 main.go:141] libmachine: Successfully made call to close driver server
I0318 12:41:56.646336 1124811 main.go:141] libmachine: Making call to close connection to plugin binary
I0318 12:41:56.646382 1124811 main.go:141] libmachine: (functional-377562) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-377562 ssh pgrep buildkitd: exit status 1 (221.622225ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 image build -t localhost/my-image:functional-377562 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-377562 image build -t localhost/my-image:functional-377562 testdata/build --alsologtostderr: (3.342262726s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-377562 image build -t localhost/my-image:functional-377562 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 6b92c421ef2
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-377562
--> 27a2fe5a38e
Successfully tagged localhost/my-image:functional-377562
27a2fe5a38ebb509ff5f15c8336fc2dc380574e62b87670b5bc50becacadc368
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-377562 image build -t localhost/my-image:functional-377562 testdata/build --alsologtostderr:
I0318 12:41:56.930954 1124890 out.go:291] Setting OutFile to fd 1 ...
I0318 12:41:56.931084 1124890 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 12:41:56.931094 1124890 out.go:304] Setting ErrFile to fd 2...
I0318 12:41:56.931100 1124890 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0318 12:41:56.931276 1124890 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
I0318 12:41:56.931904 1124890 config.go:182] Loaded profile config "functional-377562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0318 12:41:56.932664 1124890 config.go:182] Loaded profile config "functional-377562": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0318 12:41:56.933026 1124890 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0318 12:41:56.933063 1124890 main.go:141] libmachine: Launching plugin server for driver kvm2
I0318 12:41:56.948553 1124890 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42693
I0318 12:41:56.949128 1124890 main.go:141] libmachine: () Calling .GetVersion
I0318 12:41:56.949727 1124890 main.go:141] libmachine: Using API Version  1
I0318 12:41:56.949749 1124890 main.go:141] libmachine: () Calling .SetConfigRaw
I0318 12:41:56.950172 1124890 main.go:141] libmachine: () Calling .GetMachineName
I0318 12:41:56.950336 1124890 main.go:141] libmachine: (functional-377562) Calling .GetState
I0318 12:41:56.952500 1124890 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0318 12:41:56.952567 1124890 main.go:141] libmachine: Launching plugin server for driver kvm2
I0318 12:41:56.967835 1124890 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39353
I0318 12:41:56.968350 1124890 main.go:141] libmachine: () Calling .GetVersion
I0318 12:41:56.968860 1124890 main.go:141] libmachine: Using API Version  1
I0318 12:41:56.968889 1124890 main.go:141] libmachine: () Calling .SetConfigRaw
I0318 12:41:56.969297 1124890 main.go:141] libmachine: () Calling .GetMachineName
I0318 12:41:56.969516 1124890 main.go:141] libmachine: (functional-377562) Calling .DriverName
I0318 12:41:56.969777 1124890 ssh_runner.go:195] Run: systemctl --version
I0318 12:41:56.969818 1124890 main.go:141] libmachine: (functional-377562) Calling .GetSSHHostname
I0318 12:41:56.972683 1124890 main.go:141] libmachine: (functional-377562) DBG | domain functional-377562 has defined MAC address 52:54:00:22:00:d6 in network mk-functional-377562
I0318 12:41:56.973137 1124890 main.go:141] libmachine: (functional-377562) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:22:00:d6", ip: ""} in network mk-functional-377562: {Iface:virbr1 ExpiryTime:2024-03-18 13:26:47 +0000 UTC Type:0 Mac:52:54:00:22:00:d6 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:functional-377562 Clientid:01:52:54:00:22:00:d6}
I0318 12:41:56.973165 1124890 main.go:141] libmachine: (functional-377562) DBG | domain functional-377562 has defined IP address 192.168.39.224 and MAC address 52:54:00:22:00:d6 in network mk-functional-377562
I0318 12:41:56.973334 1124890 main.go:141] libmachine: (functional-377562) Calling .GetSSHPort
I0318 12:41:56.973501 1124890 main.go:141] libmachine: (functional-377562) Calling .GetSSHKeyPath
I0318 12:41:56.973630 1124890 main.go:141] libmachine: (functional-377562) Calling .GetSSHUsername
I0318 12:41:56.973809 1124890 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/functional-377562/id_rsa Username:docker}
I0318 12:41:57.059738 1124890 build_images.go:161] Building image from path: /tmp/build.658736953.tar
I0318 12:41:57.059805 1124890 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0318 12:41:57.081977 1124890 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.658736953.tar
I0318 12:41:57.094168 1124890 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.658736953.tar: stat -c "%s %y" /var/lib/minikube/build/build.658736953.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.658736953.tar': No such file or directory
I0318 12:41:57.094213 1124890 ssh_runner.go:362] scp /tmp/build.658736953.tar --> /var/lib/minikube/build/build.658736953.tar (3072 bytes)
I0318 12:41:57.130621 1124890 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.658736953
I0318 12:41:57.144606 1124890 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.658736953 -xf /var/lib/minikube/build/build.658736953.tar
I0318 12:41:57.164381 1124890 crio.go:297] Building image: /var/lib/minikube/build/build.658736953
I0318 12:41:57.164477 1124890 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-377562 /var/lib/minikube/build/build.658736953 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0318 12:42:00.181960 1124890 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-377562 /var/lib/minikube/build/build.658736953 --cgroup-manager=cgroupfs: (3.017446043s)
I0318 12:42:00.182060 1124890 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.658736953
I0318 12:42:00.197827 1124890 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.658736953.tar
I0318 12:42:00.209608 1124890 build_images.go:217] Built localhost/my-image:functional-377562 from /tmp/build.658736953.tar
I0318 12:42:00.209654 1124890 build_images.go:133] succeeded building to: functional-377562
I0318 12:42:00.209660 1124890 build_images.go:134] failed building to: 
I0318 12:42:00.209703 1124890 main.go:141] libmachine: Making call to close driver server
I0318 12:42:00.209723 1124890 main.go:141] libmachine: (functional-377562) Calling .Close
I0318 12:42:00.210073 1124890 main.go:141] libmachine: (functional-377562) DBG | Closing plugin on server side
I0318 12:42:00.210073 1124890 main.go:141] libmachine: Successfully made call to close driver server
I0318 12:42:00.210104 1124890 main.go:141] libmachine: Making call to close connection to plugin binary
I0318 12:42:00.210113 1124890 main.go:141] libmachine: Making call to close driver server
I0318 12:42:00.210122 1124890 main.go:141] libmachine: (functional-377562) Calling .Close
I0318 12:42:00.210373 1124890 main.go:141] libmachine: Successfully made call to close driver server
I0318 12:42:00.210391 1124890 main.go:141] libmachine: Making call to close connection to plugin binary
I0318 12:42:00.210432 1124890 main.go:141] libmachine: (functional-377562) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.81s)
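For reference, the build steps printed above correspond to a three-line Dockerfile; this is a sketch inferred from the STEP output, since the actual contents of testdata/build (including content.txt) are not shown in the log.

# Dockerfile sketch reconstructed from the STEP lines above (assumed, not the literal testdata/build file)
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /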

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.069835112s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-377562
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.09s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "292.47021ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "74.148538ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "298.278395ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "113.611788ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (25.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-377562 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-377562 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-tj5fw" [397bcd4a-c19a-4699-8592-38466b9f477d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-tj5fw" [397bcd4a-c19a-4699-8592-38466b9f477d] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 25.006219149s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (25.22s)
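The hello-node deployment the test creates can be reproduced by hand with the same two kubectl calls; a minimal sketch, assuming the functional-377562 kube context from the log is still available (the final watch command is an added convenience, not part of the test).

# Sketch of the steps this test performs against the functional-377562 context
kubectl --context functional-377562 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
kubectl --context functional-377562 expose deployment hello-node --type=NodePort --port=8080
kubectl --context functional-377562 get pods -l app=hello-node -w   # wait for the pod to report Running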

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 image load --daemon gcr.io/google-containers/addon-resizer:functional-377562 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-377562 image load --daemon gcr.io/google-containers/addon-resizer:functional-377562 --alsologtostderr: (3.440464365s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.73s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.07733323s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-377562
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 image load --daemon gcr.io/google-containers/addon-resizer:functional-377562 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-377562 image load --daemon gcr.io/google-containers/addon-resizer:functional-377562 --alsologtostderr: (4.378713515s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.72s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 image save gcr.io/google-containers/addon-resizer:functional-377562 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-377562 image save gcr.io/google-containers/addon-resizer:functional-377562 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.55551442s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.56s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 image rm gcr.io/google-containers/addon-resizer:functional-377562 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-377562 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.219998302s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.46s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-377562
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 image save --daemon gcr.io/google-containers/addon-resizer:functional-377562 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-377562 image save --daemon gcr.io/google-containers/addon-resizer:functional-377562 --alsologtostderr: (1.001005102s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-377562
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.04s)
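Taken together, the ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon subtests above amount to a save/remove/load round trip; a sketch of that flow, assuming a minikube binary on PATH and /tmp as a writable scratch location (the test itself uses the Jenkins workspace path shown in the log).

# Round-trip sketch of the image save/load flow exercised above
minikube -p functional-377562 image save gcr.io/google-containers/addon-resizer:functional-377562 /tmp/addon-resizer-save.tar
minikube -p functional-377562 image rm gcr.io/google-containers/addon-resizer:functional-377562
minikube -p functional-377562 image load /tmp/addon-resizer-save.tar
minikube -p functional-377562 image ls   # the addon-resizer tag should be listed again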

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.46s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (10.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-377562 /tmp/TestFunctionalparallelMountCmdany-port3942997611/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1710765712934411624" to /tmp/TestFunctionalparallelMountCmdany-port3942997611/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1710765712934411624" to /tmp/TestFunctionalparallelMountCmdany-port3942997611/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1710765712934411624" to /tmp/TestFunctionalparallelMountCmdany-port3942997611/001/test-1710765712934411624
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-377562 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (243.399905ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar 18 12:41 created-by-test
-rw-r--r-- 1 docker docker 24 Mar 18 12:41 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar 18 12:41 test-1710765712934411624
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 ssh cat /mount-9p/test-1710765712934411624
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-377562 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [fb628a15-8a79-4e4b-98e4-8725b1363bfd] Pending
helpers_test.go:344: "busybox-mount" [fb628a15-8a79-4e4b-98e4-8725b1363bfd] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [fb628a15-8a79-4e4b-98e4-8725b1363bfd] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [fb628a15-8a79-4e4b-98e4-8725b1363bfd] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.004916705s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-377562 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-377562 /tmp/TestFunctionalparallelMountCmdany-port3942997611/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.05s)
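The 9p mount cycle above can be checked by hand with the same commands the test issues; a sketch, assuming a minikube binary on PATH and a throwaway host directory (the test uses the out/minikube-linux-amd64 build and a generated temp dir instead).

# Sketch of the mount / verify / unmount cycle from this test
minikube mount -p functional-377562 /tmp/mount-test:/mount-9p &   # keep the mount helper running in the background
minikube -p functional-377562 ssh "findmnt -T /mount-9p | grep 9p"
minikube -p functional-377562 ssh -- ls -la /mount-9p
minikube -p functional-377562 ssh "sudo umount -f /mount-9p"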

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 service list -o json
functional_test.go:1490: Took "517.232005ms" to run "out/minikube-linux-amd64 -p functional-377562 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.224:30567
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.33s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.224:30567
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.44s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-377562 /tmp/TestFunctionalparallelMountCmdspecific-port3906676000/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-377562 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (263.33574ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-377562 /tmp/TestFunctionalparallelMountCmdspecific-port3906676000/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-377562 ssh "sudo umount -f /mount-9p": exit status 1 (282.969069ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-377562 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-377562 /tmp/TestFunctionalparallelMountCmdspecific-port3906676000/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.15s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-377562 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2270119902/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-377562 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2270119902/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-377562 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2270119902/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-377562 ssh "findmnt -T" /mount1: exit status 1 (319.788062ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-377562 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-377562 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-377562 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2270119902/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-377562 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2270119902/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-377562 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2270119902/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.54s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-377562
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-377562
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-377562
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (241.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-328109 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0318 12:44:30.300601 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/client.crt: no such file or directory
E0318 12:45:53.347968 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/client.crt: no such file or directory
E0318 12:46:24.904987 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/functional-377562/client.crt: no such file or directory
E0318 12:46:24.910302 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/functional-377562/client.crt: no such file or directory
E0318 12:46:24.920609 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/functional-377562/client.crt: no such file or directory
E0318 12:46:24.940885 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/functional-377562/client.crt: no such file or directory
E0318 12:46:24.981186 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/functional-377562/client.crt: no such file or directory
E0318 12:46:25.061556 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/functional-377562/client.crt: no such file or directory
E0318 12:46:25.222021 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/functional-377562/client.crt: no such file or directory
E0318 12:46:25.543082 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/functional-377562/client.crt: no such file or directory
E0318 12:46:26.183406 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/functional-377562/client.crt: no such file or directory
E0318 12:46:27.463818 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/functional-377562/client.crt: no such file or directory
E0318 12:46:30.024791 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/functional-377562/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-328109 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (4m1.13047673s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (241.86s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.43s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-328109 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-328109 -- rollout status deployment/busybox
E0318 12:46:35.148520 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/functional-377562/client.crt: no such file or directory
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-328109 -- rollout status deployment/busybox: (4.799498202s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-328109 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-328109 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-328109 -- exec busybox-5b5d89c9d6-fz4kl -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-328109 -- exec busybox-5b5d89c9d6-gv6tf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-328109 -- exec busybox-5b5d89c9d6-sx4mf -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-328109 -- exec busybox-5b5d89c9d6-fz4kl -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-328109 -- exec busybox-5b5d89c9d6-gv6tf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-328109 -- exec busybox-5b5d89c9d6-sx4mf -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-328109 -- exec busybox-5b5d89c9d6-fz4kl -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-328109 -- exec busybox-5b5d89c9d6-gv6tf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-328109 -- exec busybox-5b5d89c9d6-sx4mf -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.43s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-328109 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-328109 -- exec busybox-5b5d89c9d6-fz4kl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-328109 -- exec busybox-5b5d89c9d6-fz4kl -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-328109 -- exec busybox-5b5d89c9d6-gv6tf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-328109 -- exec busybox-5b5d89c9d6-gv6tf -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-328109 -- exec busybox-5b5d89c9d6-sx4mf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-328109 -- exec busybox-5b5d89c9d6-sx4mf -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.52s)
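Per pod, the DNS and host-reachability checks above reduce to two exec commands; a sketch using plain kubectl, assuming the ha-328109 context exists on the host (the test goes through the bundled minikube kubectl wrapper instead) and reusing one pod name from the log.

# Sketch of the per-pod checks; any of the busybox pods listed above works
kubectl --context ha-328109 exec busybox-5b5d89c9d6-fz4kl -- nslookup kubernetes.default.svc.cluster.local
kubectl --context ha-328109 exec busybox-5b5d89c9d6-fz4kl -- sh -c "ping -c 1 192.168.39.1"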

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (49.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-328109 -v=7 --alsologtostderr
E0318 12:46:45.388747 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/functional-377562/client.crt: no such file or directory
E0318 12:47:05.869016 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/functional-377562/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-328109 -v=7 --alsologtostderr: (48.342947432s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (49.24s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-328109 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.58s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 cp testdata/cp-test.txt ha-328109:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 ssh -n ha-328109 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 cp ha-328109:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1988805859/001/cp-test_ha-328109.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 ssh -n ha-328109 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 cp ha-328109:/home/docker/cp-test.txt ha-328109-m02:/home/docker/cp-test_ha-328109_ha-328109-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 ssh -n ha-328109 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 ssh -n ha-328109-m02 "sudo cat /home/docker/cp-test_ha-328109_ha-328109-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 cp ha-328109:/home/docker/cp-test.txt ha-328109-m03:/home/docker/cp-test_ha-328109_ha-328109-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 ssh -n ha-328109 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 ssh -n ha-328109-m03 "sudo cat /home/docker/cp-test_ha-328109_ha-328109-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 cp ha-328109:/home/docker/cp-test.txt ha-328109-m04:/home/docker/cp-test_ha-328109_ha-328109-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 ssh -n ha-328109 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 ssh -n ha-328109-m04 "sudo cat /home/docker/cp-test_ha-328109_ha-328109-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 cp testdata/cp-test.txt ha-328109-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 ssh -n ha-328109-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 cp ha-328109-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1988805859/001/cp-test_ha-328109-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 ssh -n ha-328109-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 cp ha-328109-m02:/home/docker/cp-test.txt ha-328109:/home/docker/cp-test_ha-328109-m02_ha-328109.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 ssh -n ha-328109-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 ssh -n ha-328109 "sudo cat /home/docker/cp-test_ha-328109-m02_ha-328109.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 cp ha-328109-m02:/home/docker/cp-test.txt ha-328109-m03:/home/docker/cp-test_ha-328109-m02_ha-328109-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 ssh -n ha-328109-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 ssh -n ha-328109-m03 "sudo cat /home/docker/cp-test_ha-328109-m02_ha-328109-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 cp ha-328109-m02:/home/docker/cp-test.txt ha-328109-m04:/home/docker/cp-test_ha-328109-m02_ha-328109-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 ssh -n ha-328109-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 ssh -n ha-328109-m04 "sudo cat /home/docker/cp-test_ha-328109-m02_ha-328109-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 cp testdata/cp-test.txt ha-328109-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 ssh -n ha-328109-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 cp ha-328109-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1988805859/001/cp-test_ha-328109-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 ssh -n ha-328109-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 cp ha-328109-m03:/home/docker/cp-test.txt ha-328109:/home/docker/cp-test_ha-328109-m03_ha-328109.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 ssh -n ha-328109-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 ssh -n ha-328109 "sudo cat /home/docker/cp-test_ha-328109-m03_ha-328109.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 cp ha-328109-m03:/home/docker/cp-test.txt ha-328109-m02:/home/docker/cp-test_ha-328109-m03_ha-328109-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 ssh -n ha-328109-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 ssh -n ha-328109-m02 "sudo cat /home/docker/cp-test_ha-328109-m03_ha-328109-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 cp ha-328109-m03:/home/docker/cp-test.txt ha-328109-m04:/home/docker/cp-test_ha-328109-m03_ha-328109-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 ssh -n ha-328109-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 ssh -n ha-328109-m04 "sudo cat /home/docker/cp-test_ha-328109-m03_ha-328109-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 cp testdata/cp-test.txt ha-328109-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 ssh -n ha-328109-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 cp ha-328109-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1988805859/001/cp-test_ha-328109-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 ssh -n ha-328109-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 cp ha-328109-m04:/home/docker/cp-test.txt ha-328109:/home/docker/cp-test_ha-328109-m04_ha-328109.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 ssh -n ha-328109-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 ssh -n ha-328109 "sudo cat /home/docker/cp-test_ha-328109-m04_ha-328109.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 cp ha-328109-m04:/home/docker/cp-test.txt ha-328109-m02:/home/docker/cp-test_ha-328109-m04_ha-328109-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 ssh -n ha-328109-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 ssh -n ha-328109-m02 "sudo cat /home/docker/cp-test_ha-328109-m04_ha-328109-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 cp ha-328109-m04:/home/docker/cp-test.txt ha-328109-m03:/home/docker/cp-test_ha-328109-m04_ha-328109-m03.txt
E0318 12:47:46.829415 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/functional-377562/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 ssh -n ha-328109-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 ssh -n ha-328109-m03 "sudo cat /home/docker/cp-test_ha-328109-m04_ha-328109-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.85s)
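Each entry in the copy matrix above is a cp followed by an ssh cat on the destination node; a single pair is sketched below, assuming a minikube binary on PATH (the test iterates this over every node combination in the ha-328109 cluster).

# One cp/verify pair from the matrix above
minikube -p ha-328109 cp testdata/cp-test.txt ha-328109-m02:/home/docker/cp-test.txt
minikube -p ha-328109 ssh -n ha-328109-m02 "sudo cat /home/docker/cp-test.txt"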

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.51s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.505649825s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.51s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.44s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (17.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-328109 node delete m03 -v=7 --alsologtostderr: (16.754023975s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.53s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.4s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.40s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (326.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-328109 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0318 13:01:24.906573 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/functional-377562/client.crt: no such file or directory
E0318 13:02:33.348575 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/client.crt: no such file or directory
E0318 13:02:47.952056 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/functional-377562/client.crt: no such file or directory
E0318 13:04:30.297510 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-328109 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m25.240048456s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (326.08s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.41s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (76.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-328109 --control-plane -v=7 --alsologtostderr
E0318 13:06:24.905566 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/functional-377562/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-328109 --control-plane -v=7 --alsologtostderr: (1m15.256657495s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-328109 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (76.16s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.56s)

                                                
                                    
TestJSONOutput/start/Command (58.94s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-807915 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-807915 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (58.93711218s)
--- PASS: TestJSONOutput/start/Command (58.94s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.81s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-807915 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.81s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.69s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-807915 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.69s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.44s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-807915 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-807915 --output=json --user=testUser: (7.440152606s)
--- PASS: TestJSONOutput/stop/Command (7.44s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-842919 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-842919 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (76.503894ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"1fcf5823-6811-446c-9857-9ea151cf4eb4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-842919] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"969ac092-fbeb-4aaf-9314-86735fb6cdeb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18429"}}
	{"specversion":"1.0","id":"059c097a-24ff-443e-bd50-bc71cf7a59bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e640bfd6-05c8-4c54-a8ae-51b25ea4ff83","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18429-1106816/kubeconfig"}}
	{"specversion":"1.0","id":"7ae9562b-0252-4844-8cf4-2b96ab838410","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18429-1106816/.minikube"}}
	{"specversion":"1.0","id":"53c41be4-d327-4c7f-8733-d21e8b9d5c9c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"153bdd1d-9801-40bb-8c77-2524f23340fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"420e8b67-1658-4eb8-98fe-4a93b896d89c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-842919" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-842919
--- PASS: TestErrorJSONOutput (0.22s)
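
The stdout captured above shows the shape of a minikube error event: type io.k8s.sigs.minikube.error with exitcode, name, and message inside data. A minimal decoding sketch for one such line, using only field names taken from the log above (everything else is assumed):

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// errorEvent keeps only the fields visible in the error line above.
type errorEvent struct {
	Type string `json:"type"`
	Data struct {
		ExitCode string `json:"exitcode"`
		Name     string `json:"name"`
		Message  string `json:"message"`
	} `json:"data"`
}

func main() {
	// Line trimmed from the captured stdout above, keeping the relevant fields.
	line := `{"type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`
	var ev errorEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		log.Fatal(err)
	}
	if ev.Type == "io.k8s.sigs.minikube.error" {
		fmt.Printf("minikube failed (%s, exit %s): %s\n",
			ev.Data.Name, ev.Data.ExitCode, ev.Data.Message)
	}
}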

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (93.07s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-963026 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-963026 --driver=kvm2  --container-runtime=crio: (46.39800512s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-965892 --driver=kvm2  --container-runtime=crio
E0318 13:09:30.297453 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-965892 --driver=kvm2  --container-runtime=crio: (43.891688927s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-963026
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-965892
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-965892" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-965892
helpers_test.go:175: Cleaning up "first-963026" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-963026
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-963026: (1.026613649s)
--- PASS: TestMinikubeProfile (93.07s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (29.74s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-170620 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-170620 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (28.73680223s)
--- PASS: TestMountStart/serial/StartWithMountFirst (29.74s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-170620 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-170620 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.41s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (32.17s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-188338 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-188338 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (31.170387328s)
--- PASS: TestMountStart/serial/StartWithMountSecond (32.17s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-188338 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-188338 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-170620 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-188338 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-188338 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.40s)

                                                
                                    
TestMountStart/serial/Stop (1.43s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-188338
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-188338: (1.425842108s)
--- PASS: TestMountStart/serial/Stop (1.43s)

                                                
                                    
TestMountStart/serial/RestartStopped (24.26s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-188338
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-188338: (23.255384554s)
--- PASS: TestMountStart/serial/RestartStopped (24.26s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-188338 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-188338 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.41s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (110.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-229365 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0318 13:11:24.905534 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/functional-377562/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-229365 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m50.021138475s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229365 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (110.45s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-229365 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-229365 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-229365 -- rollout status deployment/busybox: (4.246798548s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-229365 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-229365 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-229365 -- exec busybox-5b5d89c9d6-cc5z6 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-229365 -- exec busybox-5b5d89c9d6-pjdnm -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-229365 -- exec busybox-5b5d89c9d6-cc5z6 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-229365 -- exec busybox-5b5d89c9d6-pjdnm -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-229365 -- exec busybox-5b5d89c9d6-cc5z6 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-229365 -- exec busybox-5b5d89c9d6-pjdnm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.06s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-229365 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-229365 -- exec busybox-5b5d89c9d6-cc5z6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-229365 -- exec busybox-5b5d89c9d6-cc5z6 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-229365 -- exec busybox-5b5d89c9d6-pjdnm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-229365 -- exec busybox-5b5d89c9d6-pjdnm -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.97s)

                                                
                                    
TestMultiNode/serial/AddNode (46.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-229365 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-229365 -v 3 --alsologtostderr: (45.597319438s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229365 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (46.20s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-229365 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.24s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229365 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229365 cp testdata/cp-test.txt multinode-229365:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229365 ssh -n multinode-229365 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229365 cp multinode-229365:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile690292982/001/cp-test_multinode-229365.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229365 ssh -n multinode-229365 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229365 cp multinode-229365:/home/docker/cp-test.txt multinode-229365-m02:/home/docker/cp-test_multinode-229365_multinode-229365-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229365 ssh -n multinode-229365 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229365 ssh -n multinode-229365-m02 "sudo cat /home/docker/cp-test_multinode-229365_multinode-229365-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229365 cp multinode-229365:/home/docker/cp-test.txt multinode-229365-m03:/home/docker/cp-test_multinode-229365_multinode-229365-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229365 ssh -n multinode-229365 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229365 ssh -n multinode-229365-m03 "sudo cat /home/docker/cp-test_multinode-229365_multinode-229365-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229365 cp testdata/cp-test.txt multinode-229365-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229365 ssh -n multinode-229365-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229365 cp multinode-229365-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile690292982/001/cp-test_multinode-229365-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229365 ssh -n multinode-229365-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229365 cp multinode-229365-m02:/home/docker/cp-test.txt multinode-229365:/home/docker/cp-test_multinode-229365-m02_multinode-229365.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229365 ssh -n multinode-229365-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229365 ssh -n multinode-229365 "sudo cat /home/docker/cp-test_multinode-229365-m02_multinode-229365.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229365 cp multinode-229365-m02:/home/docker/cp-test.txt multinode-229365-m03:/home/docker/cp-test_multinode-229365-m02_multinode-229365-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229365 ssh -n multinode-229365-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229365 ssh -n multinode-229365-m03 "sudo cat /home/docker/cp-test_multinode-229365-m02_multinode-229365-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229365 cp testdata/cp-test.txt multinode-229365-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229365 ssh -n multinode-229365-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229365 cp multinode-229365-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile690292982/001/cp-test_multinode-229365-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229365 ssh -n multinode-229365-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229365 cp multinode-229365-m03:/home/docker/cp-test.txt multinode-229365:/home/docker/cp-test_multinode-229365-m03_multinode-229365.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229365 ssh -n multinode-229365-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229365 ssh -n multinode-229365 "sudo cat /home/docker/cp-test_multinode-229365-m03_multinode-229365.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229365 cp multinode-229365-m03:/home/docker/cp-test.txt multinode-229365-m02:/home/docker/cp-test_multinode-229365-m03_multinode-229365-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229365 ssh -n multinode-229365-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229365 ssh -n multinode-229365-m02 "sudo cat /home/docker/cp-test_multinode-229365-m03_multinode-229365-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.89s)
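
The copy-file checks above all follow the same round trip: `minikube cp` a file onto a node (or between nodes), then `minikube ssh -n <node> "sudo cat ..."` to confirm the contents arrived. A hedged sketch of that round trip driven from Go, with an illustrative profile name and paths (not the test's own helpers):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// run executes minikube with the given arguments and returns trimmed output.
func run(args ...string) (string, error) {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	const profile = "multinode-demo" // illustrative profile name

	// Copy a local file onto the primary node...
	if _, err := run("-p", profile, "cp", "testdata/cp-test.txt",
		profile+":/home/docker/cp-test.txt"); err != nil {
		log.Fatalf("cp failed: %v", err)
	}
	// ...then read it back over SSH to verify the contents.
	got, err := run("-p", profile, "ssh", "-n", profile,
		"sudo cat /home/docker/cp-test.txt")
	if err != nil {
		log.Fatalf("ssh cat failed: %v", err)
	}
	fmt.Println("node copy of cp-test.txt:", got)
}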

                                                
                                    
TestMultiNode/serial/StopNode (3.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229365 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-229365 node stop m03: (2.293975108s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229365 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-229365 status: exit status 7 (442.746266ms)

                                                
                                                
-- stdout --
	multinode-229365
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-229365-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-229365-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229365 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-229365 status --alsologtostderr: exit status 7 (441.045917ms)

                                                
                                                
-- stdout --
	multinode-229365
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-229365-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-229365-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0318 13:14:11.418199 1140766 out.go:291] Setting OutFile to fd 1 ...
	I0318 13:14:11.418317 1140766 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:14:11.418326 1140766 out.go:304] Setting ErrFile to fd 2...
	I0318 13:14:11.418340 1140766 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0318 13:14:11.418559 1140766 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18429-1106816/.minikube/bin
	I0318 13:14:11.418738 1140766 out.go:298] Setting JSON to false
	I0318 13:14:11.418778 1140766 mustload.go:65] Loading cluster: multinode-229365
	I0318 13:14:11.418913 1140766 notify.go:220] Checking for updates...
	I0318 13:14:11.419250 1140766 config.go:182] Loaded profile config "multinode-229365": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0318 13:14:11.419275 1140766 status.go:255] checking status of multinode-229365 ...
	I0318 13:14:11.419787 1140766 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:14:11.419853 1140766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:14:11.435981 1140766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44937
	I0318 13:14:11.436425 1140766 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:14:11.437021 1140766 main.go:141] libmachine: Using API Version  1
	I0318 13:14:11.437041 1140766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:14:11.437448 1140766 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:14:11.437673 1140766 main.go:141] libmachine: (multinode-229365) Calling .GetState
	I0318 13:14:11.439243 1140766 status.go:330] multinode-229365 host status = "Running" (err=<nil>)
	I0318 13:14:11.439263 1140766 host.go:66] Checking if "multinode-229365" exists ...
	I0318 13:14:11.439585 1140766 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:14:11.439639 1140766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:14:11.455425 1140766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38881
	I0318 13:14:11.455827 1140766 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:14:11.456303 1140766 main.go:141] libmachine: Using API Version  1
	I0318 13:14:11.456341 1140766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:14:11.456649 1140766 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:14:11.456844 1140766 main.go:141] libmachine: (multinode-229365) Calling .GetIP
	I0318 13:14:11.459367 1140766 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:14:11.459763 1140766 main.go:141] libmachine: (multinode-229365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:cf:2f", ip: ""} in network mk-multinode-229365: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:32 +0000 UTC Type:0 Mac:52:54:00:f0:cf:2f Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:multinode-229365 Clientid:01:52:54:00:f0:cf:2f}
	I0318 13:14:11.459800 1140766 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined IP address 192.168.39.156 and MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:14:11.460010 1140766 host.go:66] Checking if "multinode-229365" exists ...
	I0318 13:14:11.460561 1140766 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:14:11.460621 1140766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:14:11.475481 1140766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44441
	I0318 13:14:11.475866 1140766 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:14:11.476355 1140766 main.go:141] libmachine: Using API Version  1
	I0318 13:14:11.476379 1140766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:14:11.476719 1140766 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:14:11.476937 1140766 main.go:141] libmachine: (multinode-229365) Calling .DriverName
	I0318 13:14:11.477162 1140766 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 13:14:11.477202 1140766 main.go:141] libmachine: (multinode-229365) Calling .GetSSHHostname
	I0318 13:14:11.479851 1140766 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:14:11.480307 1140766 main.go:141] libmachine: (multinode-229365) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:cf:2f", ip: ""} in network mk-multinode-229365: {Iface:virbr1 ExpiryTime:2024-03-18 14:11:32 +0000 UTC Type:0 Mac:52:54:00:f0:cf:2f Iaid: IPaddr:192.168.39.156 Prefix:24 Hostname:multinode-229365 Clientid:01:52:54:00:f0:cf:2f}
	I0318 13:14:11.480355 1140766 main.go:141] libmachine: (multinode-229365) DBG | domain multinode-229365 has defined IP address 192.168.39.156 and MAC address 52:54:00:f0:cf:2f in network mk-multinode-229365
	I0318 13:14:11.480460 1140766 main.go:141] libmachine: (multinode-229365) Calling .GetSSHPort
	I0318 13:14:11.480676 1140766 main.go:141] libmachine: (multinode-229365) Calling .GetSSHKeyPath
	I0318 13:14:11.480844 1140766 main.go:141] libmachine: (multinode-229365) Calling .GetSSHUsername
	I0318 13:14:11.480968 1140766 sshutil.go:53] new ssh client: &{IP:192.168.39.156 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/multinode-229365/id_rsa Username:docker}
	I0318 13:14:11.569645 1140766 ssh_runner.go:195] Run: systemctl --version
	I0318 13:14:11.577241 1140766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:14:11.594002 1140766 kubeconfig.go:125] found "multinode-229365" server: "https://192.168.39.156:8443"
	I0318 13:14:11.594031 1140766 api_server.go:166] Checking apiserver status ...
	I0318 13:14:11.594070 1140766 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0318 13:14:11.608513 1140766 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1086/cgroup
	W0318 13:14:11.619527 1140766 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1086/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0318 13:14:11.619588 1140766 ssh_runner.go:195] Run: ls
	I0318 13:14:11.624738 1140766 api_server.go:253] Checking apiserver healthz at https://192.168.39.156:8443/healthz ...
	I0318 13:14:11.629631 1140766 api_server.go:279] https://192.168.39.156:8443/healthz returned 200:
	ok
	I0318 13:14:11.629657 1140766 status.go:422] multinode-229365 apiserver status = Running (err=<nil>)
	I0318 13:14:11.629667 1140766 status.go:257] multinode-229365 status: &{Name:multinode-229365 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0318 13:14:11.629692 1140766 status.go:255] checking status of multinode-229365-m02 ...
	I0318 13:14:11.629999 1140766 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:14:11.630033 1140766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:14:11.646581 1140766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40669
	I0318 13:14:11.647015 1140766 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:14:11.647422 1140766 main.go:141] libmachine: Using API Version  1
	I0318 13:14:11.647446 1140766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:14:11.647843 1140766 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:14:11.648031 1140766 main.go:141] libmachine: (multinode-229365-m02) Calling .GetState
	I0318 13:14:11.649602 1140766 status.go:330] multinode-229365-m02 host status = "Running" (err=<nil>)
	I0318 13:14:11.649618 1140766 host.go:66] Checking if "multinode-229365-m02" exists ...
	I0318 13:14:11.649885 1140766 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:14:11.649929 1140766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:14:11.665365 1140766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34061
	I0318 13:14:11.665775 1140766 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:14:11.666255 1140766 main.go:141] libmachine: Using API Version  1
	I0318 13:14:11.666282 1140766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:14:11.666607 1140766 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:14:11.666823 1140766 main.go:141] libmachine: (multinode-229365-m02) Calling .GetIP
	I0318 13:14:11.669422 1140766 main.go:141] libmachine: (multinode-229365-m02) DBG | domain multinode-229365-m02 has defined MAC address 52:54:00:ea:83:02 in network mk-multinode-229365
	I0318 13:14:11.669878 1140766 main.go:141] libmachine: (multinode-229365-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:83:02", ip: ""} in network mk-multinode-229365: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:41 +0000 UTC Type:0 Mac:52:54:00:ea:83:02 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:multinode-229365-m02 Clientid:01:52:54:00:ea:83:02}
	I0318 13:14:11.669902 1140766 main.go:141] libmachine: (multinode-229365-m02) DBG | domain multinode-229365-m02 has defined IP address 192.168.39.29 and MAC address 52:54:00:ea:83:02 in network mk-multinode-229365
	I0318 13:14:11.670061 1140766 host.go:66] Checking if "multinode-229365-m02" exists ...
	I0318 13:14:11.670384 1140766 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:14:11.670429 1140766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:14:11.684919 1140766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40501
	I0318 13:14:11.685303 1140766 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:14:11.685766 1140766 main.go:141] libmachine: Using API Version  1
	I0318 13:14:11.685781 1140766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:14:11.686142 1140766 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:14:11.686316 1140766 main.go:141] libmachine: (multinode-229365-m02) Calling .DriverName
	I0318 13:14:11.686547 1140766 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0318 13:14:11.686566 1140766 main.go:141] libmachine: (multinode-229365-m02) Calling .GetSSHHostname
	I0318 13:14:11.689216 1140766 main.go:141] libmachine: (multinode-229365-m02) DBG | domain multinode-229365-m02 has defined MAC address 52:54:00:ea:83:02 in network mk-multinode-229365
	I0318 13:14:11.689636 1140766 main.go:141] libmachine: (multinode-229365-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ea:83:02", ip: ""} in network mk-multinode-229365: {Iface:virbr1 ExpiryTime:2024-03-18 14:12:41 +0000 UTC Type:0 Mac:52:54:00:ea:83:02 Iaid: IPaddr:192.168.39.29 Prefix:24 Hostname:multinode-229365-m02 Clientid:01:52:54:00:ea:83:02}
	I0318 13:14:11.689664 1140766 main.go:141] libmachine: (multinode-229365-m02) DBG | domain multinode-229365-m02 has defined IP address 192.168.39.29 and MAC address 52:54:00:ea:83:02 in network mk-multinode-229365
	I0318 13:14:11.689830 1140766 main.go:141] libmachine: (multinode-229365-m02) Calling .GetSSHPort
	I0318 13:14:11.690008 1140766 main.go:141] libmachine: (multinode-229365-m02) Calling .GetSSHKeyPath
	I0318 13:14:11.690152 1140766 main.go:141] libmachine: (multinode-229365-m02) Calling .GetSSHUsername
	I0318 13:14:11.690288 1140766 sshutil.go:53] new ssh client: &{IP:192.168.39.29 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18429-1106816/.minikube/machines/multinode-229365-m02/id_rsa Username:docker}
	I0318 13:14:11.768517 1140766 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0318 13:14:11.783043 1140766 status.go:257] multinode-229365-m02 status: &{Name:multinode-229365-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0318 13:14:11.783079 1140766 status.go:255] checking status of multinode-229365-m03 ...
	I0318 13:14:11.783430 1140766 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0318 13:14:11.783475 1140766 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0318 13:14:11.799453 1140766 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33815
	I0318 13:14:11.799852 1140766 main.go:141] libmachine: () Calling .GetVersion
	I0318 13:14:11.800348 1140766 main.go:141] libmachine: Using API Version  1
	I0318 13:14:11.800379 1140766 main.go:141] libmachine: () Calling .SetConfigRaw
	I0318 13:14:11.800697 1140766 main.go:141] libmachine: () Calling .GetMachineName
	I0318 13:14:11.800911 1140766 main.go:141] libmachine: (multinode-229365-m03) Calling .GetState
	I0318 13:14:11.802475 1140766 status.go:330] multinode-229365-m03 host status = "Stopped" (err=<nil>)
	I0318 13:14:11.802487 1140766 status.go:343] host is not running, skipping remaining checks
	I0318 13:14:11.802492 1140766 status.go:257] multinode-229365-m03 status: &{Name:multinode-229365-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.18s)
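
The status dumps above show the per-node fields that minikube status reports (Name, Host, Kubelet, APIServer, Kubeconfig, Worker, as printed in the stderr struct), and a non-zero exit code is expected once a node is stopped (exit status 7 here). As a hedged sketch, the same information could be read as JSON instead of parsing text; note that matching JSON key names and the array-vs-object shape for multinode clusters are assumptions of this sketch, not something the log confirms:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// nodeStatus mirrors the field names visible in the status struct logged
// above; matching JSON keys is an assumption.
type nodeStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	cmd := exec.Command("minikube", "-p", "multinode-demo", "status", "--output", "json")
	out, err := cmd.Output()
	if err != nil {
		// A stopped node makes minikube status exit non-zero (as above),
		// but the JSON payload is still written to stdout.
		log.Printf("status exited non-zero: %v", err)
	}
	var nodes []nodeStatus
	if jsonErr := json.Unmarshal(out, &nodes); jsonErr != nil {
		// Single-node clusters may emit one object rather than an array.
		var single nodeStatus
		if err2 := json.Unmarshal(out, &single); err2 != nil {
			log.Fatalf("cannot decode status output: %v", jsonErr)
		}
		nodes = []nodeStatus{single}
	}
	for _, n := range nodes {
		fmt.Printf("%s: host=%s kubelet=%s apiserver=%s\n",
			n.Name, n.Host, n.Kubelet, n.APIServer)
	}
}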

                                                
                                    
TestMultiNode/serial/StartAfterStop (34.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229365 node start m03 -v=7 --alsologtostderr
E0318 13:14:30.297580 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-229365 node start m03 -v=7 --alsologtostderr: (33.723969525s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229365 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (34.37s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229365 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-229365 node delete m03: (1.886263866s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229365 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.44s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (172.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-229365 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0318 13:24:30.297045 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-229365 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m51.56268054s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-229365 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (172.11s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (47.51s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-229365
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-229365-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-229365-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (75.860949ms)

                                                
                                                
-- stdout --
	* [multinode-229365-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18429
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18429-1106816/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18429-1106816/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-229365-m02' is duplicated with machine name 'multinode-229365-m02' in profile 'multinode-229365'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-229365-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-229365-m03 --driver=kvm2  --container-runtime=crio: (46.132759428s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-229365
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-229365: exit status 80 (231.201087ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-229365 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-229365-m03 already exists in multinode-229365-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-229365-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-229365-m03: (1.012593851s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (47.51s)

                                                
                                    
TestScheduledStopUnix (117.4s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-574267 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-574267 --memory=2048 --driver=kvm2  --container-runtime=crio: (45.635565658s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-574267 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-574267 -n scheduled-stop-574267
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-574267 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-574267 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-574267 -n scheduled-stop-574267
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-574267
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-574267 --schedule 15s
E0318 13:31:24.905004 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/functional-377562/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-574267
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-574267: exit status 7 (81.026792ms)

                                                
                                                
-- stdout --
	scheduled-stop-574267
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-574267 -n scheduled-stop-574267
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-574267 -n scheduled-stop-574267: exit status 7 (75.977552ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-574267" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-574267
--- PASS: TestScheduledStopUnix (117.40s)
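
The scheduled-stop sequence above exercises three flags that all appear in the log: --schedule <duration> to arm a delayed stop, --cancel-scheduled to disarm it, and status --format={{.TimeToStop}} / {{.Host}} to inspect the result. A hedged end-to-end sketch of that flow (the profile name is illustrative, and this is not the test's own code):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// mk runs minikube with the given arguments and returns trimmed output.
func mk(args ...string) string {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	if err != nil {
		log.Printf("minikube %s: %v", strings.Join(args, " "), err)
	}
	return strings.TrimSpace(string(out))
}

func main() {
	const profile = "scheduled-stop-demo" // illustrative

	// Arm a stop five minutes from now; the command returns immediately.
	mk("stop", "-p", profile, "--schedule", "5m")

	// TimeToStop shows how long remains before the scheduled stop fires.
	fmt.Println("time to stop:", mk("status", "--format={{.TimeToStop}}", "-p", profile))

	// Change of plans: cancel the pending stop and confirm the host still runs.
	mk("stop", "-p", profile, "--cancel-scheduled")
	fmt.Println("host state:", mk("status", "--format={{.Host}}", "-p", profile))
}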

                                                
                                    
TestRunningBinaryUpgrade (200.2s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.604834024 start -p running-upgrade-669181 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.604834024 start -p running-upgrade-669181 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m8.610344063s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-669181 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-669181 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m7.639451399s)
helpers_test.go:175: Cleaning up "running-upgrade-669181" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-669181
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-669181: (1.302814785s)
--- PASS: TestRunningBinaryUpgrade (200.20s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-577305 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-577305 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (96.165248ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-577305] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18429
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18429-1106816/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18429-1106816/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
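This negative test passes precisely because minikube rejects the flag combination with exit status 14 (MK_USAGE). For reference, a sketch of the rejected call and the corrected alternatives the stderr above points to, reusing the same profile and flags:

	# rejected: --kubernetes-version cannot be combined with --no-kubernetes
	$ minikube start -p NoKubernetes-577305 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 --container-runtime=crio
	# either drop the version flag...
	$ minikube start -p NoKubernetes-577305 --no-kubernetes --driver=kvm2 --container-runtime=crio
	# ...or clear a globally configured version first, as the error message suggests
	$ minikube config unset kubernetes-version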

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (100.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-577305 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-577305 --driver=kvm2  --container-runtime=crio: (1m40.058375127s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-577305 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (100.33s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (2.86s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.86s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (119.83s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3549557625 start -p stopped-upgrade-847976 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3549557625 start -p stopped-upgrade-847976 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m7.470449178s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3549557625 -p stopped-upgrade-847976 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3549557625 -p stopped-upgrade-847976 stop: (2.122836317s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-847976 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-847976 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (50.233150026s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (119.83s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (39.99s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-577305 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-577305 --no-kubernetes --driver=kvm2  --container-runtime=crio: (38.587347404s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-577305 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-577305 status -o json: exit status 2 (290.090654ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-577305","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-577305
E0318 13:34:30.296784 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/client.crt: no such file or directory
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-577305: (1.114497746s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (39.99s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (29.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-577305 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-577305 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.404581651s)
--- PASS: TestNoKubernetes/serial/Start (29.40s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-577305 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-577305 "sudo systemctl is-active --quiet service kubelet": exit status 1 (227.72688ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)
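The non-zero exit above is the expected outcome: with Kubernetes disabled, the remote `systemctl is-active --quiet service kubelet` reports the unit as inactive (status 3 on the guest), and `minikube ssh` surfaces that as exit status 1, which the test counts as a pass. A hedged manual check without `--quiet`, so the unit state is printed:

	$ minikube ssh -p NoKubernetes-577305 "sudo systemctl is-active kubelet"
	# expected to print an inactive state and exit non-zero while Kubernetes is disabled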

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (15.67s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (15.037447522s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (15.67s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.53s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-577305
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-577305: (1.533541105s)
--- PASS: TestNoKubernetes/serial/Stop (1.53s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (23.94s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-577305 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-577305 --driver=kvm2  --container-runtime=crio: (23.943708273s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (23.94s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-577305 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-577305 "sudo systemctl is-active --quiet service kubelet": exit status 1 (266.162453ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.17s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-847976
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-847976: (1.165543208s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.17s)

                                                
                                    
x
+
TestPause/serial/Start (119.55s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-760389 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
E0318 13:35:53.349925 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-760389 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m59.554216748s)
--- PASS: TestPause/serial/Start (119.55s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (156s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-537236 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0318 13:39:30.297637 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-537236 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (2m35.997967615s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (156.00s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (62.34s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-173036 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-173036 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (1m2.338544846s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (62.34s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (113.65s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-569210 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E0318 13:41:24.905359 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/functional-377562/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-569210 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (1m53.647465747s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (113.65s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.33s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-173036 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f8a2ab77-fe6f-4142-a03a-fbeae8ba017b] Pending
helpers_test.go:344: "busybox" [f8a2ab77-fe6f-4142-a03a-fbeae8ba017b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f8a2ab77-fe6f-4142-a03a-fbeae8ba017b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.005660705s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-173036 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.33s)
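The DeployApp steps here and in the groups below follow the same pattern: apply testdata/busybox.yaml, wait up to 8 minutes for the labelled pod to become healthy, then exec into it. A hedged kubectl equivalent of the wait-and-exec portion, using the context from this run:

	$ kubectl --context embed-certs-173036 wait --for=condition=ready pod -l integration-test=busybox --timeout=8m
	$ kubectl --context embed-certs-173036 exec busybox -- /bin/sh -c "ulimit -n"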

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-173036 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-173036 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.224279055s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-173036 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.32s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-537236 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5e7a25de-518d-4193-a6cd-184c3ce56d2d] Pending
helpers_test.go:344: "busybox" [5e7a25de-518d-4193-a6cd-184c3ce56d2d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5e7a25de-518d-4193-a6cd-184c3ce56d2d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004825742s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-537236 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.31s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-537236 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-537236 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.031836411s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-537236 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.11s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (13.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-569210 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e34c4a2f-e8d9-4595-8653-562d5b635c11] Pending
helpers_test.go:344: "busybox" [e34c4a2f-e8d9-4595-8653-562d5b635c11] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e34c4a2f-e8d9-4595-8653-562d5b635c11] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 13.004594223s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-569210 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (13.28s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-569210 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-569210 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.013851017s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-569210 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.09s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (690.48s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-173036 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-173036 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (11m30.188735031s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-173036 -n embed-certs-173036
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (690.48s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (621.75s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-537236 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-537236 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (10m21.453229792s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-537236 -n no-preload-537236
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (621.75s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (6.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-909137 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-909137 --alsologtostderr -v=3: (6.308881118s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (6.31s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-909137 -n old-k8s-version-909137
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-909137 -n old-k8s-version-909137: exit status 7 (81.736677ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-909137 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (586.8s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-569210 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E0318 13:46:24.906057 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/functional-377562/client.crt: no such file or directory
E0318 13:49:30.297130 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/client.crt: no such file or directory
E0318 13:51:24.905625 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/functional-377562/client.crt: no such file or directory
E0318 13:52:33.350498 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/client.crt: no such file or directory
E0318 13:52:47.954963 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/functional-377562/client.crt: no such file or directory
E0318 13:54:30.297776 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-569210 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (9m46.502029694s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-569210 -n default-k8s-diff-port-569210
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (586.80s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (58.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-572909 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0318 14:09:13.350997 1114136 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18429-1106816/.minikube/profiles/addons-015389/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-572909 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (58.201328788s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (58.20s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-572909 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-572909 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.240279369s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (12.4s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-572909 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-572909 --alsologtostderr -v=3: (12.396370648s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (12.40s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.77s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-572909 -n newest-cni-572909
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-572909 -n newest-cni-572909: exit status 7 (90.025798ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-572909 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.77s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (39.53s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-572909 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-572909 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (39.193154099s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-572909 -n newest-cni-572909
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (39.53s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-572909 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)
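The image audit lists whatever the container runtime has pulled and reports anything outside the expected core Kubernetes/minikube image set; the kindest/kindnetd entry flagged here is a CNI image rather than part of that set, hence the "non-minikube" note. A manual equivalent of the listing step:

	$ minikube -p newest-cni-572909 image list --format=json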

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.06s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-572909 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-572909 -n newest-cni-572909
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-572909 -n newest-cni-572909: exit status 2 (289.960586ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-572909 -n newest-cni-572909
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-572909 -n newest-cni-572909: exit status 2 (296.164182ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-572909 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-572909 -n newest-cni-572909
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-572909 -n newest-cni-572909
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.06s)
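The Pause check drives the full cycle: pause the profile, confirm via status templates that the apiserver reports Paused and the kubelet reports Stopped (both returning exit status 2, which the harness again notes "may be ok"), then unpause and re-check. A condensed, hedged walkthrough of the same sequence:

	$ minikube pause -p newest-cni-572909
	$ minikube status -p newest-cni-572909 --format='{{.APIServer}}'   # Paused (exit status 2)
	$ minikube status -p newest-cni-572909 --format='{{.Kubelet}}'     # Stopped (exit status 2)
	$ minikube unpause -p newest-cni-572909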

                                                
                                    

Test skip (37/271)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.28.4/cached-images 0
15 TestDownloadOnly/v1.28.4/binaries 0
16 TestDownloadOnly/v1.28.4/kubectl 0
23 TestDownloadOnly/v1.29.0-rc.2/cached-images 0
24 TestDownloadOnly/v1.29.0-rc.2/binaries 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
56 TestDockerFlags 0
59 TestDockerEnvContainerd 0
61 TestHyperKitDriverInstallOrUpdate 0
62 TestHyperkitDriverSkipUpgrade 0
113 TestFunctional/parallel/DockerEnv 0
114 TestFunctional/parallel/PodmanEnv 0
133 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
134 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
135 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
136 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
137 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
138 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
139 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
141 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
162 TestGvisorAddon 0
184 TestImageBuild 0
211 TestKicCustomNetwork 0
212 TestKicExistingNetwork 0
213 TestKicCustomSubnet 0
214 TestKicStaticIP 0
246 TestChangeNoneUser 0
249 TestScheduledStopWindows 0
251 TestSkaffold 0
253 TestInsufficientStorage 0
257 TestMissingContainerUpgrade 0
274 TestStartStop/group/disable-driver-mounts 0.15
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-173866" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-173866
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    